Showing 1 - 10 of 36 for search: '"Dustin Arendt"'
Author:
Prasha Shrestha, Arun Sathanur, Suraj Maharjan, Emily Saldanha, Dustin Arendt, Svitlana Volkova
Published in:
PLoS ONE, Vol 15, Iss 3, p e0230250 (2020)
Awareness of software vulnerabilities is crucial to effective cybersecurity practices, the development of high-quality software, and, ultimately, national security. This awareness can be better understood by studying the spread, structu…
External link:
https://doaj.org/article/3fb2e2d222ee4d10bdeaccfa51eccea4
Author:
Maria Glenski, Ellyn Ayton, Sannisth Soni, Emily Saldanha, Dustin Arendt, Brian Quiter, Ren Cooper, Svitlana Volkova
Published in:
IEEE Transactions on Nuclear Science. 69:1375-1384
Author:
Sinan Aksoy, Brett Jefferson, Ellyn Ayton, Svitlana Volkova, Dustin Arendt, Karthnik Shrivaram, Joseph Cottam, Emily Saldanha, Maria Glenski
Published in:
Computational and Mathematical Organization Theory. 29:220-241
The Ground Truth program was designed to evaluate social science modeling approaches using simulation test beds with ground truth intentionally and systematically embedded, in order to understand and model complex Human Domain systems and their dynamics Lazer et a…
Author:
Svitlana Volkova, ZhuanYi Shaw, Alex Endert, Emily Saldanha, Maria Glenski, Grace Guo, Dustin Arendt
Published in:
2021 IEEE Visualization Conference (VIS).
Natural experiments are observational studies where the assignment of treatment conditions to different populations occurs by chance "in the wild". Researchers from fields such as economics, healthcare, and the social sciences leverage natural experi…
Published in:
Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances.
Current methods for evaluation of natural language generation models focus on measuring text quality but fail to probe the model's creativity, i.e., its ability to generate novel but coherent text sequences not seen in the training corpus. We present t…
Published in:
SocialNLP@NAACL
With the increasing use of machine-learning driven algorithmic judgements, it is critical to develop models that are robust to evolving or manipulated inputs. We propose an extensive analysis of model robustness against linguistic variation in the se…
Published in:
Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing.
Published in:
EACL
We evaluate neural model robustness to adversarial attacks using different types of linguistic unit perturbations – character and word – and propose a new method for strategic sentence-level perturbations. We experiment with different amounts of per…
Published in:
2020 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX).
Research into the explanation of machine learning models, i.e., explainable AI (XAI), has seen a commensurate exponential growth alongside deep artificial neural networks throughout the past decade. For historical reasons, explanation and trust have…
Evaluation beyond aggregate performance metrics, e.g. F1-score, is crucial to both establish an appropriate level of trust in machine learning models and identify future model improvements. In this paper we demonstrate CrossCheck, an interactive visu…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7eb63fc8d4aba84750f33d65bd5fdb13
http://arxiv.org/abs/2004.07993