Showing 1 - 10 of 162 for search: '"Boggust A"'
Author:
Boggust, Angie, Sivaraman, Venkatesh, Assogba, Yannick, Ren, Donghao, Moritz, Dominik, Hohman, Fred
To deploy machine learning models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. A critical aspect of compression in practice is model comparison, including tracking many …
External link:
http://arxiv.org/abs/2408.03274
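To make the compression setting concrete, here is a minimal sketch of one such compression step, symmetric post-training int8 quantization of a single weight tensor, in Python with NumPy; the sizes and function names are illustrative and not taken from the paper:

    import numpy as np

    def quantize_int8(w):
        # Map the largest-magnitude weight to +/-127 and round the rest,
        # storing int8 values (4x smaller than float32) plus one scale.
        scale = np.abs(w).max() / 127.0
        q = np.round(w / scale).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(64, 64).astype(np.float32)   # toy weight matrix
    q, s = quantize_int8(w)
    error = np.abs(w - dequantize(q, s)).max()       # quality loss to track
    print(q.nbytes, w.nbytes, error)

Comparing that reconstruction error across many such experiments is the kind of bookkeeping the abstract describes.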
Abstraction -- the process of generalizing specific examples into broad reusable patterns -- is central to how people efficiently process and store information and apply their knowledge to new data. Promisingly, research has shown that ML models learn …
External link:
http://arxiv.org/abs/2407.12543
Vision Transformers (ViTs), with their ability to model long-range dependencies through self-attention mechanisms, have become a standard architecture in computer vision. However, the interpretability of these models remains a challenge. To address this …
External link:
http://arxiv.org/abs/2404.03214
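For readers unfamiliar with the mechanism the abstract names, a self-contained sketch of single-head scaled dot-product self-attention follows; the patch count and embedding size are hypothetical, and it omits the multi-head projections and residual structure of a real ViT block:

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # Project tokens to queries, keys, and values.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(Q.shape[-1])   # pairwise token affinities
        scores -= scores.max(axis=-1, keepdims=True)
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V    # each token mixes information from all tokens

    rng = np.random.default_rng(0)
    X = rng.standard_normal((4, 8))    # 4 patch embeddings of dimension 8
    Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8)

The all-pairs attention weights are what give ViTs their long-range dependencies, and also what interpretability work like this inspects.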
Generative text-to-image (TTI) models produce high-quality images from short textual descriptions and are widely used in academic and creative domains. Like humans, TTI models have a worldview, a conception of the world learned from their training data …
External link:
http://arxiv.org/abs/2309.09944
Captions that describe or explain charts help improve recall and comprehension of the depicted data and provide a more accessible medium for people with visual disabilities. However, current approaches for automatically generating such captions struggle …
External link:
http://arxiv.org/abs/2307.05356
Impact of COVID-19 on emergency medical services utilization and severity in the U.S. Upper Midwest.
Author:
Shalom, Moshe, Boggust, Brett, Rogerson IV, M. Carson, Myers, Lucas A., Huang, Shuo J., McCoy, Rozalina G. (Rozalina.McCoy@som.umaryland.edu)
Published in:
PLoS ONE, 10/01/2024, Vol. 19, Issue 10, p. 1-15.
Saliency methods are a common class of machine learning interpretability techniques that calculate how important each input feature is to a model's output. We find that, with the rapid pace of development, users struggle to stay informed of the strengths …
External link:
http://arxiv.org/abs/2206.02958
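As a concrete instance of the class of methods the abstract defines, here is a minimal gradient-saliency sketch in PyTorch; the two-layer network and feature count are stand-ins, and real saliency methods add many refinements on top of this raw gradient:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))

    x = torch.randn(1, 10, requires_grad=True)  # one input with 10 features
    score = model(x)[0].max()    # score of the highest-scoring class
    score.backward()             # gradient of that score w.r.t. the input

    saliency = x.grad.abs().squeeze()   # per-feature importance estimate
    print(saliency)

Each entry estimates how sensitive the model's top score is to that input feature, which is exactly the per-feature importance the abstract refers to.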
Author:
Rouditchenko, Andrew, Boggust, Angie, Harwath, David, Thomas, Samuel, Kuehne, Hilde, Chen, Brian, Panda, Rameswar, Feris, Rogerio, Kingsbury, Brian, Picheny, Michael, Glass, James
In this paper, we explore self-supervised audio-visual models that learn from instructional videos. Prior work has shown that these models can relate spoken words and sounds to visual content after training on a large-scale dataset of videos, but the …
External link:
http://arxiv.org/abs/2111.04823
Author:
Rode, Matthew M, Boggust, Brett A, Manggaard, Jennifer M, Myers, Lucas A, Swanson, Kristi M, McCoy, Rozalina G
Published in:
Diabetes Research and Clinical Practice, July 2024, Vol. 213.
Saliency methods -- techniques to identify the importance of input features on a model's output -- are a common step in understanding neural network behavior. However, interpreting saliency requires tedious manual inspection to identify and aggregate …
External link:
http://arxiv.org/abs/2107.09234
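One simple way to automate the aggregation the abstract calls tedious is to threshold the saliency map and measure its overlap (IoU) with a human-annotated region. This is a sketch under assumed inputs, not the paper's method; the 0.5 threshold is an arbitrary illustrative choice:

    import numpy as np

    def saliency_iou(saliency, annotation, threshold=0.5):
        # Keep pixels scoring at least `threshold` times the maximum
        # saliency, then compare that set to the human-marked region.
        salient = saliency >= threshold * saliency.max()
        inter = np.logical_and(salient, annotation).sum()
        union = np.logical_or(salient, annotation).sum()
        return inter / union if union else 0.0

    saliency = np.random.rand(28, 28)            # toy saliency map
    annotation = np.zeros((28, 28), dtype=bool)
    annotation[10:20, 10:20] = True              # toy annotated region
    print(saliency_iou(saliency, annotation))

Scores near 1 suggest the model and the annotator rely on the same evidence; scores near 0 flag examples worth manual inspection.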