Showing 1 - 10 of 30,922 results for search: '"A. DODGE"'
Author:
Mu, Xiaofan, Seyedi, Salman, Zheng, Iris, Jiang, Zifan, Chen, Liu, Omofojoye, Bolaji, Hershenberg, Rachel, Levey, Allan I., Clifford, Gari D., Dodge, Hiroko H., Kwon, Hyeokhyen
The aging society urgently requires scalable methods to monitor cognitive decline and identify social and psychological factors indicative of dementia risk in older adults. Our machine learning (ML) models captured facial, acoustic, linguistic, and c…
External link:
http://arxiv.org/abs/2412.14194
Author:
Bhagia, Akshita, Liu, Jiacheng, Wettig, Alexander, Heineman, David, Tafjord, Oyvind, Jha, Ananya Harsh, Soldaini, Luca, Smith, Noah A., Groeneveld, Dirk, Koh, Pang Wei, Dodge, Jesse, Hajishirzi, Hannaneh
We develop task scaling laws and model ladders to predict the individual task performance of pretrained language models (LMs) in the overtrained setting. Standard power laws for language modeling loss cannot accurately model task performance. Therefore…
External link:
http://arxiv.org/abs/2412.04403
We analyze the optical pump-probe reflection and transmission coefficients when the photoinduced response depends nonlinearly on the incident pump intensity. Under these conditions, we expect the photoconductivity depth profile to change shape as a f…
External link:
http://arxiv.org/abs/2410.21496
Author:
Na, Clara, Magnusson, Ian, Jha, Ananya Harsh, Sherborne, Tom, Strubell, Emma, Dodge, Jesse, Dasigi, Pradeep
Training data compositions for Large Language Models (LLMs) can significantly affect their downstream performance. However, a thorough data ablation study exploring large sets of candidate data mixtures is typically prohibitively expensive since the…
External link:
http://arxiv.org/abs/2410.15661
Author:
Morrison, Jacob, Smith, Noah A., Hajishirzi, Hannaneh, Koh, Pang Wei, Dodge, Jesse, Dasigi, Pradeep
Adapting general-purpose language models to new skills is currently an expensive process that must be repeated as new instruction datasets targeting new skills are created, or can cause the models to forget older skills. In this work, we investigate…
External link:
http://arxiv.org/abs/2410.12937
Author:
Zhang, Haotian, Gao, Mingfei, Gan, Zhe, Dufter, Philipp, Wenzel, Nina, Huang, Forrest, Shah, Dhruti, Du, Xianzhi, Zhang, Bowen, Li, Yanghao, Dodge, Sam, You, Keen, Yang, Zhen, Timofeev, Aleksei, Xu, Mingze, Chen, Hong-You, Fauconnier, Jean-Philippe, Lai, Zhengfeng, You, Haoxuan, Wang, Zirui, Dehghan, Afshin, Grasch, Peter, Yang, Yinfei
We present MM1.5, a new family of multimodal large language models (MLLMs) designed to enhance capabilities in text-rich image understanding, visual referring and grounding, and multi-image reasoning. Building upon the MM1 architecture, MM1.5 adopts…
External link:
http://arxiv.org/abs/2409.20566
Author:
Koujalgi, Sujay, Anderson, Andrew, Adenuga, Iyadunni, Soneji, Shikha, Dikkala, Rupika, Nader, Teresita Guzman, Soccio, Leo, Panda, Sourav, Das, Rupak Kumar, Burnett, Margaret, Dodge, Jonathan
Assessing an AI system's behavior, particularly in Explainable AI Systems, is sometimes done empirically, by measuring people's abilities to predict the agent's next move, but how to perform such measurements? In empirical studies with humans, an obvious…
External link:
http://arxiv.org/abs/2409.00069
Progress in AI is often demonstrated by new models claiming improved performance on tasks measuring model capabilities. Evaluating language models in particular is challenging, as small changes to how a model is evaluated on a task can lead to large…
External link:
http://arxiv.org/abs/2406.08446
Author:
Wang, Jiankun, Ahn, Sumyeong, Dalal, Taykhoom, Zhang, Xiaodan, Pan, Weishen, Zhang, Qiannan, Chen, Bin, Dodge, Hiroko H., Wang, Fei, Zhou, Jiayu
Alzheimer's disease (AD) is the fifth-leading cause of death among Americans aged 65 and older. Screening and early detection of AD and related dementias (ADRD) are critical for timely intervention and for identifying clinical trial participants. The…
External link:
http://arxiv.org/abs/2405.16413
Author:
Hamid, Md Montaser, Moussaoui, Fatima, Guevara, Jimena Noa, Anderson, Andrew, Agarwal, Puja, Dodge, Jonathan, Burnett, Margaret
Motivations: Explainable Artificial Intelligence (XAI) systems aim to improve users' understanding of AI, but XAI research shows many cases of different explanations serving some users well and being unhelpful to others. In non-AI systems, some software…
External link:
http://arxiv.org/abs/2404.13217