Showing 1 - 10 of 37 for search: '"Hartvigsen, Thomas"'
Large language models (LLMs) are being applied to time series tasks, particularly time series forecasting. However, are language models actually useful for time series? After a series of ablation studies on three recent and popular LLM-based time series…
External link:
http://arxiv.org/abs/2406.16964
Author:
Gallifant, Jack, Chen, Shan, Moreira, Pedro, Munch, Nikolaj, Gao, Mingye, Pond, Jackson, Celi, Leo Anthony, Aerts, Hugo, Hartvigsen, Thomas, Bitterman, Danielle
Medical knowledge is context-dependent and requires consistent reasoning across various natural language expressions of semantically equivalent phrases. This is particularly crucial for drug names, where patients often use brand names like Advil or T…
External link:
http://arxiv.org/abs/2406.12066
Author:
Sun, Shenghuan, Goldgof, Gregory M., Schubert, Alexander, Sun, Zhiqing, Hartvigsen, Thomas, Butte, Atul J., Alaa, Ahmed
Vision-Language Models (VLMs) can support clinicians by analyzing medical images and engaging in natural language interactions to assist in diagnostic and treatment tasks. However, VLMs often exhibit "hallucinogenic" behavior, generating textual outputs…
External link:
http://arxiv.org/abs/2405.19567
Author:
Jain, Devansh, Kumar, Priyanshu, Gehman, Samuel, Zhou, Xuhui, Hartvigsen, Thomas, Sap, Maarten
Recent advances in large language models (LLMs) have led to their extensive global deployment, and ensuring their safety calls for comprehensive and multilingual toxicity evaluations. However, existing toxicity benchmarks are overwhelmingly focused on…
External link:
http://arxiv.org/abs/2405.09373
Humans rarely learn one fact in isolation. Instead, learning a new fact induces knowledge of other facts about the world. For example, in learning that a korat is a type of cat, you also infer it is a mammal and has claws, ensuring your model of the world…
External link:
http://arxiv.org/abs/2404.15004
Author:
Gao, Shanghua, Koker, Teddy, Queen, Owen, Hartvigsen, Thomas, Tsiligkaridis, Theodoros, Zitnik, Marinka
Advances in time series models are driving a shift from conventional deep learning methods to pre-trained foundational models. While pre-trained transformers and reprogrammed text-based LLMs report state-of-the-art results, the best-performing architectures…
External link:
http://arxiv.org/abs/2403.00131
Math word problems are critical K-8 educational tools, but writing them is time-consuming and requires domain expertise. We suggest that language models can support K-8 math education by automatically generating problems. To be educational, generated…
External link:
http://arxiv.org/abs/2402.15861
Author:
O'Brien, Kyle, Ng, Nathan, Puri, Isha, Mendez, Jorge, Palangi, Hamid, Kim, Yoon, Ghassemi, Marzyeh, Hartvigsen, Thomas
Machine learning models often excel on in-distribution (ID) data but struggle with unseen out-of-distribution (OOD) inputs. Most techniques for improving OOD robustness are not applicable to settings where the model is effectively a black box, such as…
External link:
http://arxiv.org/abs/2402.08225
Author:
Nagaraj, Sujay, Gerych, Walter, Tonekaboni, Sana, Goldenberg, Anna, Ustun, Berk, Hartvigsen, Thomas
Many sequential classification tasks are affected by label noise that varies over time. Such noise can cause label quality to improve, worsen, or periodically change over time. We first propose and formalize temporal label noise, an unstudied problem…
External link:
http://arxiv.org/abs/2402.04398
Author:
Hegselmann, Stefan, Parziale, Antonio, Shanmugam, Divya, Tang, Shengpu, Asiedu, Mercy Nyamewaa, Chang, Serina, Hartvigsen, Thomas, Singh, Harvineet
A collection of the accepted Findings papers that were presented at the 3rd Machine Learning for Health symposium (ML4H 2023), which was held on December 10, 2023, in New Orleans, Louisiana, USA. ML4H 2023 invited high-quality submissions on relevant…
External link:
http://arxiv.org/abs/2312.00655