Showing 1 - 10 of 130 for search: '"Hou Yufang"'
Author:
Mondal, Ishani, Li, Zongxia, Hou, Yufang, Natarajan, Anandhavelu, Garimella, Aparna, Boyd-Graber, Jordan
Published in:
Empirical Methods in Natural Language Processing 2024
Automating the creation of scientific diagrams from academic papers can significantly streamline the development of tutorials, presentations, and posters, thereby saving time and accelerating the process. Current text-to-image models struggle with ge…
External link:
http://arxiv.org/abs/2409.19242
Large Language Models (LLMs) have ushered in a transformative era in Natural Language Processing (NLP), reshaping research and extending NLP's influence to other fields of study. However, there is little to no work examining the degree to which LLMs…
External link:
http://arxiv.org/abs/2409.19508
Natural Language Processing (NLP) is a dynamic, interdisciplinary field that integrates intellectual traditions from computer science, linguistics, social science, and more. Despite its established presence, the definition of what constitutes NLP res…
External link:
http://arxiv.org/abs/2409.19505
Scientific leaderboards are standardized ranking systems that facilitate evaluating and comparing competitive methods. Typically, a leaderboard is defined by a task, dataset, and evaluation metric (TDM) triple, allowing objective performance assessme…
External link:
http://arxiv.org/abs/2409.12656
Health-related misinformation claims often falsely cite a credible biomedical publication as evidence that superficially appears to support the false claim. The publication does not actually support the claim, but a reader could believe it does thanks to…
External link:
http://arxiv.org/abs/2408.12812
Author:
Hou, Yufang, Tran, Thy Thy, Vu, Doan Nam Long, Cao, Yiwen, Li, Kai, Rohde, Lukas, Gurevych, Iryna
This paper presents a shared task that we organized at the Foundations of Language Technology (FoLT) course in 2023/2024 at the Technical University of Darmstadt, which focuses on evaluating the output of Large Language Models (LLMs) in generating ha…
External link:
http://arxiv.org/abs/2408.00122
Large language models (LLMs) bring unprecedented flexibility in defining and executing complex, creative natural language generation (NLG) tasks. Yet this flexibility brings new challenges, as it introduces new degrees of freedom in formulating the…
External link:
http://arxiv.org/abs/2407.04046
Author:
Hou, Yufang, Pascale, Alessandra, Carnerero-Cano, Javier, Tchrakian, Tigran, Marinescu, Radu, Daly, Elizabeth, Padhi, Inkit, Sattigeri, Prasanna
Retrieval-augmented generation (RAG) has emerged as a promising solution to mitigate the limitations of large language models (LLMs), such as hallucinations and outdated information. However, it remains unclear how LLMs handle knowledge conflicts ari…
External link:
http://arxiv.org/abs/2406.13805
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers. Such misinformation often misrepresents scientific publications and cites them as "proof" to gain perceived credibility. To effectively counter…
External link:
http://arxiv.org/abs/2406.03181
We introduce Holmes, a new benchmark designed to assess language models' (LMs') linguistic competence - their unconscious understanding of linguistic phenomena. Specifically, we use classifier-based probing to examine LMs' internal representations rega…
External link:
http://arxiv.org/abs/2404.18923