Showing 1 - 10 of 6,589 for search: '"Godbout A"'
Author:
Touzel, Maximilian Puelma, Sarangi, Sneheel, Welch, Austin, Krishnakumar, Gayatri, Zhao, Dan, Yang, Zachary, Yu, Hao, Kosak-Hine, Ethan, Gibbs, Tom, Musulan, Andreea, Thibault, Camille, Gurbuz, Busra Tugce, Rabbany, Reihaneh, Godbout, Jean-François, Pelrine, Kellin
The rise of AI-driven manipulation poses significant risks to societal trust and democratic processes. Yet, studying these effects in real-world settings at scale is ethically and logistically impractical, highlighting a need for simulation tools that…
External link:
http://arxiv.org/abs/2410.13915
Hallucination has been a popular topic in natural language generation (NLG). In real-world applications, unfaithful content can result in bad data quality or loss of trust from end users. Thus, it is crucial to fact-check before adopting NLG for production…
External link:
http://arxiv.org/abs/2410.12222
Author:
Tian, Jacob-Junqi, Yu, Hao, Orlovskiy, Yury, Vergho, Tyler, Rivera, Mauricio, Goel, Mayank, Yang, Zachary, Godbout, Jean-Francois, Rabbany, Reihaneh, Pelrine, Kellin
This paper develops an agent-based automated fact-checking approach for detecting misinformation. We demonstrate that combining a powerful LLM agent, which does not have access to the internet for searches, with an online web search agent yields better…
External link:
http://arxiv.org/abs/2409.00009
Author:
Yang, Zachary, Imouza, Anne, Touzel, Maximilian Puelma, Amadoro, Cecile, Desrosiers-Brisebois, Gabrielle, Pelrine, Kellin, Levy, Sacha, Godbout, Jean-Francois, Rabbany, Reihaneh
Public health measures were among the most polarizing topics debated online during the COVID-19 pandemic. Much of the discussion surrounded specific events, such as when and which particular interventions came into practice. In this work, we develop…
External link:
http://arxiv.org/abs/2407.02807
Large Language Models have emerged as prime candidates to tackle misinformation mitigation. However, existing approaches struggle with hallucinations and overconfident predictions. We propose an uncertainty quantification framework that leverages both…
External link:
http://arxiv.org/abs/2401.08694
Author:
Orlovskiy, Yury, Thibault, Camille, Imouza, Anne, Godbout, Jean-François, Rabbany, Reihaneh, Pelrine, Kellin
Misinformation poses a variety of risks, such as undermining public trust and distorting factual discourse. Large Language Models (LLMs) like GPT-4 have been shown effective in mitigating misinformation, particularly in handling statements where enough…
External link:
http://arxiv.org/abs/2401.01197
Real-time toxicity detection in online environments poses a significant challenge, due to the increasing prevalence of social media and gaming platforms. We introduce ToxBuster, a simple and scalable model that reliably detects toxic content in real-time…
External link:
http://arxiv.org/abs/2310.18330
Author:
Pelrine, Kellin, Imouza, Anne, Yang, Zachary, Tian, Jacob-Junqi, Lévy, Sacha, Desrosiers-Brisebois, Gabrielle, Feizi, Aarash, Amadoro, Cécile, Blais, André, Godbout, Jean-François, Rabbany, Reihaneh
A large number of studies on social media compare the behaviour of users from different political parties. As a basic step, they employ a predictive model for inferring their political affiliation. The accuracy of this model can change the conclusion…
External link:
http://arxiv.org/abs/2308.13699
Recent advancements in large language models have demonstrated remarkable capabilities across various NLP tasks. But many questions remain, including whether open-source models match closed ones, why these models excel or struggle with certain tasks…
External link:
http://arxiv.org/abs/2308.10092
Published in:
Communications Biology, Vol. 7, Iss. 1, pp. 1-19 (2024)
Chronic stress is associated with anxiety and cognitive impairment. Repeated social defeat (RSD) in mice induces anxiety-like behavior driven by microglia and the recruitment of inflammatory monocytes to the brain. Nonetheless, it is unclear…
External link:
https://doaj.org/article/fd422a43e25548ae9d99f01aca64e490