Showing 1 - 10 of 16 for the search: '"Veselovsky, Veniamin"'
Author:
Davidson, Tim R., Surkov, Viacheslav, Veselovsky, Veniamin, Russo, Giuseppe, West, Robert, Gulcehre, Caglar
A rapidly growing number of applications rely on a small set of closed-source language models (LMs). This dependency might introduce novel security risks if LMs develop self-recognition capabilities. Inspired by human identity verification methods, …
External link:
http://arxiv.org/abs/2407.06946
Author:
Latona, Giuseppe Russo, Ribeiro, Manoel Horta, Davidson, Tim R., Veselovsky, Veniamin, West, Robert
Journals and conferences worry that peer reviews assisted by artificial intelligence (AI), in particular, large language models (LLMs), may negatively influence the validity and fairness of the peer-review system, a cornerstone of modern science. …
External link:
http://arxiv.org/abs/2405.02150
We ask whether multilingual language models trained on unbalanced, English-dominated corpora use English as an internal pivot language, a question of key importance for understanding how language models function and the origins of linguistic bias. …
External link:
http://arxiv.org/abs/2402.10588
Author:
Davidson, Tim R., Veselovsky, Veniamin, Josifoski, Martin, Peyrard, Maxime, Bosselut, Antoine, Kosinski, Michal, West, Robert
We introduce an approach to evaluate language model (LM) agency using negotiation games. This approach better reflects real-world use cases and addresses some of the shortcomings of alternative LM benchmarks. Negotiation games enable us to study …
External link:
http://arxiv.org/abs/2401.04536
Societal change is often driven by shifts in public opinion. As citizens evolve in their norms, beliefs, and values, public policies change too. While traditional opinion polling and surveys can outline the broad strokes of whether public opinion on …
External link:
http://arxiv.org/abs/2312.09611
Author:
Veselovsky, Veniamin, Ribeiro, Manoel Horta, Cozzolino, Philip, Gordon, Andrew, Rothschild, David, West, Robert
We show that the use of large language models (LLMs) is prevalent among crowd workers, and that targeted mitigation strategies can significantly reduce, but not eliminate, LLM use. On a text summarization task where workers were not directed in any …
External link:
http://arxiv.org/abs/2310.15683
Research using YouTube data often explores social and semantic dimensions of channels and videos. Typically, analyses rely on laborious manual annotation of content and content creators, often found by low-recall methods such as keyword search. …
External link:
http://arxiv.org/abs/2306.17298
Large language models (LLMs) are remarkable data annotators. They can be used to generate high-fidelity supervised training data, as well as survey and experimental data. With the widespread adoption of LLMs, human gold-standard annotations are key …
External link:
http://arxiv.org/abs/2306.07899
Author:
Veselovsky, Veniamin, Ribeiro, Manoel Horta, Arora, Akhil, Josifoski, Martin, Anderson, Ashton, West, Robert
Large Language Models (LLMs) have democratized synthetic data generation, which in turn has the potential to simplify and broaden a wide gamut of NLP tasks. Here, we tackle a pervasive problem in synthetic data generation: its generative distribution …
External link:
http://arxiv.org/abs/2305.15041
Author:
Veselovsky, Veniamin, Anderson, Ashton
When the COVID-19 pandemic hit, much of life moved online. Platforms of all types reported surges of activity, and people remarked on the various important functions that online platforms suddenly fulfilled. However, researchers lack a rigorous …
External link:
http://arxiv.org/abs/2304.10777