Showing 1 - 10 of 755 for search: '"Adaku, A."'
Author:
Artemova, Ekaterina, Lucas, Jason, Venkatraman, Saranya, Lee, Jooyoung, Tilga, Sergei, Uchendu, Adaku, Mikhailov, Vladislav
The rapid proliferation of large language models (LLMs) has increased the volume of machine-generated texts (MGTs) and blurred text authorship in various domains. However, most existing MGT benchmarks include single-author texts (human-written and ma
External link:
http://arxiv.org/abs/2411.04032
Recent literature has highlighted potential risks to academic integrity associated with large language models (LLMs), as they can memorize parts of training instances and reproduce them in the generated texts without proper attribution. In addition,
External link:
http://arxiv.org/abs/2406.16288
Author:
Macko, Dominik, Moro, Robert, Uchendu, Adaku, Srba, Ivan, Lucas, Jason Samuel, Yamashita, Michiharu, Tripto, Nafis Irtiza, Lee, Dongwon, Simko, Jakub, Bielikova, Maria
The high-quality text generation capability of recent Large Language Models (LLMs) raises concerns about their misuse (e.g., in massive generation/spread of disinformation). Machine-generated text (MGT) detection is important to cope with such threats. H
External link:
http://arxiv.org/abs/2401.07867
Author:
Tripto, Nafis Irtiza, Venkatraman, Saranya, Macko, Dominik, Moro, Robert, Srba, Ivan, Uchendu, Adaku, Le, Thai, Lee, Dongwon
In the realm of text manipulation and linguistic transformation, the question of authorship has been a subject of fascination and philosophical inquiry. Much like the Ship of Theseus paradox, which ponders whether a ship remains the same when each of
External link:
http://arxiv.org/abs/2311.08374
Author:
Tripto, Nafis Irtiza, Uchendu, Adaku, Le, Thai, Setzu, Mattia, Giannotti, Fosca, Lee, Dongwon
Authorship Analysis, also known as stylometry, has been an essential aspect of Natural Language Processing (NLP) for a long time. Likewise, the recent advancement of Large Language Models (LLMs) has made authorship analysis increasingly crucial for d
External link:
http://arxiv.org/abs/2310.16746
Author:
Lucas, Jason, Uchendu, Adaku, Yamashita, Michiharu, Lee, Jooyoung, Rohatgi, Shaurya, Lee, Dongwon
The recent ubiquity and disruptive impacts of large language models (LLMs) have raised concerns about their potential to be misused (i.e., generating large-scale harmful and misleading content). To combat this emerging risk of LLMs, we propose a novel "F
External link:
http://arxiv.org/abs/2310.15515
Author:
Macko, Dominik, Moro, Robert, Uchendu, Adaku, Lucas, Jason Samuel, Yamashita, Michiharu, Pikuliak, Matúš, Srba, Ivan, Le, Thai, Lee, Dongwon, Simko, Jakub, Bielikova, Maria
Published in:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
There is a lack of research into the capabilities of recent LLMs to generate convincing text in languages other than English, and into the performance of detectors of machine-generated text in multilingual settings. This is also reflected in the available ben
External link:
http://arxiv.org/abs/2310.13606
The Uniform Information Density (UID) principle posits that humans prefer to spread information evenly during language production. We examine whether this UID principle can help capture differences between texts generated by Large Language Models (LLMs) and human-
External link:
http://arxiv.org/abs/2310.06202
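The record above (arXiv:2310.06202) studies the Uniform Information Density principle as a signal for distinguishing LLM-generated from human-written text. The short Python sketch below is only a toy illustration of one common way to operationalize UID, namely the variance of per-token surprisal; it assumes an add-alpha smoothed unigram model as a stand-in for a real language model, and the names unigram_surprisals and uid_variance are hypothetical, not taken from the paper.

import math
from collections import Counter

def unigram_surprisals(text, reference_tokens, alpha=1.0):
    # Per-token surprisal -log2 p(token) under an add-alpha smoothed unigram
    # model estimated from reference_tokens (a toy stand-in for a real LM).
    counts = Counter(reference_tokens)
    vocab_size = len(counts) + 1          # +1 slot reserved for unseen tokens
    total = sum(counts.values())
    surprisals = []
    for tok in text.split():
        p = (counts.get(tok, 0) + alpha) / (total + alpha * vocab_size)
        surprisals.append(-math.log2(p))
    return surprisals

def uid_variance(surprisals):
    # One common UID operationalization: variance of per-token surprisal.
    # Lower variance means information is spread more evenly.
    mean = sum(surprisals) / len(surprisals)
    return sum((s - mean) ** 2 for s in surprisals) / len(surprisals)

reference = "the cat sat on the mat and the dog sat on the rug".split()
for label, text in [("even", "the cat sat on the mat"),
                    ("spiky", "the cat sat on zeppelin")]:
    print(label, round(uid_variance(unigram_surprisals(text, reference)), 3))

In practice the unigram model would be replaced by token log-probabilities from a neural language model; comparing the resulting surprisal-variance distributions of human-written and machine-generated corpora is the kind of UID-style signal the abstract alludes to.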
Recent advances in Large Language Models (LLMs) have enabled the generation of open-ended, high-quality texts that are non-trivial to distinguish from human-written texts. We refer to such LLM-generated texts as deepfake texts. There are currently ov
External link:
http://arxiv.org/abs/2309.12934
Advances in Large Language Models (e.g., GPT-4, LLaMA) have improved the generation of coherent sentences resembling human writing on a large scale, resulting in the creation of so-called deepfake texts. However, this progress poses security and priv
External link:
http://arxiv.org/abs/2304.01002