Showing 1 - 10 of 26,998
for search: '"Dugan IN"'
Author:
Chapman, Spencer, Dugan, Eli B., Gaskari, Shadi, Lycan, Emi, De La Cruz, Sarah Mendoza, O'Neill, Christopher, Ponomarenko, Vadim
A factorization of an element $x$ in a monoid $(M, \cdot)$ is an expression of the form $x = u_1^{z_1} \cdots u_k^{z_k}$ for irreducible elements $u_1, \ldots, u_k \in M$, and the length of such a factorization is $z_1 + \cdots + z_k$. We introduce t…
External link:
http://arxiv.org/abs/2411.17010
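The definition in the entry above can be illustrated with a small worked example (the example is not from the paper itself): in the additive numerical monoid generated by 2 and 3, the irreducible elements are 2 and 3, and a single element can have factorizations of different lengths.

```latex
% Worked example (illustrative, assumed): the additive monoid
% M = \langle 2, 3 \rangle = \{0, 2, 3, 4, 5, \ldots\} under +,
% whose irreducible elements (atoms) are 2 and 3.
% The element 6 admits two factorizations with different lengths:
\[
  6 = 2 + 2 + 2 \qquad (\text{length } 3),
\]
\[
  6 = 3 + 3 \qquad\quad (\text{length } 2).
\]
```

In multiplicative notation this matches the abstract's form $x = u_1^{z_1} \cdots u_k^{z_k}$ with $u_1 = 2$, $z_1 = 3$ in the first factorization and $u_1 = 3$, $z_1 = 2$ in the second.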
The proliferation of inflammatory or misleading "fake" news content has become increasingly common in recent years. Simultaneously, it has become easier than ever to use AI tools to generate photorealistic images depicting any scene imaginable. Combi…
External link:
http://arxiv.org/abs/2410.09045
Author:
Dugan, William T.
The Chan-Robbins-Yuen polytope ($CRY_n$) of order $n$ is a face of the Birkhoff polytope of doubly stochastic matrices that is also a flow polytope of the directed complete graph $K_{n+1}$ with netflow $(1,0,0, \ldots , 0, -1)$. The volume and lattic…
External link:
http://arxiv.org/abs/2409.15519
Author:
Ashktorab, Zahra, Pan, Qian, Geyer, Werner, Desmond, Michael, Danilevsky, Marina, Johnson, James M., Dugan, Casey, Bachman, Michelle
In this paper, we investigate the impact of hallucinations and cognitive forcing functions in human-AI collaborative text generation tasks, focusing on the use of Large Language Models (LLMs) to assist in generating high-quality conversational data.
External link:
http://arxiv.org/abs/2409.08937
Recently, there has been increasing interest in using Large Language Models (LLMs) to construct complex multi-agent systems to perform tasks such as compiling literature reviews, drafting consumer reports, and planning vacations. Many tools and libra…
External link:
http://arxiv.org/abs/2408.02248
Author:
Dugan, Owen, Beneto, Donato Manuel Jimenez, Loh, Charlotte, Chen, Zhuo, Dangovski, Rumen, Soljačić, Marin
Despite significant advancements in text generation and reasoning, Large Language Models (LLMs) still face challenges in accurately performing complex arithmetic operations. Language model systems often enable LLMs to generate code for arithmetic ope…
External link:
http://arxiv.org/abs/2406.06576
We propose Quantum-informed Tensor Adaptation (QuanTA), a novel, easy-to-implement, fine-tuning method with no inference overhead for large-scale pre-trained language models. By leveraging quantum-inspired methods derived from quantum circuit structu…
External link:
http://arxiv.org/abs/2406.00132
Author:
Do, Hyo Jin, Ostrand, Rachel, Weisz, Justin D., Dugan, Casey, Sattigeri, Prasanna, Wei, Dennis, Murugesan, Keerthiram, Geyer, Werner
While humans increasingly rely on large language models (LLMs), these models are susceptible to generating inaccurate or false information, also known as "hallucinations". Technical advancements have been made in algorithms that detect hallucinated content b…
External link:
http://arxiv.org/abs/2405.20434
Author:
Dugan, Liam, Hwang, Alyssa, Trhlik, Filip, Ludan, Josh Magnus, Zhu, Andrew, Xu, Hainiu, Ippolito, Daphne, Callison-Burch, Chris
Many commercial and open-source models claim to detect machine-generated text with extremely high accuracy (99% or more). However, very few of these detectors are evaluated on shared benchmark datasets, and even when they are, the datasets used for ev…
External link:
http://arxiv.org/abs/2405.07940
Author:
Dugan, Hugh T.
Published in:
Horizons: Journal of International Relations and Sustainable Development, 2024 Oct 01(28), 82-93.
External link:
https://www.jstor.org/stable/48794580