Showing 1 - 9 of 9 for search: '"Hallinan, Skyler"'
StyleRemix: Interpretable Authorship Obfuscation via Distillation and Perturbation of Style Elements
Author:
Fisher, Jillian, Hallinan, Skyler, Lu, Ximing, Gordon, Mitchell, Harchaoui, Zaid, Choi, Yejin
Authorship obfuscation, rewriting a text to intentionally obscure the identity of the author, is an important but challenging task. Current methods using large language models (LLMs) lack interpretability and controllability, often ignoring author-specific …
External link:
http://arxiv.org/abs/2408.15666
While text style transfer has many applications across natural language processing, the core premise of transferring from a single source style is unrealistic in a real-world setting. In this work, we focus on arbitrary style transfer: rewriting a text …
External link:
http://arxiv.org/abs/2311.07167
Author:
Ramnath, Sahana, Joshi, Brihi, Hallinan, Skyler, Lu, Ximing, Li, Liunian Harold, Chan, Aaron, Hessel, Jack, Choi, Yejin, Ren, Xiang
Published in:
The Twelfth International Conference on Learning Representations, 2024
Large language models (LMs) are capable of generating free-text rationales to aid question answering. However, prior work 1) suggests that useful self-rationalization is emergent only at significant scales (e.g., 175B parameter GPT-3); and 2) focuses …
External link:
http://arxiv.org/abs/2311.02805
Author:
Lu, Ximing, Brahman, Faeze, West, Peter, Jang, Jaehun, Chandu, Khyathi, Ravichander, Abhilasha, Qin, Lianhui, Ammanabrolu, Prithviraj, Jiang, Liwei, Ramnath, Sahana, Dziri, Nouha, Fisher, Jillian, Lin, Bill Yuchen, Hallinan, Skyler, Ren, Xiang, Welleck, Sean, Choi, Yejin
While extreme-scale language models have demonstrated exceptional performance on a variety of language tasks, the degree of control over these language models through pure prompting can often be limited. Directly fine-tuning such language models can …
External link:
http://arxiv.org/abs/2305.15065
Author:
Madaan, Aman, Tandon, Niket, Gupta, Prakhar, Hallinan, Skyler, Gao, Luyu, Wiegreffe, Sarah, Alon, Uri, Dziri, Nouha, Prabhumoye, Shrimai, Yang, Yiming, Gupta, Shashank, Majumder, Bodhisattwa Prasad, Hermann, Katherine, Welleck, Sean, Yazdanbakhsh, Amir, Clark, Peter
Like humans, large language models (LLMs) do not always generate the best output on their first try. Motivated by how humans refine their written text, we introduce Self-Refine, an approach for improving initial outputs from LLMs through iterative feedback …
External link:
http://arxiv.org/abs/2303.17651
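A minimal sketch of the kind of iterative feedback loop this abstract describes, written as a generic illustration rather than the authors' Self-Refine implementation; the generate, critique, and revise helpers are hypothetical placeholders for LLM API calls:

```python
# Illustrative only: a generic generate -> critique -> revise loop.
# The three helpers below are hypothetical stand-ins, not part of Self-Refine.

def generate(prompt: str) -> str:
    """Produce an initial LLM draft for the prompt (placeholder)."""
    raise NotImplementedError

def critique(prompt: str, draft: str) -> str:
    """Ask the same LLM for feedback on its own draft (placeholder)."""
    raise NotImplementedError

def revise(prompt: str, draft: str, feedback: str) -> str:
    """Rewrite the draft using the feedback (placeholder)."""
    raise NotImplementedError

def refine(prompt: str, max_rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(prompt, draft)
        if "no changes needed" in feedback.lower():  # simple stopping heuristic
            break
        draft = revise(prompt, draft, feedback)
    return draft
```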
Text detoxification has the potential to mitigate the harms of toxicity by rephrasing text to remove offensive meaning, but subtle toxicity remains challenging to tackle. We introduce MaRCo, a detoxification algorithm that combines controllable generation …
External link:
http://arxiv.org/abs/2212.10543
Author:
Liu, Jiacheng, Hallinan, Skyler, Lu, Ximing, He, Pengfei, Welleck, Sean, Hajishirzi, Hannaneh, Choi, Yejin
Knowledge underpins reasoning. Recent research demonstrates that when relevant knowledge is provided as additional context to commonsense question answering (QA), it can substantially enhance the performance even on top of state-of-the-art. The fundamental …
External link:
http://arxiv.org/abs/2210.03078
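For illustration only, a hedged sketch of what "knowledge provided as additional context" to a QA model can look like at the prompt level; this is a generic pattern, not the paper's pipeline, and the question and knowledge statement below are made-up examples:

```python
# Illustrative only: prepend knowledge statements to a commonsense QA prompt.
def build_knowledge_prompt(question: str, knowledge: list[str]) -> str:
    context = "\n".join(f"Knowledge: {k}" for k in knowledge)
    return f"{context}\nQuestion: {question}\nAnswer:"

print(build_knowledge_prompt(
    "Can a penguin fly to its nest?",
    ["Penguins are flightless birds."],
))
```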
Author:
Gabriel, Saadia, Hallinan, Skyler, Sap, Maarten, Nguyen, Pemi, Roesner, Franziska, Choi, Eunsol, Choi, Yejin
Even to a simple and short news headline, readers react in a multitude of ways: cognitively (e.g. inferring the writer's intent), emotionally (e.g. feeling distrust), and behaviorally (e.g. sharing the news with their friends). Such reactions are …
External link:
http://arxiv.org/abs/2104.08790
Academic article
This result cannot be displayed to users who are not logged in. To view the result, you need to log in.