Showing 1 - 10 of 355 for search: '"Lee, Young Suk"'
Author:
Lee, Young-Suk, Gunasekara, Chulaka, Contractor, Danish, Astudillo, Ramón Fernandez, Florian, Radu
We introduce a technique for multi-document grounded multi-turn synthetic dialog generation that incorporates three main ideas. First, we control the overall dialog flow using taxonomy-driven user queries that are generated with Chain-of-Thought (CoT
External link:
http://arxiv.org/abs/2409.11500
Author:
Ramji, Keshav, Lee, Young-Suk, Astudillo, Ramón Fernandez, Sultan, Md Arafat, Naseem, Tahira, Munawar, Asim, Florian, Radu, Roukos, Salim
It is often desirable for Large Language Models (LLMs) to capture multiple objectives when providing a response. In document-grounded response generation, for example, agent responses are expected to be relevant to a user's query while also being gro
External link:
http://arxiv.org/abs/2403.00827
Author:
Lee, Young-Suk, Sultan, Md Arafat, El-Kurdi, Yousef, Naseem, Tahira, Munawar, Asim, Florian, Radu, Roukos, Salim, Astudillo, Ramón Fernandez
Published in:
EMNLP 2023
Using in-context learning (ICL) for data generation, techniques such as Self-Instruct (Wang et al., 2023) or the follow-up Alpaca (Taori et al., 2023) can train strong conversational agents with only a small amount of human supervision. One limitatio
External link:
http://arxiv.org/abs/2310.13961
Language models instruction fine-tuned on a collection of instruction-annotated datasets (FLAN) have proven highly effective at improving model performance and generalization to unseen tasks. However, a majority of standard parsing tasks including abstr
External link:
http://arxiv.org/abs/2304.12272
Author:
Neelam, Sumit, Sharma, Udit, Karanam, Hima, Ikbal, Shajith, Kapanipathi, Pavan, Abdelaziz, Ibrahim, Mihindukulasooriya, Nandana, Lee, Young-Suk, Srivastava, Santosh, Pendus, Cezar, Dana, Saswati, Garg, Dinesh, Fokoue, Achille, Bhargav, G P Shrivatsa, Khandelwal, Dinesh, Ravishankar, Srinivas, Gurajada, Sairam, Chang, Maria, Uceda-Sosa, Rosario, Roukos, Salim, Gray, Alexander, Lima, Guilherme, Riegel, Ryan, Luus, Francois, Subramaniam, L Venkata
Knowledge Base Question Answering (KBQA) tasks that involve complex reasoning are emerging as an important research direction. However, most existing KBQA datasets focus primarily on generic multi-hop reasoning over explicit facts, largely ignoring o
External link:
http://arxiv.org/abs/2201.05793
We present DR.DECR (Dense Retrieval with Distillation-Enhanced Cross-Lingual Representation), a new cross-lingual information retrieval (CLIR) system trained using multi-stage knowledge distillation (KD). The teacher of DR.DECR relies on a highly eff
External link:
http://arxiv.org/abs/2112.08185
Author:
Naseem, Tahira, Blodgett, Austin, Kumaravel, Sadhana, O'Gorman, Tim, Lee, Young-Suk, Flanigan, Jeffrey, Astudillo, Ramón Fernandez, Florian, Radu, Roukos, Salim, Schneider, Nathan
Despite extensive research on parsing of English sentences into Abstract Meaning Representation (AMR) graphs, which are compared to gold graphs via the Smatch metric, full-document parsing into a unified graph representation lacks well-defined rep
External link:
http://arxiv.org/abs/2112.08513
Author:
Lee, Young-Suk, Astudillo, Ramon Fernandez, Hoang, Thanh Lam, Naseem, Tahira, Florian, Radu, Roukos, Salim
Published in:
NAACL-HLT 2022
AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning. Self-learning techniques have also played a role in pushing performa
External link:
http://arxiv.org/abs/2112.07790
Author:
Zhou, Jiawei, Naseem, Tahira, Astudillo, Ramón Fernandez, Lee, Young-Suk, Florian, Radu, Roukos, Salim
Predicting linearized Abstract Meaning Representation (AMR) graphs using pre-trained sequence-to-sequence Transformer models has recently led to large improvements on AMR parsing benchmarks. These parsers are simple and avoid explicit modeling of str
External link:
http://arxiv.org/abs/2110.15534
Author:
Lam, Hoang Thanh, Picco, Gabriele, Hou, Yufang, Lee, Young-Suk, Nguyen, Lam M., Phan, Dzung T., López, Vanessa, Astudillo, Ramon Fernandez
In many machine learning tasks, models are trained to predict structured data such as graphs. For example, in natural language processing, it is very common to parse texts into dependency trees or Abstract Meaning Representation (AMR) graphs. On the o
External link:
http://arxiv.org/abs/2110.09131