Showing 1 - 10 of 47 for search: '"Budzianowski, Paweł"'
Decision-making under full alignment requires balancing between reasoning and faithfulness - a challenge for large language models (LLMs). This study explores whether LLMs prioritize following instructions over reasoning and truth when given "mislead…
External link:
http://arxiv.org/abs/2409.00113
In recent years, speech generation has seen remarkable progress, now achieving one-shot generation capability that is often virtually indistinguishable from real human voice. Integrating such advancements in speech generation with large language mode…
External link:
http://arxiv.org/abs/2401.02839
Author:
Razumovskaia, Evgeniia, Vulić, Ivan, Marković, Pavle, Cichy, Tomasz, Zheng, Qian, Wen, Tsung-Hsien, Budzianowski, Paweł
Factuality is a crucial requirement in information seeking dialogue: the system should respond to the user's queries so that the responses are meaningful and aligned with the knowledge provided to the system. However, most modern large language model…
External link:
http://arxiv.org/abs/2311.09800
Manually annotating fine-grained slot-value labels for task-oriented dialogue (ToD) systems is an expensive and time-consuming endeavour. This motivates research into slot-filling methods that operate with limited amounts of labelled data. Moreover, …
External link:
http://arxiv.org/abs/2307.01764
Knowledge-based authentication is crucial for task-oriented spoken dialogue systems that offer personalised and privacy-focused services. Such systems should be able to enrol (E), verify (V), and identify (I) new and recurring users based on their pe…
External link:
http://arxiv.org/abs/2204.13496
We present NLU++, a novel dataset for natural language understanding (NLU) in task-oriented dialogue (ToD) systems, with the aim to provide a much more challenging evaluation environment for dialogue NLU models, up to date with the current applicatio…
External link:
http://arxiv.org/abs/2204.13021
Transformer-based pretrained language models (PLMs) offer unmatched performance across the majority of natural language understanding (NLU) tasks, including a body of question answering (QA) tasks. We hypothesize that improvements in QA methodology c…
External link:
http://arxiv.org/abs/2204.02123
Published in:
In Computer Speech & Language, January 2025, 89
Author:
Vulić, Ivan, Su, Pei-Hao, Coope, Sam, Gerz, Daniela, Budzianowski, Paweł, Casanueva, Iñigo, Mrkšić, Nikola, Wen, Tsung-Hsien
Transformer-based language models (LMs) pretrained on large text collections are proven to store a wealth of semantic knowledge. However, 1) they are not effective as sentence encoders when used off-the-shelf, and 2) thus typically lag behind convers…
External link:
http://arxiv.org/abs/2109.10126
Author:
Tseng, Bo-Hsiang, Rei, Marek, Budzianowski, Paweł, Turner, Richard E., Byrne, Bill, Korhonen, Anna
Dialogue systems benefit greatly from optimizing on detailed annotations, such as transcribed utterances, internal dialogue state representations and dialogue act labels. However, collecting these annotations is expensive and time-consuming, holding …
External link:
http://arxiv.org/abs/1911.11672