Showing 1 - 10 of 13 for search: '"Eunjoon Cho"'
Author:
Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Pascale Fung, Zhiguang Wang
Published in:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
Author:
Rajen Subba, Seungwhan Moon, Eunjoon Cho, Zhiguang Wang, Zhenpeng Zhou, Bing Liu, Zhaojiang Lin, Paul A. Crook, Andrea Madotto, Zhou Yu
Published in:
NAACL-HLT
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle unseen domains without the expense of collecting in-domain data. In this paper, we propose a slot descriptions enhanced generative approach for zero-shot cross-domain DST. …
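A minimal sketch of the general idea (not the authors' implementation): each slot's natural-language description is appended to the dialogue history and a pretrained seq2seq model generates the slot value, which is what allows transfer to unseen domains. The T5 backbone, the "[slot]" separator, and the slot descriptions below are illustrative assumptions.

```python
# Sketch: slot-description-conditioned generation for zero-shot DST.
from transformers import T5Tokenizer, T5ForConditionalGeneration  # assumed backbone

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

dialogue = ("user: I need a cheap hotel in the north. "
            "system: Sure, for how many nights?")
# Natural-language slot descriptions stand in for in-domain training data.
slot_descriptions = {
    "hotel-pricerange": "price budget of the hotel",
    "hotel-area": "area or part of town where the hotel is located",
}

for slot, description in slot_descriptions.items():
    prompt = f"{dialogue} [slot] {description}"   # hypothetical input format
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=8)
    print(slot, "->", tokenizer.decode(output_ids[0], skip_special_tokens=True))
```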
Author:
Shivani Poddar, Theodore Levin, Paul A. Crook, David Whitney, Satwik Kottur, Seungwhan Moon, Ankita De, Ahmad Beirami, Rajen Subba, Eunjoon Cho, Daniel Difranco, Alborz Geramifard
Published in:
COLING
Next generation virtual assistants are envisioned to handle multimodal inputs (e.g., vision, memories of previous interactions, in addition to the user's utterances), and perform multimodal actions (e.g., displaying a route in addition to generating …
Author:
Stephen Roller, Eunjoon Cho, Claire Cardie, Seungwhan Moon, Paul A. Crook, Becka Silvert, Bing Liu, Kai Sun, Honglei Liu, Zhiguang Wang
Published in:
NAACL-HLT
Existing dialogue corpora and models are typically designed under two disjoint motives: while task-oriented systems focus on achieving functional goals (e.g., booking hotels), open-domain chatbots aim at making socially engaging conversations. In this …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::95890797653ccbee818f5846ed062193
Author:
Shankar Kumar, Eunjoon Cho
Published in:
ICASSP
Speech recognition in digital assistants such as Google Assistant can potentially benefit from the use of conversational context consisting of user queries and responses from the agent. We explore the use of recurrent, Long Short-Term Memory (LSTM), …
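A minimal sketch of how conversational context can condition an LSTM used to rescore ASR hypotheses; the ContextualLSTMScorer class, its sizes, and the toy tokenization are hypothetical, not the paper's architecture.

```python
# Sketch: an LSTM LM that scores an ASR hypothesis conditioned on prior turns.
import torch
import torch.nn as nn

class ContextualLSTMScorer(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context_ids, hypothesis_ids):
        # Encode the previous user/agent turns first, then score the hypothesis
        # tokens with the hidden state carried over from that context.
        _, state = self.lstm(self.embed(context_ids))
        hidden, _ = self.lstm(self.embed(hypothesis_ids), state)
        return self.out(hidden)  # per-token next-word logits

# Toy usage on random token ids; real inputs would come from a tokenizer.
model = ContextualLSTMScorer(vocab_size=1000)
context = torch.randint(0, 1000, (1, 12))     # prior conversational turns
hypothesis = torch.randint(0, 1000, (1, 6))   # one ASR n-best hypothesis
scores = torch.log_softmax(model(context, hypothesis), dim=-1)
```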
Published in:
ICASSP
Standard automatic speech recognition (ASR) systems are increasingly expected to recognize foreign entities, yet doing so while preserving accuracy on native words remains a challenge. We describe a novel approach for recognizing foreign words by …
Author:
Francoise Beaufays, Kaisuke Nakajima, Keith Hall, Cyril Allauzen, Eunjoon Cho, Linda Zhang, Brian Roark, Michael Riley, David Rybach, Noah Coccaro
Published in:
INTERSPEECH
We introduce a technique for dynamically applying contextually-derived language models to a state-of-the-art speech recognition system. These generally small-footprint models can be seen as a generalization of cache-based models [1], whereby …
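A minimal sketch of the cache-style idea (not the production system described in the paper): probabilities from a static language model are interpolated with a small model derived from the current context. The function name, the interpolation weight, and the toy inputs are illustrative assumptions.

```python
# Sketch: interpolate a static LM with a contextually derived unigram "cache".
from collections import Counter

def contextual_lm_prob(word, base_prob, context_words, lam=0.2):
    """Blend a static LM probability with a cache model built from recently
    seen contextual words (prior queries, on-screen items, and so on)."""
    cache = Counter(context_words)
    cache_prob = cache[word] / len(context_words) if context_words else 0.0
    return (1.0 - lam) * base_prob + lam * cache_prob

# Toy usage with hypothetical probabilities.
p = contextual_lm_prob("pizzeria", base_prob=1e-5,
                       context_words=["nearest", "pizzeria", "open", "now"])
```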
Published in:
INTERSPEECH
Trying to automatically detect laughter and other nonlinguistic events in speech raises a fundamental question: Is it appropriate to simply adopt acoustic features that have traditionally been used for analyzing linguistic events? Thus we take a step …
Published in:
ICASSP
We provide a single channel speech enhancement method leveraging the harmonic structure of voiced speech. A sinusoidal model, based on the pitch of the speaker, is used to filter noisy speech and remove any noise components that lie between the harmonics …
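A minimal sketch of pitch-based harmonic filtering in this spirit (not the paper's exact algorithm): spectral bins near multiples of the estimated pitch are kept, and bins between harmonics, where the noise lives, are attenuated. The bandwidth, the frame length, and the toy signal are assumptions.

```python
# Sketch: keep energy near k*f0, suppress energy between harmonics.
import numpy as np

def harmonic_mask_enhance(frame, f0, sr, bandwidth_hz=40.0):
    """Zero out spectral bins farther than bandwidth_hz from any multiple of f0."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    nearest_harmonic = np.round(freqs / f0) * f0   # closest harmonic per bin
    mask = (np.abs(freqs - nearest_harmonic) <= bandwidth_hz).astype(float)
    return np.fft.irfft(spectrum * mask, n=len(frame))

# Toy usage: a noisy frame dominated by a 100 Hz voiced tone at 16 kHz.
sr = 16000
t = np.arange(int(0.032 * sr)) / sr
noisy = np.sin(2 * np.pi * 100 * t) + 0.3 * np.random.randn(t.size)
enhanced = harmonic_mask_enhance(noisy, f0=100.0, sr=sr)
```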
Published in:
KDD
Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based …
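A minimal sketch of the kind of periodicity such location data exposes (not the paper's model): a per-hour profile of a user's most frequent location, which tends to recover home/work-style regularity. The check-in format and location labels are assumed.

```python
# Sketch: most frequent location per hour of day from timestamped check-ins.
from collections import Counter, defaultdict

def hourly_location_profile(checkins):
    """checkins: iterable of (hour_of_day, location_id) pairs.
    Returns the modal location for each observed hour."""
    by_hour = defaultdict(Counter)
    for hour, loc in checkins:
        by_hour[hour][loc] += 1
    return {h: counts.most_common(1)[0][0] for h, counts in by_hour.items()}

# Toy usage with made-up check-ins.
profile = hourly_location_profile([(9, "office"), (10, "office"), (21, "home")])
```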