Showing 1 - 10 of 18 for search: '"Puyang Xu"'
Author:
Yi Liu, Lan Zhao, Jiaofeng Wang, Yinshi Guo, Yifei Wang, Lishan Zhang, Zhoujie Wu, Mingzhi Zhu, Xukai Yang, Puyang Xu, Shandong Wu, Zhongshan Gao, Jin-Lyu Sun
Published in:
Frontiers in Immunology, Vol 14 (2023)
Background: House dust mite (HDM) is the most common airborne source causing complex allergy symptoms. There are geographic differences in the allergen molecule sensitization profiles. Serological testing with allergen components may provide more clues…
External link:
https://doaj.org/article/465639d1944d437c88e0d91fdce2c0de
Author:
Puyang Xu
Published in:
Second IYSF Academic Symposium on Artificial Intelligence and Computer Engineering.
Published in:
ACL (1)
We highlight a practical yet rarely discussed problem in dialogue state tracking (DST), namely handling unknown slot values. Previous approaches generally assume predefined candidate lists and thus are not designed to output unknown values, especially…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2e4ce1eabbdc32bce12ce38cfbf25f97
http://arxiv.org/abs/1805.01555
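The abstract above turns on producing slot values that appear in no predefined candidate list. A minimal sketch of the span-copying idea, where the value is copied verbatim from the utterance; the scoring function below is a hypothetical stand-in for a learned pointer model, and the example utterance is invented:

```python
def best_span(tokens, score_fn, max_len=4):
    """Enumerate candidate spans and keep the highest-scoring one."""
    best, best_score = None, float("-inf")
    for i in range(len(tokens)):
        for j in range(i, min(i + max_len, len(tokens))):
            s = score_fn(tokens, i, j)
            if s > best_score:
                best, best_score = (i, j), s
    return best

# Hypothetical scorer standing in for a trained pointer network: prefer
# long spans that follow "at" and contain no function words.
STOP = {"a", "for", "two"}

def toy_score(tokens, i, j):
    span = tokens[i:j + 1]
    if i > 0 and tokens[i - 1] == "at" and not STOP & set(span):
        return len(span)
    return -1

tokens = "book a table at le petit bistro for two".split()
i, j = best_span(tokens, toy_score)
value = " ".join(tokens[i:j + 1])  # copied verbatim, so unseen values are fine
print(value)  # le petit bistro
```

Because the value is assembled from the utterance itself rather than picked from a closed list, a name never seen in training ("le petit bistro") can still be emitted.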
Published in:
INTERSPEECH
We describe a unified multi-turn multi-task spoken language understanding (SLU) solution capable of handling multiple context sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture…
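As a hedged illustration of what "classification and sequence labeling simultaneously" means in practice, one pass can yield both an intent label and per-token BIO slot tags. This keyword toy is my simplification, not the proposed architecture; the intent names and city list are invented:

```python
CITIES = {"boston", "seattle", "paris"}

def joint_slu(utterance):
    """Return an intent label and per-token BIO slot tags in one pass."""
    tokens = utterance.lower().split()
    # Sequence labeling: tag each token with a BIO slot label.
    slots = ["B-city" if t in CITIES else "O" for t in tokens]
    # Classification: pick an intent for the whole utterance.
    intent = "book_flight" if {"fly", "flight"} & set(tokens) else "other"
    return intent, slots

intent, slots = joint_slu("fly to Boston")
# ('book_flight', ['O', 'O', 'B-city'])
```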
Published in:
INTERSPEECH
We present a novel application of hypothesis ranking (HR) for the task of domain detection in a multi-domain, multiturn dialog system. Alternate, domain dependent, semantic frames from a spoken language understanding (SLU) analysis are ranked using a…
Author:
Ruhi Sarikaya, Puyang Xu
Published in:
INTERSPEECH
In slot filling with conditional random field (CRF), the strong current word and dictionary features tend to swamp the effect of contextual features, a phenomenon also known as feature undertraining. This is a dangerous tradeoff especially when training…
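One common remedy for feature undertraining is to randomly mask the dominant lexical feature during training so that contextual features are forced to receive weight. The sketch below shows that general remedy over typical CRF-style features; it is an assumption for illustration, not necessarily the method this paper proposes:

```python
import random

def features(tokens, i):
    """Lexical plus contextual features for position i, linear-CRF style."""
    feats = [f"w0={tokens[i]}"]          # strong current-word feature
    if i > 0:
        feats.append(f"w-1={tokens[i - 1]}")  # contextual feature
    if i + 1 < len(tokens):
        feats.append(f"w+1={tokens[i + 1]}")  # contextual feature
    return feats

def mask_current_word(feats, rate, rng):
    """Drop the strong current-word feature with probability `rate`."""
    return [f for f in feats if not (f.startswith("w0=") and rng.random() < rate)]

rng = random.Random(0)
feats = features("fly to boston tomorrow".split(), 2)
masked = mask_current_word(feats, 1.0, rng)  # only contextual features remain
```

At test time no masking is applied, so the lexical feature is still used; the point is only that its weight no longer swamps the contextual ones.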
Author:
Ruhi Sarikaya, Puyang Xu
Published in:
ICASSP
In a multi-domain, multi-turn spoken language understanding session, information from the history often greatly reduces the ambiguity of the current turn. In this paper, we apply the recurrent neural network (RNN) to exploit contextual information for…
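A toy version of the idea that history reduces ambiguity: carry the previous turn's predicted domain forward and let it resolve turns that are ambiguous on their own. Keyword matching stands in for the paper's RNN here, and the domains and keywords are invented:

```python
DOMAIN_KEYWORDS = {
    "weather": {"weather", "rain", "sunny"},
    "calendar": {"meeting", "schedule", "appointment"},
}

def classify_turn(utterance, prev_domain=None):
    """Classify the current turn, falling back on dialogue history."""
    tokens = set(utterance.lower().split())
    scores = {d: len(tokens & kws) for d, kws in DOMAIN_KEYWORDS.items()}
    best = max(scores.values())
    winners = [d for d, s in scores.items() if s == best]
    if best == 0 or len(winners) > 1:
        # Ambiguous turn such as "what about tomorrow" -- use context.
        return prev_domain if prev_domain else winners[0]
    return winners[0]

d1 = classify_turn("will it rain in seattle")                  # weather
d2 = classify_turn("what about tomorrow", prev_domain=d1)      # weather, via context
```

Without the carried-over domain, the second turn has no evidence at all; with it, the follow-up question is interpreted in the weather domain.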
Author:
Ruhi Sarikaya, Puyang Xu
Published in:
INTERSPEECH
Multi-intent natural language sentence classification aims at identifying multiple user goals in a single natural language sentence (e.g., “find Beyonce’s movie and music” → find movie, find music). The main motivation of this work is to exploit…
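A minimal multi-label sketch of the task described above (my toy illustration, not the paper's model): score each intent independently and emit every intent that clears a threshold, so one sentence can yield several goals. The intent names and keyword lists are invented:

```python
INTENT_KEYWORDS = {
    "find_movie": {"movie", "film"},
    "find_music": {"music", "song"},
    "book_flight": {"flight", "fly"},
}

def detect_intents(sentence, threshold=1):
    """Return every intent whose keyword overlap meets the threshold."""
    tokens = set(sentence.lower().split())
    return sorted(
        intent for intent, kws in INTENT_KEYWORDS.items()
        if len(tokens & kws) >= threshold
    )

print(detect_intents("find Beyonce's movie and music"))
# ['find_movie', 'find_music']
```

Contrast with single-label classification, which would be forced to choose one of the two goals and drop the other.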
Published in:
INTERSPEECH
Author:
Brian Roark, Adam Lopez, Emily Prud'hommeaux, Damianos Karakos, Sanjeev Khudanpur, Philipp Koehn, Eva Hasler, Darcey Riley, Kenji Sagae, Daniel M. Bikel, Maider Lehr, Murat Saraclar, Puyang Xu, Matt Post, Keith Hall, Nathan Glenn, Chris Callison-Burch, Izhak Shafran, Yuan Cao
Published in:
ICASSP
Sagae, K, Lehr, M, Prud'hommeaux, E T, Xu, P, Glenn, N, Karakos, D, Khudanpur, S, Roark, B, Saraclar, M, Shafran, I, Bikel, D M, Callison-Burch, C, Cao, Y, Hall, K B, Hasler, E, Koehn, P, Lopez, A, Post, M & Riley, D 2012, 'Hallucinated n-best lists for discriminative language modeling', in 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2012), Kyoto, Japan, March 25-30, 2012, IEEE, pp. 5001-5004. https://doi.org/10.1109/ICASSP.2012.6289043
This paper investigates semi-supervised methods for discriminative language modeling, whereby n-best lists are “hallucinated” for given reference text and are then used for training n-gram language models using the perceptron algorithm. We perform…
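A minimal sketch of the perceptron training step named above: if the currently top-scored hypothesis in an n-best list differs from the reference, move the weights toward the reference n-grams and away from the mistaken hypothesis. Bigram features and a hand-written two-item n-best list are my simplifications; in the paper the lists are "hallucinated" from reference text rather than produced by a recognizer:

```python
from collections import Counter

def bigram_feats(sentence):
    """Bigram counts, with a start symbol, as a feature vector."""
    toks = ["<s>"] + sentence.split()
    return Counter(zip(toks, toks[1:]))

def score(weights, feats):
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def perceptron_update(weights, nbest, reference):
    """One structured-perceptron step over an n-best list."""
    best = max(nbest, key=lambda h: score(weights, bigram_feats(h)))
    if best != reference:
        for f, v in bigram_feats(reference).items():
            weights[f] = weights.get(f, 0.0) + v
        for f, v in bigram_feats(best).items():
            weights[f] = weights.get(f, 0.0) - v
    return best

weights = {}
nbest = ["wreck a nice beach", "recognize speech"]
perceptron_update(weights, nbest, "recognize speech")
best_after = max(nbest, key=lambda h: score(weights, bigram_feats(h)))
print(best_after)  # recognize speech
```

After a single update the reference hypothesis outscores the competing one, which is the discriminative signal the n-gram language model is trained on.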