Showing 1 - 10 of 30 for search: '"Marco Dinarelli"'
Published in:
Interspeech 2022.
Recent advances in spoken language understanding have benefited from self-supervised models trained on large speech corpora. For French, the LeBenchmark project has made such models available and has led to impressive progress on several tasks, including …
Published in:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Multi-encoder models are a broad family of context-aware neural machine translation systems that aim to improve translation quality by encoding document-level contextual information alongside the current sentence. The context encoding is undertaken by …
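The record above describes multi-encoder, context-aware NMT: one encoder for the current sentence, another for the surrounding document. The sketch below is a minimal, hedged PyTorch illustration of that general idea (a gated fusion of sentence and context encoders); all class, parameter, and dimension choices are invented for this example and are not the architecture evaluated in the paper.

```python
# Illustrative multi-encoder sketch: encode the current sentence and its
# document context separately, then fuse them with a learned gate.
import torch
import torch.nn as nn

class MultiEncoder(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.sent_enc = nn.GRU(d_model, d_model, batch_first=True)  # current sentence
        self.ctx_enc = nn.GRU(d_model, d_model, batch_first=True)   # preceding sentences
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, sent_ids, ctx_ids):
        sent_states, _ = self.sent_enc(self.embed(sent_ids))    # (B, T, d)
        _, ctx_last = self.ctx_enc(self.embed(ctx_ids))         # (1, B, d)
        ctx = ctx_last[-1].unsqueeze(1).expand_as(sent_states)  # broadcast context over time
        g = torch.sigmoid(self.gate(torch.cat([sent_states, ctx], dim=-1)))
        return g * sent_states + (1 - g) * ctx                  # fused encoder output

# Toy usage: batch of 2 sentences (length 5) with 8-token context windows.
enc = MultiEncoder()
out = enc(torch.randint(0, 1000, (2, 5)), torch.randint(0, 1000, (2, 8)))
print(out.shape)  # torch.Size([2, 5, 64])
```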
Published in:
HAL
Recent advances in spoken language understanding have benefited from self-supervised models trained on large speech corpora. For French, the LeBenchmark project has made such models available and has led to impressive progress on several tasks, including …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::94ea9e999284dcd0d42fcc82e9a66bb4
Author:
Hang Le, Sina Alisamir, Marco Dinarelli, Fabien Ringeval, Solène Evain, Ha Nguyen, Marcely Zanon Boito, Salima Mdhaffar, Ziyi Tong, Natalia Tomashenko, Titouan Parcollet, Alexandre Allauzen, Yannick Estève, Benjamin Lecouteux, François Portet, Solange Rossato, Didier Schwab, Laurent Besacier
Published in:
HAL
Self-supervised learning has brought remarkable improvements in many fields, such as computer vision or language and speech processing, by exploiting large amounts of unlabeled data. …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::5b45625171543c36fa01b1db28207288
https://hal.archives-ouvertes.fr/hal-03706952
Author:
Laurent Besacier, Marcely Zanon Boito, Solène Evain, Ziyi Tong, Solange Rossato, Yannick Estève, Titouan Parcollet, Marco Dinarelli, Natalia A. Tomashenko, Benjamin Lecouteux, Hang Le, Sina Alisamir, François Portet, Ha Nguyen, Didier Schwab, Salima Mdhaffar, Alexandre Allauzen, Fabien Ringeval
Published in:
INTERSPEECH 2021: Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic
HAL
Self-Supervised Learning (SSL) using large amounts of unlabeled data has been successfully explored for image and natural language processing. Recent works have also investigated SSL from speech, notably improving performance on downstream tasks … (a minimal usage sketch follows this record)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::613b79ecb8bb6d24c24c38e95fe1281e
http://arxiv.org/abs/2104.11462
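The LeBenchmark records above concern pretrained SSL speech models reused for downstream tasks such as spoken language understanding. Below is a hedged sketch of the typical usage pattern, assuming the HuggingFace `transformers` library and a LeBenchmark wav2vec2 checkpoint; the exact model id and the downstream classifier head are assumptions for illustration, not details from the papers.

```python
# Sketch: a pretrained SSL speech model as a frozen feature extractor
# feeding a small downstream task head (head is untrained here).
import torch
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor

MODEL_ID = "LeBenchmark/wav2vec2-FR-7K-large"  # assumed checkpoint name; may differ

extractor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL_ID)
ssl_model = Wav2Vec2Model.from_pretrained(MODEL_ID).eval()

# Two seconds of fake 16 kHz audio stands in for a real utterance.
waveform = torch.randn(32000).numpy()
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    frames = ssl_model(inputs.input_values).last_hidden_state  # (1, T, hidden)

# Mean-pool frame representations, then apply a toy intent classifier.
utterance_vec = frames.mean(dim=1)
intent_head = torch.nn.Linear(utterance_vec.size(-1), 8)  # e.g. 8 intent classes
print(intent_head(utterance_vec).shape)  # torch.Size([1, 8])
```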
Published in:
ICASSP
End-to-end architectures have recently been proposed for spoken language understanding (SLU) and semantic parsing. Trained on large amounts of data, these models jointly learn acoustic and linguistic-sequential features. Such architectures give very good … (see the sketch after this record)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::5d891eb1656ba461288d395c1531e971
http://arxiv.org/abs/2002.05955
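The record above describes end-to-end SLU: mapping acoustic input directly to semantic annotations in a single trainable model. The sketch below is an illustrative PyTorch toy of that setup, trained with CTC over concept labels; the architecture, dimensions, and data are invented and do not reproduce the model from the paper.

```python
# Toy end-to-end SLU model: acoustic frames -> semantic-concept labels via CTC.
import torch
import torch.nn as nn

class EndToEndSLU(nn.Module):
    def __init__(self, n_mels=40, hidden=128, n_concepts=20):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_concepts + 1)  # +1 for the CTC blank

    def forward(self, feats):                  # feats: (B, T, n_mels)
        enc, _ = self.encoder(feats)
        return self.head(enc).log_softmax(-1)  # (B, T, n_concepts + 1)

model = EndToEndSLU()
feats = torch.randn(2, 200, 40)                   # two fake utterances of 200 frames
log_probs = model(feats).transpose(0, 1)          # CTC expects (T, B, C)
targets = torch.randint(1, 21, (2, 10))           # fake concept-label sequences
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           input_lengths=torch.tensor([200, 200]),
                           target_lengths=torch.tensor([10, 10]))
print(loss.item())
```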
Published in:
Computational Linguistics and Intelligent Text Processing ISBN: 9783319771120
CICLing (1)
Intelligent Text Processing and Computational Linguistics (CICling), Apr 2017, Budapest, Hungary
In the last few years, Recurrent Neural Networks (RNNs) have proved effective on several NLP tasks. Despite this success, their ability to model sequence labeling is still limited. This has led research toward solutions where RNNs are combined with … (see the sketch after this record)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::78e3291c6402e7d49edf006f9552cd8f
https://doi.org/10.1007/978-3-319-77113-7_4
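The record above discusses RNNs for sequence labeling and their combination with mechanisms that model label dependencies. As a hedged illustration only (not the paper's model), the toy PyTorch tagger below conditions each step on an embedding of the previous label, one common way to inject such dependencies.

```python
# Toy RNN sequence labeler that embeds the previous label alongside each word.
import torch
import torch.nn as nn

class RNNTagger(nn.Module):
    def __init__(self, vocab=500, n_labels=10, d=32):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, d)
        self.label_emb = nn.Embedding(n_labels + 1, d)   # extra index = "no previous label"
        self.rnn = nn.LSTM(2 * d, d, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * d, n_labels)

    def forward(self, words, prev_labels):
        x = torch.cat([self.word_emb(words), self.label_emb(prev_labels)], dim=-1)
        h, _ = self.rnn(x)
        return self.out(h)                                # (B, T, n_labels)

tagger = RNNTagger()
words = torch.randint(0, 500, (2, 7))
prev = torch.full((2, 7), 10, dtype=torch.long)  # "no previous label" for this toy call
print(tagger(words, prev).shape)                 # torch.Size([2, 7, 10])
```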
Author:
Jean-Yves Antoine, Adèle Désoyer, Anaïs Lefeuvre, Frédéric Landragin, Isabelle Tellier, Marco Dinarelli
Published in:
Computational Linguistics and Intelligent Text Processing ISBN: 9783319754765
CICLing (1)
We present CROC (Coreference Resolution for Oral Corpus), the first machine learning system for coreference resolution in French. One specific aspect of the system is that it has been trained on data that come exclusively from transcribed speech, namely … (see the toy example after this record)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::dad355263511f877db26fe93f9ebf20e
https://doi.org/10.1007/978-3-319-75477-2_36
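The CROC record describes a learned coreference resolver trained on transcribed speech. As a toy illustration of the general mention-pair approach often used in such systems (not CROC itself), the scikit-learn snippet below classifies candidate mention pairs from a handful of invented features.

```python
# Toy mention-pair classifier: each candidate pair of mentions becomes a
# feature vector, and a binary classifier decides "coreferent or not".
from sklearn.linear_model import LogisticRegression

# Hypothetical features per pair: [token distance, same head word, agreement].
X = [
    [1, 1, 1],   # "Marie ... elle"        -> coreferent
    [12, 0, 0],  # "Marie ... le rapport"  -> not coreferent
    [3, 0, 1],   # "le rapport ... il"     -> coreferent
    [20, 0, 0],  # distant, no agreement   -> not coreferent
]
y = [1, 0, 1, 0]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[2, 1, 1]]))  # a close, agreeing pair -> likely [1]
```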