Showing 1 - 10 of 10
for search: '"Mendonça, John"'
Although human evaluation remains the gold standard for open-domain dialogue evaluation, the growing popularity of automated evaluation using Large Language Models (LLMs) has also extended to dialogue. However, most frameworks leverage benchmarks tha…
External link:
http://arxiv.org/abs/2408.10902
Despite being heralded as the new standard for dialogue evaluation, the closed-source nature of GPT-4 poses challenges for the community. Motivated by the need for lightweight, open source, and multilingual dialogue evaluators, this paper introduces…
External link:
http://arxiv.org/abs/2407.11660
Large Language Models (LLMs) have showcased remarkable capabilities in various Natural Language Processing tasks. For automatic open-domain dialogue evaluation in particular, LLMs have been seamlessly integrated into evaluation frameworks, and togeth…
External link:
http://arxiv.org/abs/2407.03841
Author:
Mendonça, John, Pereira, Patrícia, Menezes, Miguel, Cabarrão, Vera, Farinha, Ana C., Moniz, Helena, Carvalho, João Paulo, Lavie, Alon, Trancoso, Isabel
Task-oriented conversational datasets often lack topic variability and linguistic diversity. However, with the advent of Large Language Models (LLMs) pretrained on extensive, multilingual and diverse text data, these limitations seem overcome. Nevert…
External link:
http://arxiv.org/abs/2311.13910
Author:
Mendonça, John, Pereira, Patrícia, Moniz, Helena, Carvalho, João Paulo, Lavie, Alon, Trancoso, Isabel
Despite significant research effort in the development of automatic dialogue evaluation metrics, little thought is given to evaluating dialogues other than in English. At the same time, ensuring metrics are invariant to semantically similar responses…
External link:
http://arxiv.org/abs/2308.16797
The main limiting factor in the development of robust multilingual dialogue evaluation metrics is the lack of multilingual data and the limited availability of open sourced multilingual dialogue systems. In this work, we propose a workaround for this…
External link:
http://arxiv.org/abs/2308.16795
Using Self-Supervised Feature Extractors with Attention for Automatic COVID-19 Detection from Speech
The ComParE 2021 COVID-19 Speech Sub-challenge provides a test-bed for the evaluation of automatic detectors of COVID-19 from speech. Such models can be of value by providing test triaging capabilities to health authorities, working alongside traditi…
External link:
http://arxiv.org/abs/2107.00112
Academic article
This result cannot be displayed to users who are not logged in. Log in to view this result.
Academic article
This result cannot be displayed to users who are not logged in. Log in to view this result.
Author:
Mendonca, John
This dissertation investigates the immediate effects of securities analysts' statements on shareholders. Two of the most important questions posed in research on capital markets are when and how analysts matter. A time at which analysts might matter…
External link:
http://hdl.handle.net/2152/10569