Showing 1 - 10 of 22 for search: '"Farinhas, António"'
Author:
Agrawal, Sweta, de Souza, José G. C., Rei, Ricardo, Farinhas, António, Faria, Gonçalo, Fernandes, Patrick, Guerreiro, Nuno M, Martins, Andre
Alignment with human preferences is an important step in developing accurate and safe large language models. This is no exception in machine translation (MT), where better handling of language nuances and context-specific variations leads to improved…
External link:
http://arxiv.org/abs/2410.07779
To ensure large language models (LLMs) are used safely, one must reduce their propensity to hallucinate or to generate unacceptable answers. A simple and often used strategy is to first let the LLM generate multiple hypotheses and then employ a reran…
External link:
http://arxiv.org/abs/2409.07131
Author:
Faria, Gonçalo R. A., Agrawal, Sweta, Farinhas, António, Rei, Ricardo, de Souza, José G. C., Martins, André F. T.
An important challenge in machine translation (MT) is to generate high-quality and diverse translations. Prior work has shown that the estimated likelihood from the MT model correlates poorly with translation quality. In contrast, quality evaluation…
External link:
http://arxiv.org/abs/2406.00049
Automatic metrics for evaluating translation quality are typically validated by measuring how well they correlate with human assessments. However, correlation methods tend to capture only the ability of metrics to differentiate between good and bad s…
External link:
http://arxiv.org/abs/2405.18348
Author:
Campos, Margarida M., Farinhas, António, Zerva, Chrysoula, Figueiredo, Mário A. T., Martins, André F. T.
The rapid proliferation of large language models and natural language processing (NLP) applications creates a crucial need for uncertainty quantification to mitigate risks such as hallucinations and to enhance decision-making reliability in critical…
External link:
http://arxiv.org/abs/2405.01976
Reinforcement learning from human feedback (RLHF) is a recent technique to improve the quality of the text generated by a language model, making it closer to what humans would generate. A core ingredient in RLHF's success in aligning and improving la…
External link:
http://arxiv.org/abs/2311.09132
Large language models (LLMs) are becoming a one-fits-many solution, but they sometimes hallucinate or produce unreliable output. In this paper, we investigate how hypothesis ensembling can improve the quality of the generated text for the specific pr…
External link:
http://arxiv.org/abs/2310.11430
Split conformal prediction has recently sparked great interest due to its ability to provide formally guaranteed uncertainty sets or intervals for predictions made by black-box neural models, ensuring a predefined probability of containing the actual…
External link:
http://arxiv.org/abs/2310.01262
Author:
Fernandes, Patrick, Madaan, Aman, Liu, Emmy, Farinhas, António, Martins, Pedro Henrique, Bertsch, Amanda, de Souza, José G. C., Zhou, Shuyan, Wu, Tongshuang, Neubig, Graham, Martins, André F. T.
Many recent advances in natural language generation have been fueled by training large language models on internet-scale data. However, this paradigm can lead to models that generate toxic, inaccurate, and unhelpful content, and automatic evaluation…
External link:
http://arxiv.org/abs/2305.00955
Author:
Fernandes, Patrick, Farinhas, António, Rei, Ricardo, de Souza, José G. C., Ogayo, Perez, Neubig, Graham, Martins, André F. T.
Despite the progress in machine translation quality estimation and evaluation in the last years, decoding in neural machine translation (NMT) is mostly oblivious to this and centers around finding the most probable translation according to the model…
External link:
http://arxiv.org/abs/2205.00978