Showing 1 - 10 of 6,557 for search: '"JANET, B."'
Large Language Models (LLMs) have been shown to perform well for many downstream tasks. Transfer learning can enable LLMs to acquire skills that were not targeted during pre-training. In financial contexts, LLMs can sometimes beat well-established…
External link:
http://arxiv.org/abs/2407.17624
Author:
Su, Ruiran, Pierrehumbert, Janet B.
This work introduces the ClimateSent-GAT Model, an innovative method that integrates Graph Attention Networks (GATs) with techniques from natural language processing to accurately identify and predict disagreements within Reddit comment-reply pairs.
External link:
http://arxiv.org/abs/2407.07038
Large language models (LLMs) are often trained on extensive, temporally indiscriminate text corpora, reflecting the lack of datasets with temporal metadata. This approach is not aligned with the evolving nature of language. Conventional methods for…
External link:
http://arxiv.org/abs/2404.18543
Probing Large Language Models for Scalar Adjective Lexical Semantics and Scalar Diversity Pragmatics
Scalar adjectives pertain to various domain scales and vary in intensity within each scale (e.g. certain is more intense than likely on the likelihood scale). Scalar implicatures arise from the consideration of alternative statements which could have…
External link:
http://arxiv.org/abs/2404.03301
The rise of social media platforms has led to an increase in polarised online discussions, especially on political and socio-cultural topics such as elections and climate change. We propose a simple and novel unsupervised method to predict whether…
External link:
http://arxiv.org/abs/2403.15885
Author:
Lin, Fangru, La Malfa, Emanuele, Hofmann, Valentin, Yang, Elle Michelle, Cohn, Anthony, Pierrehumbert, Janet B.
Planning is a fundamental property of human intelligence. Reasoning about asynchronous plans is challenging since it requires sequential and parallel planning to optimize time costs. Can large language models (LLMs) succeed at this task? Here, we…
External link:
http://arxiv.org/abs/2402.02805
We propose a fully unsupervised method to detect bias in contextualized embeddings. The method leverages the assortative information latently encoded by social networks and combines orthogonality regularization, structured sparsity learning, and…
External link:
http://arxiv.org/abs/2212.07547