Showing 1 - 10 of 251 for search: '"Poibeau Thierry"'
In this short paper, we examine the main metrics used to evaluate textual coreference and we detail some of their limitations. We show that a unique score cannot represent the full complexity of the problem at stake, and is thus uninformative, or even …
External link:
http://arxiv.org/abs/2401.00238
Compositionality is a hallmark of human language that not only enables linguistic generalization, but also potentially facilitates acquisition. When simulating language emergence with neural networks, compositionality has been shown to improve communication …
External link:
http://arxiv.org/abs/2305.12941
We present a novel neural model for modern poetry generation in French. The model consists of two pretrained neural models that are fine-tuned for the poem generation task. The encoder of the model is a RoBERTa based one while the decoder is based on …
External link:
http://arxiv.org/abs/2212.02911
We present a novel approach to generating news headlines in Finnish for a given news story. We model this as a summarization task where a model is given a news article, and its task is to produce a concise headline describing the main topic of the article …
External link:
http://arxiv.org/abs/2212.02170
We present a method for extracting a multilingual sentiment-annotated dialog data set from Fallout New Vegas. The game developers have pre-annotated every line of dialog in the game with one of 8 different sentiments: anger, disgust, fear, h…
External link:
http://arxiv.org/abs/2212.02168
Word order, an essential property of natural languages, is injected into Transformer-based neural language models using position encoding. However, recent experiments have shown that explicit position encoding is not always useful, since some models without …
External link:
http://arxiv.org/abs/2211.04427
Both humans and neural language models are able to perform subject-verb number agreement (SVA). In principle, semantics shouldn't interfere with this task, which only requires syntactic knowledge. In this work we test whether meaning interferes with …
External link:
http://arxiv.org/abs/2209.10538
A central quest of probing is to uncover how pre-trained models encode a linguistic property within their representations. An encoding, however, might be spurious, i.e., the model might not rely on it when making predictions. In this paper, we try to …
External link:
http://arxiv.org/abs/2204.08831