Showing 1 - 10 of 47 for the search: '"Carvalho, Danilo S."'
Syllogistic reasoning is crucial for Natural Language Inference (NLI). This capability is particularly significant in specialized domains such as biomedicine, where it can support automatic evidence interpretation and scientific discovery. This paper… (see the syllogism sketch below)
External link: http://arxiv.org/abs/2410.14399
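The abstract above is cut off before the paper's contribution, so purely as an illustration of what a syllogistic NLI instance involves (a toy example, not taken from the paper), the sketch below checks one classical syllogism pattern (Barbara: All A are B, All B are C, therefore All A are C) over simple biomedical statements.

```python
# Toy illustration (not from the paper) of a syllogistic NLI instance:
# two quantified premises and a hypothesis that follows by the Barbara pattern.
import re

def parse_all(statement: str):
    """Parse 'All X are Y.' into (X, Y); return None if it does not match."""
    m = re.fullmatch(r"All (.+?) are (.+?)\.?", statement.strip())
    return (m.group(1), m.group(2)) if m else None

def barbara_entails(premise1: str, premise2: str, hypothesis: str) -> bool:
    """True if the hypothesis follows by Barbara: All A are B, All B are C => All A are C."""
    p1, p2, h = parse_all(premise1), parse_all(premise2), parse_all(hypothesis)
    if not (p1 and p2 and h):
        return False
    return p1[1] == p2[0] and h == (p1[0], p2[1])

print(barbara_entails(
    "All statins are HMG-CoA reductase inhibitors.",
    "All HMG-CoA reductase inhibitors are lipid-lowering drugs.",
    "All statins are lipid-lowering drugs.",
))  # True
```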
Understanding the internal mechanisms of large language models (LLMs) is integral to enhancing their reliability, interpretability, and inference processes. We present Constituent-Aware Pooling (CAP), a methodology designed to analyse how LLMs process… (see the pooling sketch below)
External link: http://arxiv.org/abs/2410.12924
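A minimal sketch of the idea behind constituent-aware pooling, assuming (a reading of the snippet, not a detail it gives) that token-level hidden states from an LLM layer are aggregated over each syntactic constituent span; the span indices and the mean aggregation are illustrative choices.

```python
# Minimal sketch: average token representations over constituent spans.
import numpy as np

def constituent_aware_pooling(hidden: np.ndarray, spans: list[tuple[int, int]]) -> np.ndarray:
    """hidden: (seq_len, dim) token states from some LLM layer.
    spans: constituent spans as (start, end) token indices, end exclusive.
    Returns one pooled vector per constituent, shape (len(spans), dim)."""
    return np.stack([hidden[s:e].mean(axis=0) for s, e in spans])

# Toy example: 6 tokens, hidden size 4, two constituents given as index spans.
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(6, 4))
constituents = [(0, 3), (3, 6)]
pooled = constituent_aware_pooling(hidden_states, constituents)
print(pooled.shape)  # (2, 4)
```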
This work presents a novel systematic methodology to analyse the capabilities and limitations of Large Language Models (LLMs) with feedback from a formal inference engine, on logic theory induction. The analysis is complexity-graded w.r.t. rule dependencies… (see the feedback-loop sketch below)
External link: http://arxiv.org/abs/2408.16779
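A hypothetical sketch of the analysis loop described: a model proposes candidate rules for a theory, and a small forward-chaining checker (standing in here for a formal inference engine) measures how many expected conclusions the theory derives, yielding a feedback signal. The Horn-rule format and scoring are illustrative assumptions, not the paper's pipeline.

```python
# Candidate theory checking with a tiny forward-chaining "engine".
def forward_chain(facts: set[str], rules: list[tuple[frozenset, str]]) -> set[str]:
    """Derive the closure of `facts` under Horn rules given as (body, head)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def score_theory(candidate_rules, facts, expected):
    """Feedback signal: fraction of expected conclusions the theory derives."""
    closure = forward_chain(facts, candidate_rules)
    return sum(c in closure for c in expected) / len(expected)

# Toy theory-induction instance with a two-step rule dependency.
facts = {"bird(tweety)"}
expected = {"flies(tweety)"}
candidate = [
    (frozenset({"bird(tweety)"}), "has_wings(tweety)"),
    (frozenset({"has_wings(tweety)"}), "flies(tweety)"),
]
print(score_theory(candidate, facts, expected))  # 1.0
```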
Locating and editing knowledge in large language models (LLMs) is crucial for enhancing their accuracy, safety, and inference rationale. We introduce "concept editing", an innovative variation of knowledge editing that uncovers conceptualisation mechanisms… (see the illustrative sketch below)
External link: http://arxiv.org/abs/2408.11827
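The snippet does not describe how concept editing works, so the sketch below is only a generic stand-in to make "intervening on a concept representation" concrete: it estimates a concept direction from contrastive activations and projects it out of a hidden state. It should not be read as the method introduced in the paper.

```python
# Generic concept-intervention illustration (not the paper's method).
import numpy as np

def concept_direction(with_concept: np.ndarray, without_concept: np.ndarray) -> np.ndarray:
    """Difference-of-means direction between activations with/without the concept."""
    d = with_concept.mean(axis=0) - without_concept.mean(axis=0)
    return d / np.linalg.norm(d)

def remove_concept(h: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project the concept direction out of a single hidden state h."""
    return h - np.dot(h, direction) * direction

rng = np.random.default_rng(1)
pos = rng.normal(loc=1.0, size=(32, 8))   # activations where the concept is present
neg = rng.normal(loc=0.0, size=(32, 8))   # activations where it is absent
d = concept_direction(pos, neg)
edited = remove_concept(pos[0], d)
print(round(float(np.dot(edited, d)), 6))  # ~0: the concept component is removed
```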
Achieving precise semantic control over the latent spaces of Variational AutoEncoders (VAEs) holds significant value for downstream tasks in NLP, as the underlying generative mechanisms could be better localised, explained and improved upon… (see the latent-steering sketch below)
External link: http://arxiv.org/abs/2402.00723
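A minimal sketch of one common form of latent-space semantic control, under the assumption of an already trained sentence VAE whose encoder and decoder are abstracted away: estimate an attribute direction from labelled latent codes and steer a new code along it before decoding. The setup is illustrative, not the paper's.

```python
# Latent-space steering along an estimated attribute direction.
import numpy as np

def attribute_direction(z_with: np.ndarray, z_without: np.ndarray) -> np.ndarray:
    """Direction in latent space associated with an attribute (e.g. negation)."""
    return z_with.mean(axis=0) - z_without.mean(axis=0)

def steer(z: np.ndarray, direction: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Move a latent code along the attribute direction before decoding."""
    return z + strength * direction

rng = np.random.default_rng(2)
z_pos = rng.normal(loc=0.5, size=(100, 16))   # codes of sentences with the attribute
z_neg = rng.normal(loc=-0.5, size=(100, 16))  # codes of sentences without it
z_new = rng.normal(size=16)                   # code of the sentence to control
z_edit = steer(z_new, attribute_direction(z_pos, z_neg), strength=0.8)
# `z_edit` would then be passed to the VAE decoder to generate the edited sentence.
print(z_edit.shape)  # (16,)
```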
Deep generative neural networks, such as Variational AutoEncoders (VAEs), offer an opportunity to better understand and control language models from the perspective of sentence-level latent spaces. To combine the controllability of VAE latent spaces… (see the interpolation sketch below)
External link: http://arxiv.org/abs/2312.13208
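One property that makes sentence-level VAE latent spaces attractive for controllable generation is smooth interpolation between sentence codes; the sketch below shows only that latent-space operation, with the encoder, decoder, and any language-model component abstracted away (a simplification, not the paper's architecture).

```python
# Linear interpolation between the latent codes of two sentences.
import numpy as np

def interpolate(z_a: np.ndarray, z_b: np.ndarray, steps: int = 5) -> np.ndarray:
    """Linear interpolation between two latent codes, endpoints included."""
    ts = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - ts) * z_a + ts * z_b

rng = np.random.default_rng(3)
z_a, z_b = rng.normal(size=32), rng.normal(size=32)   # codes of two sentences
path = interpolate(z_a, z_b, steps=7)
# Decoding each row of `path` would yield a gradual transition between the
# two sentences if the latent space is smooth.
print(path.shape)  # (7, 32)
```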
The injection of syntactic information in Variational AutoEncoders (VAEs) has been shown to result in an overall improvement in performance and generalisation. An effective strategy to achieve such a goal is to separate the encoding of distributional… (see the encoder sketch below)
External link: http://arxiv.org/abs/2311.08579
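A minimal sketch, under assumed architecture choices, of the separation strategy: an encoder that produces two independent Gaussian latents from a shared sentence representation, one intended for distributional semantics and one for syntax. Layer sizes and the single shared input vector are illustrative, not taken from the paper.

```python
# Two-latent VAE encoder: separate codes for semantics and syntax.
import torch
import torch.nn as nn

class SplitLatentEncoder(nn.Module):
    def __init__(self, input_dim=256, sem_dim=32, syn_dim=32):
        super().__init__()
        self.sem_head = nn.Linear(input_dim, 2 * sem_dim)  # mu and logvar
        self.syn_head = nn.Linear(input_dim, 2 * syn_dim)

    @staticmethod
    def reparameterise(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, sentence_repr):
        sem_mu, sem_logvar = self.sem_head(sentence_repr).chunk(2, dim=-1)
        syn_mu, syn_logvar = self.syn_head(sentence_repr).chunk(2, dim=-1)
        z_sem = self.reparameterise(sem_mu, sem_logvar)
        z_syn = self.reparameterise(syn_mu, syn_logvar)
        return z_sem, z_syn  # decoded jointly; syntactic supervision would attach to z_syn

encoder = SplitLatentEncoder()
z_sem, z_syn = encoder(torch.randn(4, 256))  # batch of 4 sentence representations
print(z_sem.shape, z_syn.shape)  # torch.Size([4, 32]) torch.Size([4, 32])
```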
Explainable natural language inference aims to provide a mechanism to produce explanatory (abductive) inference chains which ground claims in their supporting premises. A recent corpus called EntailmentBank strives to advance this task by explaining… (see the tree sketch below)
External link: http://arxiv.org/abs/2308.03581
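A small sketch of the data structure behind such explanation chains: a tree whose leaves are supporting premises and whose internal nodes are intermediate conclusions, rooted at the claim. The class and the example statements are illustrative, not taken from EntailmentBank.

```python
# Entailment tree: claim at the root, supporting premises at the leaves.
from dataclasses import dataclass, field

@dataclass
class EntailmentNode:
    statement: str
    premises: list["EntailmentNode"] = field(default_factory=list)

    def leaves(self) -> list[str]:
        """Return the supporting premises that ground this claim."""
        if not self.premises:
            return [self.statement]
        return [leaf for p in self.premises for leaf in p.leaves()]

tree = EntailmentNode(
    "The northern hemisphere has longer days in summer.",
    [
        EntailmentNode("The northern hemisphere is tilted towards the sun in summer."),
        EntailmentNode(
            "A hemisphere tilted towards the sun receives more hours of daylight.",
            [EntailmentNode("Earth's axis is tilted relative to its orbit.")],
        ),
    ],
)
print(tree.leaves())  # the premises grounding the root claim
```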
Natural language definitions possess a recursive, self-explanatory semantic structure that can support representation learning methods able to preserve explicit conceptual relations and constraints in the latent space. This paper presents a multi-relational… (see the graph sketch below)
External link: http://arxiv.org/abs/2305.07303
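A sketch of the kind of multi-relational structure that definitions induce: each defined term is linked to parts of its definition through semantic roles, giving (head, relation, tail) triples that a representation learning method can consume. The role labels and the tiny graph are illustrative assumptions, not the paper's data.

```python
# Multi-relational triples extracted from natural language definitions.
from collections import defaultdict

definitions = {
    # term: list of (role, phrase) pairs taken from its definition
    "lion": [("supertype", "large cat"), ("differentia-quality", "lives in Africa and India")],
    "cat": [("supertype", "carnivorous mammal"), ("differentia-quality", "has retractile claws")],
}

# Build the graph as (head, relation, tail) triples.
triples = [(term, role, phrase) for term, roles in definitions.items() for role, phrase in roles]

# Index by relation so a representation learner can treat each relation type separately.
by_relation = defaultdict(list)
for h, r, t in triples:
    by_relation[r].append((h, t))

print(by_relation["supertype"])
# [('lion', 'large cat'), ('cat', 'carnivorous mammal')]
```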
Disentangled latent spaces usually have better semantic separability and geometrical properties, which leads to better interpretability and more controllable data generation. While this has been well investigated in Computer Vision, in tasks such as… (see the objective sketch below)
External link: http://arxiv.org/abs/2305.01713
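As a concrete reference point for the disentanglement pressure usually applied to VAE latents, the sketch below shows the generic beta-weighted KL term of the ELBO for a diagonal Gaussian posterior (the standard beta-VAE objective); it is used here as an illustration and is not claimed to be the method studied in the paper.

```python
# Beta-VAE style objective: reconstruction + beta * KL(q(z|x) || N(0, I)).
import torch

def beta_vae_loss(recon_loss: torch.Tensor, mu: torch.Tensor,
                  logvar: torch.Tensor, beta: float = 4.0) -> torch.Tensor:
    """KL for a diagonal Gaussian posterior, weighted by beta and added to reconstruction."""
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=-1)
    return recon_loss + beta * kl.mean()

mu = torch.zeros(8, 16)      # posterior means for a batch of 8 codes
logvar = torch.zeros(8, 16)  # posterior log-variances
loss = beta_vae_loss(torch.tensor(1.25), mu, logvar, beta=4.0)
print(loss.item())  # 1.25 here, since this posterior already matches the prior
```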