Showing 1 - 10 of 774 results for search: '"Henderson, James P."'
Link prediction models can benefit from incorporating textual descriptions of entities and relations, enabling fully inductive learning and flexibility in dynamic graphs. We address the challenge of also capturing rich structured information about th…
External link:
http://arxiv.org/abs/2408.06778
The ever-growing volume of biomedical publications creates a critical need for efficient knowledge discovery. In this context, we introduce an open-source end-to-end framework designed to construct knowledge around specific diseases directly from raw…
External link:
http://arxiv.org/abs/2407.13492
Author:
Nagano, Yuta, Pyo, Andrew, Milighetti, Martina, Henderson, James, Shawe-Taylor, John, Chain, Benny, Tiffeau-Mayer, Andreas
Computational prediction of the interaction of T cell receptors (TCRs) and their ligands is a grand challenge in immunology. Despite advances in high-throughput assays, specificity-labelled TCR data remains sparse. In other domains, the pre-training…
External link:
http://arxiv.org/abs/2406.06397
A key challenge in molecular biology is to decipher the mapping of protein sequence to function. Performing this mapping requires identifying the sequence features most informative about function. Here, we quantify the amount of information (in…
External link:
http://arxiv.org/abs/2404.12565
Author:
Fehr, Fabio, Henderson, James
The current paradigm of large-scale pre-training and fine-tuning Transformer large language models has led to significant improvements across the board in natural language processing. However, such large models are susceptible to overfitting to thei…
External link:
http://arxiv.org/abs/2312.00662
The Lachesis protocol [lachesis2021] leverages a DAG of events to allow nodes to reach fast consensus on events. This work introduces DAG progress metrics to drive the nodes to emit new events more effectively. With these metrics, nodes can select e…
External link:
http://arxiv.org/abs/2311.02339
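The idea of ranking candidate parent events by how much of the DAG they make visible can be sketched as follows. This is a minimal illustration, not the paper's actual metric: the `reachable` ancestor count and the `pick_parents` helper are hypothetical stand-ins for the DAG progress metrics the abstract describes.

```python
def reachable(dag, event):
    """Count events visible from `event` by following parent edges.
    `dag` maps each event to its list of parent events."""
    seen, stack = set(), [event]
    while stack:
        e = stack.pop()
        if e in seen:
            continue
        seen.add(e)
        stack.extend(dag.get(e, []))
    return len(seen) - 1  # exclude the event itself

def pick_parents(dag, candidates, k=2):
    """Hypothetical progress metric: prefer candidate parents from
    which the most other events are reachable, so a newly emitted
    event referencing them advances consensus fastest."""
    return sorted(candidates, key=lambda e: reachable(dag, e), reverse=True)[:k]

# Toy DAG: "d" sees "b", "c", and "a"; "b" and "c" each see only "a".
dag = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
best = pick_parents(dag, ["b", "c", "d"], k=1)
```

A node emitting a new event would attach it to the `k` selected parents, here favoring "d" because it subsumes the most history.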
We argue that Transformers are essentially graph-to-graph models, with sequences just being a special case. Attention weights are functionally equivalent to graph edges. Our Graph-to-Graph Transformer architecture makes this ability explicit, by inpu…
External link:
http://arxiv.org/abs/2310.17936
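The claim that attention weights act as graph edges can be made concrete with a toy self-attention layer: the softmax-normalized score matrix is a weighted adjacency matrix over the tokens, and a 0/1 edge mask lets known graph structure constrain it. This is a generic sketch with made-up dimensions, not the paper's Graph-to-Graph Transformer implementation.

```python
import numpy as np

def attention_as_graph(x, w_q, w_k, mask=None):
    """Toy single-head self-attention over n token/node embeddings x.
    Returns an (n, n) matrix readable as a weighted adjacency matrix."""
    q, k = x @ w_q, x @ w_k
    scores = q @ k.T / np.sqrt(k.shape[1])
    if mask is not None:
        # Hard-constrain attention to the edges of a known input graph.
        scores = np.where(mask, scores, -1e9)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)
    return attn  # attn[i, j] ~ weight of edge i -> j

rng = np.random.default_rng(0)
n, d = 4, 8
x = rng.normal(size=(n, d))
adj = attention_as_graph(x, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
```

Each row of `adj` sums to 1, so every token distributes a unit of edge weight over its neighbors; masking rows to a given adjacency pattern is one way to "input" a graph into attention.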
Learned representations at the level of characters, sub-words, words and sentences have each contributed to advances in understanding different NLP tasks and linguistic phenomena. However, learning textual embeddings is costly, as they are tokenizati…
External link:
http://arxiv.org/abs/2310.17284
Document-level relation extraction typically relies on text-based encoders and hand-coded pooling heuristics to aggregate information learned by the encoder. In this paper, we leverage the intrinsic graph processing capabilities of the Transformer mo…
External link:
http://arxiv.org/abs/2308.14423
Author:
Mahabadi, Rabeeh Karimi, Ivison, Hamish, Tae, Jaesung, Henderson, James, Beltagy, Iz, Peters, Matthew E., Cohan, Arman
Diffusion models have emerged as a powerful paradigm for generation, obtaining strong performance in various continuous domains. However, applying continuous diffusion models to natural language remains challenging due to its discrete nature and the…
External link:
http://arxiv.org/abs/2305.08379