Showing 1 - 10 of 17 for the search: '"Pascual, Damián"'
Large pre-trained language models based on the transformer architecture have drastically changed the natural language processing (NLP) landscape. However, deploying these models for on-device applications on constrained devices such as smart watches is challenging. …
External link:
http://arxiv.org/abs/2202.04350
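The entry above concerns running large transformer models on constrained hardware. As a rough, generic illustration of one common shrinking technique (not necessarily the approach taken in the paper), the sketch below applies PyTorch dynamic int8 quantization to a small Hugging Face checkpoint; the model name "distilbert-base-uncased" and the on-disk size comparison are assumptions made only for this example.

# Generic illustration only: dynamic int8 quantization of a small transformer,
# one common way to reduce model size for on-device deployment. Assumes PyTorch
# and Hugging Face `transformers`; the checkpoint name is an arbitrary choice.
import os
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("distilbert-base-uncased")

# Quantize all nn.Linear layers to int8; activations are quantized dynamically.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def disk_size_mb(m, path):
    # Serialize the state dict and report its size on disk in megabytes.
    torch.save(m.state_dict(), path)
    return os.path.getsize(path) / 1e6

print(f"fp32 checkpoint: {disk_size_mb(model, 'fp32.pt'):.1f} MB")
print(f"int8 checkpoint: {disk_size_mb(quantized, 'int8.pt'):.1f} MB")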
Author:
Kastrati, Ard, Płomecka, Martyna Beata, Pascual, Damián, Wolf, Lukas, Gillioz, Victor, Wattenhofer, Roger, Langer, Nicolas
We present a new dataset and benchmark with the goal of advancing research at the intersection of brain activities and eye movements. Our dataset, EEGEyeNet, consists of simultaneous Electroencephalography (EEG) and Eye-tracking (ET) recordings from …
External link:
http://arxiv.org/abs/2111.05100
Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic: the embeddings are distributed in a narrow cone. Meanwhile, static word representations (e.g., Word2Vec) …
External link:
http://arxiv.org/abs/2109.13304
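The entry above reports that contextual embeddings occupy a narrow cone. One quick way to see what "anisotropic" means in practice (a minimal sketch, not the paper's measurement) is to compute the average pairwise cosine similarity of token embeddings; the "bert-base-uncased" checkpoint and the example sentences are assumptions, and values close to 1 indicate a narrow cone.

# Minimal sketch (not the paper's method): estimate anisotropy of contextual
# embeddings as the average pairwise cosine similarity across token vectors.
# Assumes Hugging Face `transformers`; values close to 1 indicate a narrow cone.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = [
    "The cat sat on the mat.",
    "Transformers changed natural language processing.",
    "Stock prices fell sharply on Monday.",
]

with torch.no_grad():
    vecs = []
    for s in sentences:
        inputs = tokenizer(s, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
        vecs.append(hidden)
    emb = torch.cat(vecs, dim=0)                       # all token embeddings

# Average cosine similarity between all pairs of token embeddings.
normed = torch.nn.functional.normalize(emb, dim=-1)
sim = normed @ normed.T
n = sim.shape[0]
avg_cos = ((sim.sum() - n) / (n * (n - 1))).item()     # exclude self-similarity
print(f"average pairwise cosine similarity: {avg_cos:.3f}")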
Large pre-trained language models have repeatedly shown their ability to produce fluent text. Yet even when starting from a prompt, generation can continue in many plausible directions. Current decoding methods with the goal of controlling generation …
External link:
http://arxiv.org/abs/2109.09707
Deep Neural Networks have taken Natural Language Processing by storm. While this has led to incredible improvements across many tasks, it also initiated a new research field, questioning the robustness of these neural networks by attacking them. In this …
External link:
http://arxiv.org/abs/2109.07403
Automatic ICD coding is the task of assigning codes from the International Classification of Diseases (ICD) to medical notes. These codes describe the state of the patient and have multiple applications, e.g., computer-assisted diagnosis or epidemiological studies. …
External link:
http://arxiv.org/abs/2104.06709
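The entry above frames automatic ICD coding as assigning a set of codes to each medical note, i.e., multi-label text classification. The toy sketch below shows only that framing with scikit-learn; the notes and the two codes (I10: hypertension, E11: type 2 diabetes) are invented for illustration and are unrelated to the paper's data or model.

# Toy sketch only: ICD coding framed as multi-label text classification.
# Assumes scikit-learn; notes and codes below are illustrative examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

notes = [
    "Patient presents with elevated blood pressure, started on lisinopril.",
    "Poorly controlled blood sugar, metformin dose increased.",
    "High blood pressure and type 2 diabetes, both medications adjusted.",
]
codes = [["I10"], ["E11"], ["I10", "E11"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(codes)                     # one binary column per ICD code

clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
clf.fit(notes, y)

pred = clf.predict(["Blood pressure remains high despite treatment."])
print(mlb.inverse_transform(pred))               # list of predicted code tuples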
In this work, we provide new insights into the transformer architecture and, in particular, its best-known variant, BERT. First, we propose a method to measure the degree of non-linearity of different elements of transformers. Next, we focus our investigation …
External link:
http://arxiv.org/abs/2101.04547
Large pre-trained language models are capable of generating realistic text. However, controlling these models so that the generated text satisfies lexical constraints, i.e., contains specific words, is a challenging problem. Given that state-of-the-art …
External link:
http://arxiv.org/abs/2012.15416
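The entry above is about steering generation so that specific words appear in the output. As a crude illustration of the general idea (emphatically not the paper's method), the sketch below biases greedy decoding toward one constraint token by adding a bonus to its logit; the "gpt2" checkpoint, the prompt, and the bonus value are arbitrary choices for the example.

# Crude illustration (not the paper's algorithm): bias greedy decoding toward a
# lexical constraint by boosting the logit of the constraint token until it
# appears. Assumes Hugging Face `transformers` and the generic "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The weather today is"
constraint = " sunny"                              # word that must appear
constraint_id = tokenizer.encode(constraint)[0]    # (first) token id of the constraint

ids = tokenizer.encode(prompt, return_tensors="pt")
satisfied = False
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    if not satisfied:
        logits[constraint_id] += 5.0               # crude bonus toward the constraint
    next_id = int(torch.argmax(logits))
    satisfied = satisfied or next_id == constraint_id
    ids = torch.cat([ids, torch.tensor([[next_id]])], dim=1)

print(tokenizer.decode(ids[0]))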
Brain decoding, understood as the process of mapping brain activities to the stimuli that generated them, has been an active research area in recent years. In the case of language stimuli, recent studies have shown that it is possible to decode fMRI …
External link:
http://arxiv.org/abs/2009.04765
Author:
Faber, Lukas, Luck, Sandro, Pascual, Damian, Roth, Andreas, Brunner, Gino, Wattenhofer, Roger
The automatic generation of medleys, i.e., musical pieces formed by different songs concatenated via smooth transitions, is not well studied in the current literature. To facilitate research on this topic, we make available a dataset called Medley2K …
External link:
http://arxiv.org/abs/2008.11159