Showing 1 - 10 of 15
for search: '"Havrylov, Serhii"'
Advances in deep learning theory have revealed how average generalization relies on superficial patterns in data. The consequences are brittle models with poor performance with shift in group distribution at test time. When group annotation is available…
External link:
http://arxiv.org/abs/2210.12195
In NLP, a large volume of tasks involve pairwise comparison between two sequences (e.g. sentence similarity and paraphrase identification). Predominantly, two formulations are used for sentence-pair tasks: bi-encoders and cross-encoders. Bi-encoders…
External link:
http://arxiv.org/abs/2109.13059
Author:
Havrylov, Serhii, Titov, Ivan
Variational autoencoders (VAEs) are a standard framework for inducing latent variable models that have been shown effective in learning text representations as well as in text generation. The key challenge with using VAEs is the {\it posterior collapse}…
External link:
http://arxiv.org/abs/2004.14758
Since first introduced, computer simulation has been an increasingly important tool in evolutionary linguistics. Recently, with the development of deep learning techniques, research in grounded language learning has also started to focus on facilitating…
External link:
http://arxiv.org/abs/1910.05291
The goal of homomorphic encryption is to encrypt data such that another party can operate on it without being explicitly exposed to the content of the original data. We introduce an idea for a privacy-preserving transformation on natural language data…
External link:
http://arxiv.org/abs/1904.09585
There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, \citet{NangiaB18} has recently shown that the current best systems fail to learn the correct parsing strategy…
External link:
http://arxiv.org/abs/1902.09393
We introduce a method for embedding words as probability densities in a low-dimensional space. Rather than assuming that a word embedding is fixed across the entire text collection, as in standard word embedding methods, in our Bayesian model we generate…
External link:
http://arxiv.org/abs/1711.11027
Author:
Havrylov, Serhii, Titov, Ivan
Learning to communicate through interaction, rather than relying on explicit supervision, is often considered a prerequisite for developing a general AI. We study a setting where two agents engage in playing a referential game and, from scratch, develop…
External link:
http://arxiv.org/abs/1705.11192
In NLP, a large volume of tasks involve pairwise comparison between two sequences (e.g. sentence similarity and paraphrase identification). Predominantly, two formulations are used for sentence-pair tasks: bi-encoders and cross-encoders. Bi-encoders…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::f501436ba107e80143fe60a1daa8c323
We introduce a method for embedding words as probability densities in a low-dimensional space. Rather than assuming that a word embedding is fixed across the entire text collection, as in standard word embedding methods, in our Bayesian model we generate…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::cc924425a8e7ba4d68bdbd43128e23b8