Showing 1 - 10 of 2,385 for search: '"Klein, Richard A."'
A number of machine learning models have been proposed with the goal of achieving systematic generalization: the ability to reason about new situations by combining aspects of previous experiences. These models leverage compositional architectures…
External link:
http://arxiv.org/abs/2409.14981
Pretraining has been shown to improve performance in many domains, including semantic segmentation, especially in domains with limited labelled data. In this work, we perform a large-scale evaluation and benchmarking of various pretraining methods…
External link:
http://arxiv.org/abs/2402.17611
Author:
Torpey, David, Klein, Richard
Often, applications of self-supervised learning to 3D medical data opt to use 3D variants of successful 2D network architectures. Although promising approaches, they are significantly more computationally demanding to train, and thus reduce the…
External link:
http://arxiv.org/abs/2402.15598
Author:
Torpey, David, Klein, Richard
The standard approach to modern self-supervised learning is to generate random views through data augmentations and minimise a loss computed from the representations of these views. This inherently encourages invariance to the transformations that…
External link:
http://arxiv.org/abs/2402.09071
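For context, a minimal sketch of the standard recipe this abstract describes: two randomly augmented views per sample and a loss computed on their representations. This is not the paper's code; it assumes a PyTorch setup, and encoder, aug and nt_xent are illustrative names (nt_xent is the widely used NT-Xent / InfoNCE contrastive loss, one common instance of such a view-based loss).

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    # Contrastive loss between two batches of view representations, each of shape (N, d).
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # (2N, d), unit-norm rows
    sim = z @ z.t() / temperature                             # pairwise cosine similarities
    n = z1.size(0)
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))                 # exclude self-similarities
    # Positive pairs: row i (first view) matches row n + i (second view), and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage (hypothetical encoder and augmentation):
#   z1, z2 = encoder(aug(x)), encoder(aug(x))
#   loss = nt_xent(z1, z2)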
We use the POLARIS radiative transfer code to produce simulated circular polarization Zeeman emission maps of the CN $J = 1 - 0$ molecular line transition for two types of protostellar envelope magnetohydrodynamic simulations. Our first model is a…
External link:
http://arxiv.org/abs/2312.00884
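For context (the truncated abstract does not state the fitting procedure): in the weak-field limit usually assumed for such Zeeman maps, the circular-polarization (Stokes V) spectrum is proportional to the frequency derivative of Stokes I, $V(\nu) \approx \frac{Z\,B_\mathrm{los}}{2}\,\frac{dI(\nu)}{d\nu}$, where $Z$ is the Zeeman splitting factor of the transition and $B_\mathrm{los}$ the line-of-sight field strength; fitting this relation to the synthetic spectra is the standard way such emission maps are turned into field-strength estimates.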
While reinforcement learning has achieved remarkable successes in several domains, its real-world application is limited due to many methods failing to generalise to unfamiliar conditions. In this work, we consider the problem of generalising to new…
External link:
http://arxiv.org/abs/2310.16686
Author:
Torpey, David, Klein, Richard
It is known that representations from self-supervised pre-training can perform on par, and often better, on various downstream tasks than representations from fully-supervised pre-training. This has been shown in a host of settings such as generic…
External link:
http://arxiv.org/abs/2208.00787
We present the stability analysis of two regions, OMC-3 and OMC-4, in the massive and long molecular cloud complex of Orion A. We obtained $214~\mu$m HAWC+/SOFIA polarization data, and we make use of archival data for the column density and C$^{18}$O…
External link:
http://arxiv.org/abs/2206.00119
Author:
Chen, Che-Yu, Li, Zhi-Yun, Mazzei, Renato R., Park, Jinsoo, Fissel, Laura M., Chen, Michael C. -Y., Klein, Richard I., Li, Pak Shing
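For context (the truncated abstract does not state which estimator is used): stability analyses combining dust polarization, column density and molecular-line data commonly rely on the Davis-Chandrasekhar-Fermi estimate of the plane-of-sky field strength, $B_\mathrm{pos} \approx Q\,\sqrt{4\pi\rho}\,\frac{\sigma_v}{\sigma_\theta}$, where $\rho$ is the gas density, $\sigma_v$ the velocity dispersion (e.g. from C$^{18}$O), $\sigma_\theta$ the dispersion of polarization angles, and $Q \approx 0.5$ a correction factor; whether this particular estimator is the one used in the paper is not confirmed by the snippet.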
Despite the rich observational results on interstellar magnetic fields in star-forming regions, it is still unclear how dynamically significant the magnetic fields are at varying physical scales, because direct measurement of the field strength is…
External link:
http://arxiv.org/abs/2205.09134
In this work, we investigate the properties of data that cause popular representation learning approaches to fail. In particular, we find that in environments where states do not significantly overlap, variational autoencoders (VAEs) fail to learn…
External link:
http://arxiv.org/abs/2205.06000
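For context, a minimal sketch of the standard VAE objective the abstract refers to (reconstruction error plus a KL term pulling the approximate posterior toward a unit Gaussian prior). This is not the paper's setup; it assumes a PyTorch implementation and the function names are illustrative.

import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps the sampling step differentiable w.r.t. mu and logvar.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    # Negative ELBO: reconstruction term plus KL(q(z|x) || N(0, I)).
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl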