Showing 1 - 10 of 24 for search: '"Ericsson, Linus"'
Author:
Ericsson, Linus, Espinosa, Miguel, Yang, Chenhongyi, Antoniou, Antreas, Storkey, Amos, Cohen, Shay B., McDonagh, Steven, Crowley, Elliot J.
Neural architecture search (NAS) finds high-performing networks for a given task. Yet the results of NAS are fairly prosaic; they did not, e.g., create a shift from convolutional structures to transformers. This is not least because the search spaces …
External link:
http://arxiv.org/abs/2405.20838
In continual learning (CL) -- where a learner trains on a stream of data -- standard hyperparameter optimisation (HPO) cannot be applied, as a learner does not have access to all of the data at the same time. This has prompted the development of CL-s…
External link:
http://arxiv.org/abs/2404.06466
Author:
Yang, Chenhongyi, Chen, Zehui, Espinosa, Miguel, Ericsson, Linus, Wang, Zhenyu, Liu, Jiaming, Crowley, Elliot J.
We present PlainMamba: a simple non-hierarchical state space model (SSM) designed for general visual recognition. The recent Mamba model has shown how SSMs can be highly competitive with other architectures on sequential data, and initial attempts have …
External link:
http://arxiv.org/abs/2403.17695
Author:
Eastwood, Cian, von Kügelgen, Julius, Ericsson, Linus, Bouchacourt, Diane, Vincent, Pascal, Schölkopf, Bernhard, Ibrahim, Mark
Self-supervised representation learning often uses data augmentations to induce some invariance to "style" attributes of the data. However, with downstream tasks generally unknown at training time, it is difficult to deduce a priori which attributes …
External link:
http://arxiv.org/abs/2311.08815
Distribution shifts are all too common in real-world applications of machine learning. Domain adaptation (DA) aims to address this by providing various frameworks for adapting models to the deployment data without using labels. However, the domain sh…
External link:
http://arxiv.org/abs/2309.03879
Foundation models have significantly advanced medical image analysis through the pre-train fine-tune paradigm. Among various fine-tuning algorithms, Parameter-Efficient Fine-Tuning (PEFT) is increasingly utilized for knowledge transfer across diverse …
External link:
http://arxiv.org/abs/2305.08252
Self-supervised pre-training, based on the pretext task of instance discrimination, has fueled the recent advance in label-efficient object detection. However, existing studies focus on pre-training only a feature extractor network to learn transferable …
External link:
http://arxiv.org/abs/2211.09022
Self-supervised learning is a powerful paradigm for representation learning on unlabelled images. A wealth of effective new methods based on instance matching rely on data augmentation to drive learning, and these have reached a rough agreement on an …
External link:
http://arxiv.org/abs/2111.11398
Self-supervised representation learning methods aim to provide powerful deep feature learning without the requirement of large annotated datasets, thus alleviating the annotation bottleneck that is one of the main barriers to practical deployment of …
External link:
http://arxiv.org/abs/2110.09327
Published in:
Neurocomputing, vol. 577, 7 April 2024