Showing 1 - 10 of 6,933 for search: '"A. Bethge"'
Humans excel at detecting and segmenting moving objects according to the Gestalt principle of "common fate". Remarkably, previous works have shown that human perception generalizes this principle in a zero-shot fashion to unseen textures or random do…
External link:
http://arxiv.org/abs/2411.01505
Authors:
Binz, Marcel, Akata, Elif, Bethge, Matthias, Brändle, Franziska, Callaway, Fred, Coda-Forno, Julian, Dayan, Peter, Demircan, Can, Eckstein, Maria K., Éltető, Noémi, Griffiths, Thomas L., Haridi, Susanne, Jagadish, Akshay K., Ji-An, Li, Kipnis, Alexander, Kumar, Sreejan, Ludwig, Tobias, Mathony, Marvin, Mattar, Marcelo, Modirshanechi, Alireza, Nath, Surabhi S., Peterson, Joshua C., Rmus, Milena, Russek, Evan M., Saanum, Tankred, Scharfenberg, Natalia, Schubert, Johannes A., Buschoff, Luca M. Schulze, Singhi, Nishad, Sui, Xin, Thalmann, Mirko, Theis, Fabian, Truong, Vuong, Udandarao, Vishaal, Voudouris, Konstantinos, Wilson, Robert, Witte, Kristin, Wu, Shuchen, Wulff, Dirk, Xiong, Huadong, Schulz, Eric
Establishing a unified theory of cognition has been a major goal of psychology. While there have been previous attempts to instantiate such theories by building computational models, we currently do not have one model that captures the human mind in…
External link:
http://arxiv.org/abs/2410.20268
Authors:
Mayilvahanan, Prasanna, Zimmermann, Roland S., Wiedemer, Thaddäus, Rusak, Evgenia, Juhos, Attila, Bethge, Matthias, Brendel, Wieland
Out-of-Domain (OOD) generalization is the ability of a model trained on one or more domains to generalize to unseen domains. In the ImageNet era of computer vision, evaluation sets for measuring a model's OOD performance were designed to be strictly…
External link:
http://arxiv.org/abs/2410.08258
Authors:
Öncel, Fırat, Bethge, Matthias, Ermis, Beyza, Ravanelli, Mirco, Subakan, Cem, Yıldız, Çağatay
In the last decade, the generalization and adaptation abilities of deep learning models were typically evaluated on fixed training and test distributions. Contrary to traditional deep learning, large language models (LLMs) are (i) even more overparam…
External link:
http://arxiv.org/abs/2410.05581
Authors:
Roth, Karsten, Udandarao, Vishaal, Dziadzio, Sebastian, Prabhu, Ameya, Cherti, Mehdi, Vinyals, Oriol, Hénaff, Olivier, Albanie, Samuel, Bethge, Matthias, Akata, Zeynep
Multimodal foundation models serve numerous applications at the intersection of vision and language. Still, despite being pretrained on extensive data, they become outdated over time. To keep models updated, research into continual pretraining mainly…
External link:
http://arxiv.org/abs/2408.14471
Authors:
Li, Hao, Alemán, Tanausú del Pino, Bueno, Javier Trujillo, Ishikawa, Ryohko, Ballester, Ernest Alsina, McKenzie, David E., Belluzzi, Luca, Song, Donguk, Okamoto, Takenori J., Kobayashi, Ken, Rachmeler, Laurel A., Bethge, Christian, Auchère, Frédéric
We apply the HanleRT Tenerife Inversion Code to the spectro-polarimetric observations obtained by the Chromospheric LAyer SpectroPolarimeter. This suborbital space experiment measured the variation with wavelength of the four Stokes parameters in the…
External link:
http://arxiv.org/abs/2408.06094
Authors:
Press, Ori, Hochlehnert, Andreas, Prabhu, Ameya, Udandarao, Vishaal, Press, Ofir, Bethge, Matthias
Thousands of new scientific papers are published each month. Such information overload complicates researchers' efforts to stay current with the state of the art and to verify and correctly attribute claims. We pose the following research questi…
External link:
http://arxiv.org/abs/2407.12861
With the advent and recent ubiquity of foundation models, continual learning (CL) has recently shifted from continual training from scratch to the continual adaptation of pretrained models, seeing particular success on rehearsal-free CL benchmarks (R…
External link:
http://arxiv.org/abs/2406.09384
This work aims to improve the generalization and interpretability of dynamical systems by recovering the underlying lower-dimensional latent states and their time evolutions. Previous work on disentangled representation learning within the realm of dynam…
External link:
http://arxiv.org/abs/2406.03337
Entropy minimization (EM) is frequently used to increase the accuracy of classification models when they are faced with new data at test time. EM is a self-supervised learning method that optimizes classifiers to assign even higher probabilities to th…
External link:
http://arxiv.org/abs/2405.05012
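
For context, the test-time entropy minimization objective described in the last abstract fits in a few lines. The sketch below is a generic PyTorch illustration under assumed names and hyperparameters, not code from the paper; in practice, methods in this family (e.g. TENT) typically update only a small parameter subset such as normalization layers.

import torch
import torch.nn.functional as F

def entropy_minimization_step(model, x, optimizer):
    """One EM adaptation step on an unlabeled test batch x (illustrative sketch)."""
    logits = model(x)                                   # (batch, num_classes)
    probs = F.softmax(logits, dim=1)
    # Shannon entropy of each predictive distribution, averaged over the batch;
    # minimizing it pushes probability mass onto the model's top predictions.
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()

Calling entropy_minimization_step on successive unlabeled test batches adapts the classifier online, without any labels, which is what makes the objective self-supervised.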