Showing 1 - 10 of 500
for search: '"Rath, Matthias"'
We address the problem of improving the performance and in particular the sample complexity of deep neural networks by enforcing and guaranteeing invariances to symmetry transformations rather than learning them from data. Group-equivariant convolutions …
External link:
http://arxiv.org/abs/2303.01567
Author:
Marci-Boehncke, Gudrun, Rath, Matthias
Published in:
Medien & Erziehung. Oct2024, Vol. 68 Issue 5, p53-61. 9p.
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks. This makes them applicable to practically important use-cases where training data is scarce. Rather …
External link:
http://arxiv.org/abs/2202.03967
Author:
Rath, Matthias Franz (matthias.rath@tugraz.at), Birgel, Christof, Buchroithner, Armin, Schweighofer, Bernhard, Wegleiter, Hannes
Published in:
Sensors (14248220). Jul2024, Vol. 24 Issue 13, p4292. 20p.
Author:
Zaninetti, Carlo, Rivera, Jose’, Vater, Leonard, Ohlenforst, Sandra, Leinøe, Eva, Böckelmann, Doris, Freson, Kathleen, Thiele, Thomas, Makhloufi, Houssain, Rath, Matthias, Eberl, Wolfgang, Wolff, Martina, Freyer, Carmen, Wesche, Jan, Zieger, Barbara, Felbor, Ute, Heidel, Florian H., Greinacher, Andreas
Published in:
In Journal of Thrombosis and Haemostasis April 2024 22(4):1179-1186
Deep Neural Networks achieve state-of-the-art results in many different problem settings by exploiting vast amounts of training data. However, collecting, storing and - in the case of supervised learning - labelling the data is expensive and time-consuming. …
External link:
http://arxiv.org/abs/2006.16867
In this contribution, we show how to incorporate prior knowledge into a deep neural network architecture in a principled manner. We enforce feature space invariances using a novel layer based on invariant integration. This allows us to construct a comp…
External link:
http://arxiv.org/abs/2004.09166