Showing 1 - 10 of 111 for search: '"Gepperth, Alexander"'
Reinforcement learning is commonly concerned with problems of maximizing accumulated rewards in Markov decision processes. Oftentimes, a certain goal state or a subset of the state space attains maximal reward. In such a case, the environment may be c…
External link:
http://arxiv.org/abs/2405.18118
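The setting the abstract sketches (a goal state carrying maximal reward in an MDP) can be illustrated with a minimal value-iteration sketch. The chain environment, its size, and all parameters below are illustrative assumptions, not taken from the paper:

```python
# Value iteration on a toy 5-state chain MDP where only the rightmost
# (goal) state yields reward; actions: 0 = move left, 1 = move right.
N_STATES, GOAL, GAMMA = 5, 4, 0.9

def step(s, a):
    """Deterministic transition; reward 1.0 only when landing on the goal."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0)

V = [0.0] * N_STATES
for _ in range(100):  # iterate the Bellman optimality update to convergence
    V = [max(r + GAMMA * V[s2] for s2, r in (step(s, 0), step(s, 1)))
         for s in range(N_STATES)]

print([round(v, 3) for v in V])  # values increase toward the goal state
```

States closer to the goal end up with higher value, which is exactly the structure (reward concentrated in a goal region) that the abstract refers to.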
Author:
Verwimp, Eli, Aljundi, Rahaf, Ben-David, Shai, Bethge, Matthias, Cossu, Andrea, Gepperth, Alexander, Hayes, Tyler L., Hüllermeier, Eyke, Kanan, Christopher, Kudithipudi, Dhireesha, Lampert, Christoph H., Mundt, Martin, Pascanu, Razvan, Popescu, Adrian, Tolias, Andreas S., van de Weijer, Joost, Liu, Bing, Lomonaco, Vincenzo, Tuytelaars, Tinne, van de Ven, Gido M.
Published in:
Transactions on Machine Learning Research (TMLR), 2024
Continual learning is a subfield of machine learning which aims to allow models to learn continuously from new data, accumulating knowledge without forgetting what was learned in the past. In this work, we take a step back and ask…
External link:
http://arxiv.org/abs/2311.11908
Author:
Gepperth, Alexander
Gaussian Mixture Models (GMMs) are a standard tool in data analysis. However, they face problems when applied to high-dimensional data (e.g., images) due to the size of the required full covariance matrices (CMs), whereas the use of diagonal or spher…
External link:
http://arxiv.org/abs/2308.13778
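The dimensionality problem this abstract alludes to is easy to quantify: a full covariance matrix for d-dimensional data has d(d+1)/2 free parameters per mixture component, versus only d for a diagonal one. A quick back-of-the-envelope check (the image size below is an illustrative assumption):

```python
# Per-component covariance parameter counts in a GMM:
# a full CM is a symmetric d x d matrix -> d*(d+1)/2 free parameters,
# a diagonal CM stores only the d per-dimension variances.
def cov_params(d, kind="full"):
    return d * (d + 1) // 2 if kind == "full" else d

d = 32 * 32 * 3  # e.g. a small 32x32 RGB image flattened to d = 3072
print(cov_params(d, "full"))      # 4,720,128 parameters per component
print(cov_params(d, "diagonal"))  # 3,072 parameters per component
```

Even for such a small image, a full covariance matrix per component is three orders of magnitude larger than a diagonal one, which motivates the trade-off the abstract mentions.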
Conventional replay-based approaches to continual learning (CL) require, for each learning phase with new data, the replay of samples representing all of the previously learned knowledge in order to avoid catastrophic forgetting. Since the amount of…
External link:
http://arxiv.org/abs/2303.13157
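As a rough illustration of the replay idea described here (the buffer class, its size, and the toy "tasks" are assumptions for this sketch, not the paper's method): each training phase mixes new samples with stored samples from earlier phases.

```python
import random

class ReplayBuffer:
    """Fixed-size buffer of past samples, filled by reservoir sampling."""
    def __init__(self, capacity, seed=0):
        self.capacity, self.samples, self.seen = capacity, [], 0
        self.rng = random.Random(seed)

    def add(self, x):
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(x)
        elif self.rng.random() < self.capacity / self.seen:
            # replace a random stored sample so every seen sample has
            # equal probability capacity/seen of being retained
            self.samples[self.rng.randrange(self.capacity)] = x

    def replay_batch(self, k):
        return self.rng.sample(self.samples, min(k, len(self.samples)))

buf = ReplayBuffer(capacity=100)
for task in range(3):                       # three sequential "tasks"
    for x in [(task, i) for i in range(500)]:
        old = buf.replay_batch(4)           # a real learner would train on
        buf.add(x)                          # [x] + old in the same step
print(len(buf.samples))  # 100
```

The buffer stays at a fixed size no matter how many phases pass, which is the storage pressure the abstract points at: the knowledge to be represented keeps growing while the buffer does not.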
This study proposes a framework for the automated hyperparameter optimization of a bearing fault detection pipeline for permanent magnet synchronous motors (PMSMs) without the need for external sensors. An automated machine learning (AutoML) pipeline s…
External link:
http://arxiv.org/abs/2303.08858
Continual Learning (CL, sometimes also termed incremental learning) is a flavor of machine learning where the usual assumption of a stationary data distribution is relaxed or omitted. When naively applying, e.g., DNNs to CL problems, changes in the dat…
External link:
http://arxiv.org/abs/2208.14307
Author:
Bagus, Benedikt, Gepperth, Alexander
We present an empirical study on the use of continual learning (CL) methods in a reinforcement learning (RL) scenario, which, to the best of our knowledge, has not been described before. CL is a very active recent research topic concerned with machin…
External link:
http://arxiv.org/abs/2206.03934
Author:
Gepperth, Alexander
We present the Deep Convolutional Gaussian Mixture Model (DCGMM), a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference. DCGMM instances exhibit a CNN-like layered structure, in which the prin…
External link:
http://arxiv.org/abs/2203.11034
Author:
Bagus, Benedikt, Gepperth, Alexander
Continual learning (CL) is a major challenge of machine learning (ML) and describes the ability to learn several tasks sequentially without catastrophic forgetting (CF). Recent works indicate that CL is a complex topic, even more so when real-world s…
External link:
http://arxiv.org/abs/2108.06758
We present an approach for continual learning (CL) that is based on fully probabilistic (or generative) models of machine learning. In contrast to, e.g., GANs that are "generative" in the sense that they can generate samples, fully probabilistic mode…
External link:
http://arxiv.org/abs/2104.09240