Showing 1 - 10 of 35 for search: '"Wawrzynski, Paweł"'
Author:
Krukowski, Patryk, Bielawska, Anna, Książek, Kamil, Wawrzyński, Paweł, Batorski, Paweł, Spurek, Przemysław
Recently, a new Continual Learning (CL) paradigm was presented to control catastrophic forgetting, called Interval Continual Learning (InterContiNet), which relies on enforcing interval constraints on the neural network parameter space. Unfortunately…
External link:
http://arxiv.org/abs/2405.15444
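A minimal sketch of what enforcing interval constraints on the parameter space could look like in PyTorch; the per-weight bounds `lower`/`upper` and the projection step are illustrative assumptions, not the InterContiNet API:

```python
import torch

def clamp_to_intervals(model, lower, upper):
    """Project each parameter back into its per-weight interval [lower, upper].

    `lower` and `upper` are hypothetical dicts mapping parameter names to
    tensors of the same shape as the parameters.
    """
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in lower:
                param.clamp_(min=lower[name], max=upper[name])

# Hypothetical use inside a training loop:
# loss.backward(); optimizer.step(); clamp_to_intervals(model, lower, upper)
```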
Graph embeddings have emerged as a powerful tool for representing complex network structures in a low-dimensional space, enabling the use of efficient methods that employ the metric structure in the embedding space as a proxy for the topological structure…
External link:
http://arxiv.org/abs/2404.10784
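To make the "metric structure as a proxy for topology" idea concrete, here is a small sketch using a generic spectral embedding (an illustrative choice, not the method of the paper above): Euclidean distance between node embeddings stands in for shortest-path distance.

```python
import networkx as nx
import numpy as np
from sklearn.manifold import SpectralEmbedding

# Toy graph and a simple 2-D spectral embedding of its adjacency matrix.
G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
emb = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(A)

# Distance in the embedding space as a cheap proxy for topological distance.
u, v = 0, 33
proxy = np.linalg.norm(emb[u] - emb[v])
topo = nx.shortest_path_length(G, u, v)
print(f"embedding distance {proxy:.3f} vs shortest-path distance {topo}")
```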
Author:
Łyskawa, Jakub, Wawrzyński, Paweł
Reinforcement learning (RL) methods work in discrete time. In order to apply RL to inherently continuous problems like robotic control, a specific time discretization needs to be defined. This is a choice between sparse time control, which may be easier…
External link:
http://arxiv.org/abs/2308.04299
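The discretization choice mentioned above is often made explicit with an action-repeat (frame-skip) wrapper on top of a fine-grained simulator. Below is a generic sketch assuming a Gymnasium-style environment; it is not the paper's method, just the knob the paper reasons about:

```python
import gymnasium as gym

class ActionRepeat(gym.Wrapper):
    """Apply each agent action for `repeat` consecutive simulator steps,
    turning a dense simulator time step into a sparser control time step."""

    def __init__(self, env, repeat=4):
        super().__init__(env)
        self.repeat = repeat

    def step(self, action):
        total_reward = 0.0
        for _ in range(self.repeat):
            obs, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward
            if terminated or truncated:
                break
        return obs, total_reward, terminated, truncated, info

# env = ActionRepeat(gym.make("Pendulum-v1"), repeat=4)
```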
Author:
Lepak, Łukasz, Wawrzyński, Paweł
An increasing share of energy is produced from renewable sources by many small producers. The efficiency of those sources is volatile and, to some extent, random, exacerbating the problem of energy market balancing. In many countries, this balancing…
External link:
http://arxiv.org/abs/2303.16266
Author:
Bortkiewicz, Michał, Łyskawa, Jakub, Wawrzyński, Paweł, Ostaszewski, Mateusz, Grudkowski, Artur, Trzciński, Tomasz
Hierarchical decomposition of control is unavoidable in large dynamical systems. In reinforcement learning (RL), it is usually solved with subgoals defined at higher policy levels and achieved at lower policy levels. Reaching these goals can take a s…
External link:
http://arxiv.org/abs/2211.06351
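A generic subgoal-based control loop, to illustrate the higher/lower policy-level split described above; `high_policy` and `low_policy` are hypothetical callables and the fixed subgoal period is an assumption, not the paper's algorithm:

```python
def hierarchical_rollout(env, high_policy, low_policy, subgoal_every=10, max_steps=200):
    """Run one episode of subgoal-based hierarchical control.

    high_policy(state) -> subgoal      picks a goal every `subgoal_every` steps;
    low_policy(state, subgoal) -> action  tries to reach it in between.
    Assumes a Gymnasium-style environment interface.
    """
    state, _ = env.reset()
    subgoal = high_policy(state)
    total_reward = 0.0
    for t in range(max_steps):
        if t % subgoal_every == 0:
            subgoal = high_policy(state)
        action = low_policy(state, subgoal)
        state, reward, terminated, truncated, _ = env.step(action)
        total_reward += reward
        if terminated or truncated:
            break
    return total_reward
```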
Effective reinforcement learning requires a proper balance of exploration and exploitation defined by the dispersion of action distribution. However, this balance depends on the task, the current stage of the learning process, and the current environment…
External link:
http://arxiv.org/abs/2208.00156
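One common way to let the dispersion of the action distribution adapt is to make the policy output a state-dependent standard deviation. The sketch below shows that generic construction in PyTorch; it is not the specific method of the paper above:

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Gaussian policy whose dispersion (std) is produced by the network,
    so exploration can vary with the state and the stage of training."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.mean_head = nn.Linear(hidden, act_dim)
        self.log_std_head = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        h = self.body(obs)
        mean = self.mean_head(h)
        std = self.log_std_head(h).clamp(-5, 2).exp()  # keep dispersion in a sane range
        return torch.distributions.Normal(mean, std)

# dist = GaussianPolicy(3, 1)(torch.randn(1, 3)); action = dist.sample()
```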
Invertible transformation of large graphs into fixed dimensional vectors (embeddings) remains a challenge. Overcoming it would reduce any operation on graphs to an operation in a vector space. However, most existing methods are limited to graphs with…
External link:
http://arxiv.org/abs/2201.12165
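To show what "invertible embedding into a fixed-dimensional vector" means in the simplest possible terms, here is a trivial encoding that only works for graphs up to a fixed size; it is a toy illustration of the goal, not the paper's method, which targets large graphs compactly:

```python
import numpy as np
import networkx as nx

def graph_to_vector(G, max_nodes):
    """Pad the adjacency matrix to max_nodes x max_nodes and flatten it:
    a fixed-dimensional, exactly invertible (but very uncompressed) embedding."""
    A = nx.to_numpy_array(G)
    padded = np.zeros((max_nodes, max_nodes))
    padded[:A.shape[0], :A.shape[0]] = A
    return padded.ravel()

def vector_to_graph(vec, max_nodes):
    """Inverse of graph_to_vector."""
    return nx.from_numpy_array(vec.reshape(max_nodes, max_nodes))

G = nx.cycle_graph(5)
v = graph_to_vector(G, max_nodes=8)   # fixed 64-dimensional vector
G2 = vector_to_graph(v, max_nodes=8)  # recovers the original edges
```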
We introduce a neural network architecture that logarithmically reduces the number of self-rehearsal steps in the generative rehearsal of continually learned models. In continual learning (CL), training samples come in subsequent tasks, and the train…
External link:
http://arxiv.org/abs/2201.06534
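For context, a generic generative-rehearsal loop looks roughly like the sketch below: batches from the current task are mixed with samples replayed by a frozen copy of the previous generator. The `old_generator.sample` call is a hypothetical API, and this is the baseline scheme, not the logarithmic-reduction architecture of the paper:

```python
import copy
import torch
import torch.nn.functional as F

def train_task_with_generative_rehearsal(generator, classifier, task_loader, opt, n_replay=64):
    """Train on one task while rehearsing pseudo-samples of previous tasks."""
    old_generator = copy.deepcopy(generator).eval()  # frozen snapshot for replay
    for x, y in task_loader:
        with torch.no_grad():
            x_replay, y_replay = old_generator.sample(n_replay)  # hypothetical API
        x_all = torch.cat([x, x_replay])
        y_all = torch.cat([y, y_replay])
        loss = F.cross_entropy(classifier(x_all), y_all)
        opt.zero_grad()
        loss.backward()
        opt.step()
```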
We propose a new method for unsupervised generative continual learning through realignment of Variational Autoencoder's latent space. Deep generative models suffer from catastrophic forgetting in the same way as other neural structures. Recent generative…
External link:
http://arxiv.org/abs/2106.12196
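A minimal VAE sketch, only to make concrete the latent space that the method above realigns between tasks; the realignment step itself is not shown and everything here is a generic textbook construction, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Smallest possible VAE: linear encoder/decoder with reparameterization."""

    def __init__(self, x_dim=784, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization trick
        return self.dec(z), mu, log_var

def vae_loss(x, recon, mu, log_var):
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return rec + kl
```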
A number of problems in the processing of sound and natural language, as well as in other areas, can be reduced to simultaneously reading an input sequence and writing an output sequence of generally different length. There are well-developed methods…
External link:
http://arxiv.org/abs/2105.14097
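A standard baseline for the setting described above is an encoder-decoder that reads the whole input and then emits an output of a different length. The sketch below is that generic baseline in PyTorch, not the architecture proposed in the paper:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Generic GRU encoder-decoder: input and output sequences may differ in length."""

    def __init__(self, vocab_in, vocab_out, dim=64):
        super().__init__()
        self.emb_in = nn.Embedding(vocab_in, dim)
        self.emb_out = nn.Embedding(vocab_out, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_out)

    def forward(self, src, tgt):
        _, h = self.encoder(self.emb_in(src))      # read the whole input sequence
        y, _ = self.decoder(self.emb_out(tgt), h)  # write the output step by step
        return self.out(y)

# logits = Seq2Seq(100, 80)(torch.randint(0, 100, (2, 12)), torch.randint(0, 80, (2, 7)))
```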