Showing 1 - 10 of 141
for search: '"Bhat, Prashant"'
Continual learning (CL) remains a significant challenge for deep neural networks, as it is prone to forgetting previously acquired knowledge. Several approaches have been proposed in the literature, such as experience rehearsal, regularization, and p… (a minimal weight-regularization sketch follows this entry)
External link:
http://arxiv.org/abs/2405.13978
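The entry above names weight regularization as one family of CL approaches. Below is a minimal, illustrative sketch, assuming PyTorch; the model, penalty coefficient, and the consolidate/train_step helpers are hypothetical and not taken from the paper above. The idea shown is to snapshot the weights after a task and penalize later training for drifting away from them, a simplified cousin of methods such as EWC.

# Sketch of regularization-based continual learning (PyTorch assumed).
# After finishing a task, snapshot the parameters; while training on the
# next task, add an L2 penalty on drift away from that snapshot.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
reg_strength = 10.0      # hypothetical stability/plasticity trade-off
anchor = None            # snapshot of parameters after the previous task

def consolidate(model):
    """Store a frozen copy of the current parameters as the anchor."""
    return {n: p.detach().clone() for n, p in model.named_parameters()}

def train_step(x, y):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    if anchor is not None:   # drift penalty applies from the second task on
        penalty = sum(((p - anchor[n]) ** 2).sum()
                      for n, p in model.named_parameters())
        loss = loss + reg_strength * penalty
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: train on task 1, then set anchor = consolidate(model), then train on task 2.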
Continual learning (CL) remains one of the long-standing challenges for deep neural networks due to catastrophic forgetting of previously acquired knowledge. Although rehearsal-based approaches have been fairly successful in mitigating catastrophic f…
External link:
http://arxiv.org/abs/2404.18161
Continual learning (CL) has remained a persistent challenge for deep neural networks due to catastrophic forgetting (CF) of previously learned tasks. Several techniques such as weight regularization, experience rehearsal, and parameter isolation have…
External link:
http://arxiv.org/abs/2310.08217
Author:
Bhat, Prashant
The nucleus is spatially organized such that DNA, RNA, and protein molecules involved in shared functional and regulatory processes are compartmentalized in three-dimensional (3D) structures. These structures are emerging as a paradigm for gene regul…
The ability of deep neural networks to continually learn and adapt to a sequence of tasks has remained challenging due to catastrophic forgetting of previously learned tasks. Humans, on the other hand, have a remarkable ability to acquire, assimilate…
External link:
http://arxiv.org/abs/2305.04769
Intelligent systems deployed in the real world suffer from catastrophic forgetting when exposed to a sequence of tasks. Humans, on the other hand, acquire, consolidate, and transfer knowledge between tasks that rarely interfere with the consolidated…
External link:
http://arxiv.org/abs/2302.11346
Continual learning (CL) over non-stationary data streams remains one of the long-standing challenges in deep neural networks (DNNs) as they are prone to catastrophic forgetting. CL models can benefit from self-supervised pre-training as it enables le…
External link:
http://arxiv.org/abs/2207.06267
Deep neural networks struggle to continually learn multiple sequential tasks due to catastrophic forgetting of previously learned tasks. Rehearsal-based methods, which explicitly store previous task samples in a buffer and interleave them with the c… (a minimal rehearsal-buffer sketch follows this entry)
External link:
http://arxiv.org/abs/2207.04998
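The entry above describes rehearsal: keeping previous-task samples in a buffer and interleaving them with current-task data. A minimal sketch follows, assuming PyTorch; the ReservoirBuffer class, buffer capacity, and replay batch size are illustrative assumptions, not the paper's method.

# Sketch of experience rehearsal with a fixed-size reservoir-sampled buffer
# (PyTorch assumed). Replayed samples are added to the current-task loss.
import random
import torch
import torch.nn as nn

class ReservoirBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data, self.seen = [], 0

    def add(self, x, y):
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi, yi))
            else:                              # replace with decreasing probability
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi, yi)

    def sample(self, n):
        batch = random.sample(self.data, min(n, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

model = nn.Linear(784, 10)                     # x is assumed to be flattened features
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
buffer = ReservoirBuffer(capacity=500)

def train_step(x, y):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    if buffer.data:                            # interleave replayed samples
        bx, by = buffer.sample(32)
        loss = loss + criterion(model(bx), by)
    loss.backward()
    optimizer.step()
    buffer.add(x, y)
    return loss.item()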
Self-supervised learning solves pretext prediction tasks that do not require annotations to learn feature representations. For vision tasks, pretext tasks such as predicting rotation or solving a jigsaw are created solely from the input data. Yet, predic… (a minimal rotation-prediction sketch follows this entry)
External link:
http://arxiv.org/abs/2104.09866
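The entry above mentions pretext tasks such as rotation prediction, where the labels come from the input itself rather than from annotations. A minimal sketch follows, assuming PyTorch; the encoder, image size, and the make_rotation_batch helper are illustrative assumptions.

# Sketch of a rotation-prediction pretext task (PyTorch assumed).
# Each image is rotated by 0/90/180/270 degrees and the network is
# trained to predict which rotation was applied.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
rotation_head = nn.Linear(256, 4)              # 4 classes: 0, 90, 180, 270 degrees
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    list(encoder.parameters()) + list(rotation_head.parameters()), lr=0.01)

def make_rotation_batch(images):
    """Rotate each image by a random multiple of 90 degrees; return images and labels."""
    ks = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, ks)])
    return rotated, ks

def pretext_step(images):                      # images: (B, 3, 32, 32), no labels needed
    x, y = make_rotation_batch(images)
    optimizer.zero_grad()
    loss = criterion(rotation_head(encoder(x)), y)
    loss.backward()
    optimizer.step()
    return loss.item()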
Published in:
In Physica E: Low-dimensional Systems and Nanostructures May 2023 149