Showing 1 - 10 of 152 for search: '"Dovrolis, Constantine"'
Patch-Based Contrastive Learning and Memory Consolidation for Online Unsupervised Continual Learning
We focus on a relatively unexplored learning paradigm known as {\em Online Unsupervised Continual Learning} (O-UCL), where an agent receives a non-stationary, unlabeled data stream and progressively learns to identify an increasing number of classes.
External link:
http://arxiv.org/abs/2409.16391
Deep neural networks (DNNs) struggle to learn in dynamic environments since they rely on fixed datasets or stationary environments. Continual learning (CL) aims to address this limitation and enable DNNs to accumulate knowledge incrementally, similar …
External link:
http://arxiv.org/abs/2305.18563
Published in:
37th Conference on Neural Information Processing Systems (NeurIPS 2023)
Natural target functions and tasks typically exhibit hierarchical modularity -- they can be broken down into simpler sub-functions that are organized in a hierarchy. Such sub-functions have two important features: they have a distinct set of inputs ( …
External link:
http://arxiv.org/abs/2305.18402
Diffusion MRI and tractography algorithms have enabled the mapping of the macro-scale connectome of the entire brain. At the functional level, probably the simplest way to study the dynamics of macro-scale brain activity is to compute the "ac …
External link:
http://arxiv.org/abs/2207.07965
The goal of continual learning (CL) is to learn different tasks over time. The main desiderata associated with CL are to maintain performance on older tasks, leverage the latter to improve learning of future tasks, and to introduce minimal overhead i …
External link:
http://arxiv.org/abs/2206.09117
Methods that sparsify a network at initialization are important in practice because they greatly improve the efficiency of both learning and inference. Our work is based on a recently proposed decomposition of the Neural Tangent Kernel (NTK) that has …
External link:
http://arxiv.org/abs/2010.11354
Many complex systems, both in technology and nature, exhibit hierarchical modularity: smaller modules, each of them providing a certain function, are used within larger modules that perform more complex functions. Previously, we have proposed a model …
External link:
http://arxiv.org/abs/1906.02446
We first pose the Unsupervised Progressive Learning (UPL) problem: an online representation learning problem in which the learner observes a non-stationary and unlabeled data stream, learning a growing number of features that persist over time even t …
External link:
http://arxiv.org/abs/1904.02021
Author:
Dovrolis, Constantine
We propose that the Continual Learning desiderata can be achieved through a neuro-inspired architecture, grounded on Mountcastle's cortical column hypothesis. The proposed architecture involves a single module, called Self-Taught Associative Memory ( …
External link:
http://arxiv.org/abs/1810.09391
It is well known that many complex systems, both in technology and nature, exhibit hierarchical modularity: smaller modules, each of them providing a certain function, are used within larger modules that perform more complex functions. What is not we …
External link:
http://arxiv.org/abs/1805.04924