Showing 1 - 10 of 55 for search: '"Miconi, Thomas"'
Author:
Schmidgall, Samuel, Achterberg, Jascha, Miconi, Thomas, Kirsch, Louis, Ziaei, Rojin, Hajiseyedrazi, S. Pardis, Eshraghian, Jason
Artificial neural networks (ANNs) have emerged as an essential tool in machine learning, achieving remarkable success across diverse domains, including image and speech generation, game playing, and robotics. However, there exist fundamental differences…
External link:
http://arxiv.org/abs/2305.11252
Author:
Miconi, Thomas
Open-endedness stands to benefit from the ability to generate an infinite variety of diverse, challenging environments. One particularly interesting type of challenge is meta-learning ("learning-to-learn"), a hallmark of intelligent behavior. However…
External link:
http://arxiv.org/abs/2302.05583
Author:
Miconi, Thomas
Published in:
40th International Conference on Machine Learning (ICML 2023)
A hallmark of intelligence is the ability to autonomously learn new flexible, cognitive behaviors - that is, behaviors where the appropriate action depends not just on immediate stimuli (as in simple reflexive stimulus-response associations), but on…
External link:
http://arxiv.org/abs/2112.08588
Author:
Miconi, Thomas
Deep learning networks generally use non-biological learning methods. By contrast, networks based on more biologically plausible learning, such as Hebbian learning, show comparatively poor performance and difficulties of implementation. Here we show…
External link:
http://arxiv.org/abs/2107.01729
Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge. However, catastrophic forgetting poses a grand challenge for neural networks performing such learning process. Thus, neural…
External link:
http://arxiv.org/abs/2006.16558
Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity
Published in:
7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019
The impressive lifelong learning in animal brains is primarily enabled by plastic changes in synaptic connectivity. Importantly, these changes are not passive, but are actively controlled by neuromodulation, which is itself under the control of the brain…
External link:
http://arxiv.org/abs/2002.10585
Author:
Edwards, Ashley D., Sahni, Himanshu, Liu, Rosanne, Hung, Jane, Jain, Ankit, Wang, Rui, Ecoffet, Adrien, Miconi, Thomas, Isbell, Charles, Yosinski, Jason
In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter. In order to derive an optimal policy, we develop a…
External link:
http://arxiv.org/abs/2002.09505
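The snippet above defines $Q(s, s')$ as the value of moving from state $s$ to a neighboring state $s'$ and acting optimally afterward, which satisfies the recurrence $Q(s, s') = r(s, s') + \gamma \max_{s''} Q(s', s'')$. A minimal illustrative sketch of that recurrence, using tabular value iteration on a hypothetical 4-state chain (the toy environment, reward, and neighbor structure are assumptions for the example, not the paper's setup):

```python
# Hypothetical toy example of the Q(s, s') formulation: value iteration
# over state *pairs* on a 4-state chain, where entering terminal state 3
# yields reward 1. Not the paper's implementation, just the recurrence
# Q(s, s') = r(s, s') + gamma * max over s'' of Q(s', s'').

GAMMA = 0.9
# neighbors[s] = states reachable from s in one step (assumed dynamics)
neighbors = {0: [0, 1], 1: [0, 2], 2: [1, 3], 3: [3]}

def reward(s, s_next):
    return 1.0 if s_next == 3 and s != 3 else 0.0

# One table entry per valid (s, s') transition.
Q = {(s, sn): 0.0 for s in neighbors for sn in neighbors[s]}

for _ in range(100):  # sweep until convergence
    for (s, sn) in Q:
        best_next = max(Q[(sn, snn)] for snn in neighbors[sn])
        Q[(s, sn)] = reward(s, sn) + GAMMA * best_next

def act(s):
    # Greedy policy: pick the neighbor state maximizing Q(s, s').
    return max(neighbors[s], key=lambda sn: Q[(s, sn)])
```

Here the greedy policy never consults an action space: it scores candidate next states directly, which is the distinguishing feature of the $Q(s, s')$ formulation described in the abstract.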
Author:
Beaulieu, Shawn, Frati, Lapo, Miconi, Thomas, Lehman, Joel, Stanley, Kenneth O., Clune, Jeff, Cheney, Nick
Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it. Much work has gone towards preventing the default tendency of machine learning models…
External link:
http://arxiv.org/abs/2002.09571
Author:
Moskovitz, Ted, Wang, Rui, Lan, Janice, Kapoor, Sanyam, Miconi, Thomas, Yosinski, Jason, Rawal, Aditya
Standard gradient descent methods are susceptible to a range of issues that can impede training, such as high correlations and different scaling in parameter space. These difficulties can be addressed by second-order approaches that apply a pre-conditioner…
External link:
http://arxiv.org/abs/1910.08461
Published in:
Proceedings of the 35th International Conference on Machine Learning (ICML2018), Stockholm, Sweden, PMLR 80, 2018
How can we build agents that keep learning from experience, quickly and efficiently, after their initial training? Here we take inspiration from the main mechanism of learning in biological brains: synaptic plasticity, carefully tuned by evolution to…
External link:
http://arxiv.org/abs/1804.02464
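This last entry is the differentiable plasticity paper, in which each connection combines a fixed weight with a Hebbian trace scaled by a learned plasticity coefficient. A minimal sketch of the forward dynamics, assuming NumPy; in the paper the fixed weights and plasticity coefficients are trained by gradient descent, whereas here they are random placeholders and only the within-lifetime trace update is shown (sizes and the trace rate `eta` are arbitrary choices for the example):

```python
import numpy as np

# Illustrative sketch (not the paper's code) of a plastic layer:
# effective weight = w + alpha * hebb, where hebb is a decaying
# Hebbian trace updated from pre-/post-synaptic activity.
rng = np.random.default_rng(0)
n_in, n_out = 4, 3
w = rng.standard_normal((n_in, n_out)) * 0.1      # fixed component (learned in the paper)
alpha = rng.standard_normal((n_in, n_out)) * 0.1  # plasticity coefficients (learned in the paper)
hebb = np.zeros((n_in, n_out))                    # Hebbian trace, reset each "lifetime"
eta = 0.1                                         # trace update rate (assumed value)

def step(x, hebb):
    # Forward pass through the plastic connections.
    y = np.tanh(x @ (w + alpha * hebb))
    # Decaying Hebbian update: outer product of input and output activity.
    hebb = (1 - eta) * hebb + eta * np.outer(x, y)
    return y, hebb

for _ in range(5):  # a few steps of "lifetime" experience
    x = rng.standard_normal(n_in)
    y, hebb = step(x, hebb)
```

The trace lets the network store information in its connections during an episode, while gradient descent over many episodes shapes how plastic each connection is.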