Showing 1 - 10 of 13 for the search: '"Tyler L. Hayes"'
Published in:
PLoS ONE, Vol 15, Iss 9, p e0238302 (2020)
Supervised classification methods often assume the train and test data distributions are the same and that all classes in the test set are present in the training set. However, deployed classifiers often require the ability to recognize inputs from o…
External link:
https://doaj.org/article/fa4dec258c664f548d3bd6a3dde1e688
Author:
Terrence J. Sejnowski, Tyler L. Hayes, Hava T. Siegelmann, Christopher Kanan, Maxim Bazhenov, Giri P. Krishnan
Published in:
Neural Comput
Replay is the reactivation of one or more neural patterns, which are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a…
Author:
Simone Scardapane, Martin Mundt, Tyler L. Hayes, Simone Calderara, Keiland W. Cooper, Christopher Kanan, Eden Belouadah, Lorenzo Pellegrini, Adrian Popescu, Matthias De Lange, Fabio Cuzzolin, Jeremy Forest, Jary Pomponi, Subutai Ahmad, Qi She, Luca Antiga, Gido M. van de Ven, Davide Maltoni, Davide Bacciu, Vincenzo Lomonaco, Joost van de Weijer, Marc Masana, Antonio Carta, Gabriele Graffieti, Andreas S. Tolias, German Ignacio Parisi, Andrea Cossu, Tinne Tuytelaars
Published in:
CVPR Workshops
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning. Recently, we have witnessed a renewed and fast-growing interest in continual learning, especially within the deep learning co…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d293de3db110ff7791c5e28c4027ea01
http://hdl.handle.net/11573/1612489
Author:
Christopher Kanan, Tyler L. Hayes
Published in:
CVPR Workshops
In continual learning, a system learns from non-stationary data streams or batches without catastrophic forgetting. While this problem has been heavily studied in supervised image classification and reinforcement learning, continual learning in neura…
Published in:
Computer Vision – ECCV 2020 Workshops ISBN: 9783030664145
ECCV Workshops (1)
Supervised classification methods often assume that evaluation data is drawn from the same distribution as training data and that all classes are present for training. However, real-world classifiers must handle inputs that are far from the training…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::fa1234cdb3902a439725440f7ba58d27
https://doi.org/10.1007/978-3-030-66415-2_12
Published in:
Computer Vision – ECCV 2020 ISBN: 9783030585976
ECCV (8)
People learn throughout life. However, incrementally updating conventional neural networks leads to catastrophic forgetting. A common remedy is replay, which is inspired by how the brain consolidates memory. Replay involves fine-tuning a network on a…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::0c2c958a0f7c82158302e644ec997308
https://doi.org/10.1007/978-3-030-58598-3_28
Author:
Christopher Kanan, Tyler L. Hayes
Published in:
CVPR Workshops
When an agent acquires new information, ideally it would immediately be capable of using that information to understand its environment. This is not possible using conventional deep neural networks, which suffer from catastrophic forgetting when they…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::83d57a4c9221201c54dc99edaca24fbf
Published in:
ICRA
In supervised machine learning, an agent is typically trained once and then deployed. While this works well for static settings, robots often operate in changing environments and must quickly learn new things from data streams. In this paradigm, know…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d8cfba6f83222324c845a35fac170cb9