Showing 1 - 10 of 72 for search '"Chaudhry Arslan"'
Large language models (LLMs) are increasingly employed in information-seeking and decision-making tasks. Despite their broad utility, LLMs tend to generate information that conflicts with real-world facts, and their persuasive style can make these in…
External link:
http://arxiv.org/abs/2409.12180
Author:
Jurenka, Irina, Kunesch, Markus, McKee, Kevin R., Gillick, Daniel, Zhu, Shaojian, Wiltberger, Sara, Phal, Shubham Milind, Hermann, Katherine, Kasenberg, Daniel, Bhoopchand, Avishkar, Anand, Ankit, Pîslar, Miruna, Chan, Stephanie, Wang, Lisa, She, Jennifer, Mahmoudieh, Parsa, Rysbek, Aliya, Ko, Wei-Jen, Huber, Andrea, Wiltshire, Brett, Elidan, Gal, Rabin, Roni, Rubinovitz, Jasmin, Pitaru, Amit, McAllister, Mac, Wilkowski, Julia, Choi, David, Engelberg, Roee, Hackmon, Lidan, Levin, Adva, Griffin, Rachel, Sears, Michael, Bar, Filip, Mesar, Mia, Jabbour, Mana, Chaudhry, Arslan, Cohan, James, Thiagarajan, Sridhar, Levine, Nir, Brown, Ben, Gorur, Dilan, Grant, Svetlana, Hashimshoni, Rachel, Weidinger, Laura, Hu, Jieru, Chen, Dawn, Dolecki, Kuba, Akbulut, Canfer, Bileschi, Maxwell, Culp, Laura, Dong, Wen-Xin, Marchal, Nahema, Van Deman, Kelsie, Misra, Hema Bajaj, Duah, Michael, Ambar, Moran, Caciularu, Avi, Lefdal, Sandra, Summerfield, Chris, An, James, Kamienny, Pierre-Alexandre, Mohdi, Abhinit, Strinopoulous, Theofilos, Hale, Annie, Anderson, Wayne, Cobo, Luis C., Efron, Niv, Ananda, Muktha, Mohamed, Shakir, Heymans, Maureen, Ghahramani, Zoubin, Matias, Yossi, Gomes, Ben, Ibrahim, Lila
A major challenge facing the world is the provision of equitable and universal access to quality education. Recent advances in generative AI (gen AI) have created excitement about the potential of new technologies to offer a personal tutor for every learner…
External link:
http://arxiv.org/abs/2407.12687
Published in:
ICLR 2023
One of the main motivations of studying continual learning is that the problem setting allows a model to accrue knowledge from past tasks to learn new tasks more efficiently. However, recent studies suggest that the key metric that continual learning…
External link:
http://arxiv.org/abs/2303.08207
Author:
Bornschein, Jorg, Galashov, Alexandre, Hemsley, Ross, Rannen-Triki, Amal, Chen, Yutian, Chaudhry, Arslan, He, Xu Owen, Douillard, Arthur, Caccia, Massimo, Feng, Qixuang, Shen, Jiajun, Rebuffi, Sylvestre-Alvise, Stacpoole, Kitty, Casas, Diego de las, Hawkins, Will, Lazaridou, Angeliki, Teh, Yee Whye, Rusu, Andrei A., Pascanu, Razvan, Ranzato, Marc'Aurelio
A shared goal of several machine learning communities like continual learning, meta-learning and transfer learning, is to design algorithms and models that efficiently and robustly adapt to unseen tasks. An even more ambitious goal is to build models…
External link:
http://arxiv.org/abs/2211.11747
Author:
Chaudhry, Arslan, Menon, Aditya Krishna, Veit, Andreas, Jayasumana, Sadeep, Ramalingam, Srikumar, Kumar, Sanjiv
Published in:
NeurIPS 2022 (First Workshop on Interpolation and Beyond)
Mixup is a regularization technique that artificially produces new samples using convex combinations of original training points. This simple technique has shown strong empirical performance, and has been heavily used as part of semi-supervised learning… (a minimal sketch of the mixing step follows this entry's link).
External link:
http://arxiv.org/abs/2210.16413
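The snippet above defines mixup as a convex combination of training points. The following is a minimal sketch of that general idea only, not of this paper's specific method; the Beta(alpha, alpha) mixing weight and one-hot labels follow the common formulation, and the function name and alpha default are illustrative assumptions.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Produce one synthetic sample as a convex combination of two
    training points. y1, y2 are one-hot label vectors; lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2  # mixed input
    y = lam * y1 + (1.0 - lam) * y2  # mixed (soft) label
    return x, y

# Example: mix two 4-dimensional inputs with 3-class one-hot labels.
x_mix, y_mix = mixup(np.ones(4), np.array([1., 0., 0.]),
                     np.zeros(4), np.array([0., 1., 0.]))
```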
Author:
Mirzadeh, Seyed Iman, Chaudhry, Arslan, Yin, Dong, Nguyen, Timothy, Pascanu, Razvan, Gorur, Dilan, Farajtabar, Mehrdad
A large body of research in continual learning is devoted to overcoming the catastrophic forgetting of neural networks by designing new algorithms that are robust to the distribution shifts. However, the majority of these works are strictly focused on…
External link:
http://arxiv.org/abs/2202.00275
Author:
Mirzadeh, Seyed Iman, Chaudhry, Arslan, Yin, Dong, Hu, Huiyi, Pascanu, Razvan, Gorur, Dilan, Farajtabar, Mehrdad
A primary focus area in continual learning research is alleviating the "catastrophic forgetting" problem in neural networks by designing new algorithms that are more robust to the distribution shifts. While the recent progress in continual learning literature…
External link:
http://arxiv.org/abs/2110.11526
Domain shift is a well known problem where a model trained on a particular domain (source) does not perform well when exposed to samples from a different domain (target). Unsupervised methods that can adapt to domain shift are highly desirable as the…
External link:
http://arxiv.org/abs/2108.00977
Published in:
NeurIPS 2020
In continual learning (CL), a learner is faced with a sequence of tasks, arriving one after the other, and the goal is to remember all the tasks once the continual learning experience is finished. The prior art in CL uses episodic memory, parameter regularization… (a minimal episodic-memory sketch follows this entry's link).
External link:
http://arxiv.org/abs/2010.11635
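The abstract above lists episodic memory among the prior-art mechanisms in CL. Below is a minimal sketch of that baseline idea, a fixed-capacity replay buffer filled by reservoir sampling; it is an illustrative assumption, not this paper's proposed method, and the class and method names are hypothetical.

```python
import random

class EpisodicMemory:
    """Fixed-capacity replay buffer filled by reservoir sampling, so it
    holds a uniform random sample of the stream seen so far."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total examples observed so far

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)  # keep with prob capacity/seen
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        # Mini-batch of stored past examples to replay alongside the current task.
        return random.sample(self.buffer, min(k, len(self.buffer)))
```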
In continual learning, the learner faces a stream of data whose distribution changes over time. Modern neural networks are known to suffer under this setting, as they quickly forget previously acquired knowledge. To address such catastrophic forgetting…
External link:
http://arxiv.org/abs/2002.08165