Showing 1 - 10 of 1,094
for search: '"catastrophic forgetting"'
Author:
Sharma, Mandar
Language modeling, especially through the use of transformer-based large language models (LLMs), has drastically changed how we view and use artificial intelligence (AI) and machine learning (ML) in our daily lives. Although LLMs have showcased remarkable …
External link:
https://hdl.handle.net/10919/121122
Published in:
PeerJ Computer Science, Vol 10, p e2327 (2024)
Continual relation extraction (CRE) aims to extract relations as new data arrives continuously and iteratively. To address the problem of catastrophic forgetting, some existing research endeavors have focused on exploring memory replay methods …
External link:
https://doaj.org/article/9423f03d5c864cb5889b4b523a2bd35b
Published in:
PeerJ Computer Science, Vol 10, p e2191 (2024)
Background: The Automatic Essay Score (AES) prediction system is essential in education applications. The AES system uses various textual and grammatical features to investigate the exact score value for AES. The derived features are processed by various …
External link:
https://doaj.org/article/9ae92bdb7c974e38a92314475a3b1702
Published in:
Intelligent Systems with Applications, Vol 23, Iss , Pp 200415- (2024)
Generalisation across multiple tasks is a major challenge in deep learning for medical imaging applications, as it can cause a catastrophic forgetting problem. One commonly adopted approach to address these challenges is to train the model from scratch …
External link:
https://doaj.org/article/40a462b8a3ae43e18797799813cddc3a
Published in:
Frontiers in Big Data, Vol 7 (2024)
Introduction: Recently, Google introduced Pathways as its next-generation AI architecture. Pathways must address three critical challenges: learning one general model for several continuous tasks, ensuring tasks can leverage each other without forgetting …
External link:
https://doaj.org/article/9190128a7a734f17a3c66e01347131fe
Published in:
Complex & Intelligent Systems, Vol 10, Iss 3, Pp 3891-3906 (2024)
Abstract: Catastrophic forgetting in neural networks is a common problem, in which neural networks lose information from previous tasks after training on new tasks. Although adopting a regularization method that preferentially retains the parameters …
External link:
https://doaj.org/article/00056f40e74e4ee9890437c8585d655d
Published in:
Zhihui kongzhi yu fangzhen, Vol 46, Iss 1, Pp 44-54 (2024)
In view of the catastrophic forgetting of previous knowledge in class-incremental learning for image classification, existing replay-based methods focus on memory updating and sampling, while overlooking the feature relationships between old and new …
External link:
https://doaj.org/article/b4085dc027a4465881fbe422f3929c73
Published in:
IEEE Access, Vol 12, Pp 138501-138509 (2024)
Deep learning models have shown impressive performance in various tasks. However, they are prone to a phenomenon called catastrophic forgetting: they forget previously learned information when trained on new tasks. In this research paper …
External link:
https://doaj.org/article/b5cfbcccefeb4f82b7a91b8f82b84b60
Author:
Muhammad Umer, Robi Polikar
Published in:
IEEE Access, Vol 12, Pp 126108-126121 (2024)
Continual learning approaches are useful to help a model learn new information or new tasks sequentially, while also retaining the previously acquired information. However, such approaches are known to be extremely vulnerable to adversarial backdoor …
External link:
https://doaj.org/article/ceb8d56eb08e4fd78cab5263b2cc78df
Author:
Thanapapas Horsuwan, Piyawat Lertvittayakumjorn, Kasidis Kanwatchara, Boonserm Kijsirikul, Peerapon Vateekul
Published in:
IEEE Access, Vol 12, Pp 34099-34115 (2024)
Meta-learning has been applied to lifelong language learning due to its ability to find an optimal model for efficient adaptation to any learned task. Generally, meta lifelong-learning partially stores samples from seen tasks in a memory and selects …
External link:
https://doaj.org/article/289dbfd7474049c3983689a1b92778fa