Showing 1 - 10 of 639 for search: '"pre-trained language models"'
Published in:
Complex & Intelligent Systems, Vol 11, Iss 1, Pp 1-11 (2024)
Abstract: Thanks to the strong representation capability of pre-trained language models, supervised grammatical error correction has achieved promising performance. However, traditional model training depends significantly on the large scale of si…
External link:
https://doaj.org/article/c317d7bcdd2a47529935c35ada7e8d56
Author:
Kovaleva, Olga A., Samokhvalov, Alexey Vladimirovich, Liashkov, Mikhail A., Pchelintsev, Sergey Yurevich
Published in:
Известия Саратовского университета. Новая серия. Серия Математика. Механика. Информатика, Vol 24, Iss 3, Pp 442-451 (2024)
This paper explores the use of deep learning techniques to improve the performance of web application firewalls (WAFs), describes a specific method for improving the performance of web application firewalls, and presents the results of its testing on…
External link:
https://doaj.org/article/40b5d5234a644e1b8f08b5793182478b
Published in:
ICT Express, Vol 10, Iss 4, Pp 871-890 (2024)
Keyphrase Prediction (KP) is essential for identifying keyphrases in a document that can summarize its content. Recent Natural Language Processing (NLP) advances have produced more efficient KP models using deep learning techniques. The lim…
External link:
https://doaj.org/article/9a34a02624b64eb58334334aaeaaca51
Published in:
PeerJ Computer Science, Vol 10, p e2358 (2024)
The construction of hypernym taxonomic trees, a critical task in the field of natural language processing, involves extracting lexical relationships, specifically creating a tree structure that represents hypernym relationships among a given set of w…
External link:
https://doaj.org/article/ba3e85af1679408981bef0b39d4f2e9e
Author:
Amit Kumar Sah, Muhammad Abulaish
Published in:
Machine Learning with Applications, Vol 17, Iss , Pp 100575- (2024)
This paper presents DeepCKID, a Multi-Head Attention (MHA)-based deep learning model that exploits statistical and semantic knowledge corresponding to documents across different classes in the datasets to improve the model's ability to detect minor…
External link:
https://doaj.org/article/3aecbee635724bd081263777e8298a3e
Author:
AlMasaud, Alanod, Al-Baity, Heyam H.
Published in:
In Journal of King Saud University - Computer and Information Sciences, December 2024, 36(10)
Author:
Quoc-Hung, Pham, Nguyen, Minh-Tien, Inoue, Shumpei, Tran-Tien, Manh, Phan, Xuan-Hieu
Published in:
In Knowledge-Based Systems, 25 November 2024, 304
Author:
Mustapha, K.B.
Published in:
In Advanced Engineering Informatics, March 2025, 64
Author:
Li, Zhenglong, Zhu, Yi, Hua, Chenqi, Li, Yun, Yuan, Yunhao, Qiang, Jipeng
Published in:
In Neurocomputing, 1 March 2025, 620
Published in:
Journal of Big Data, Vol 11, Iss 1, Pp 1-16 (2024)
Abstract: Pre-trained BERT models have demonstrated exceptional performance in text classification tasks. Certain problem domains require training on distributed data without data sharing. Federated Learning (FL) allows multiple clients to col…
External link:
https://doaj.org/article/85becda319d847ada25c0cdcd29693f0