Showing 1 - 10 of 317 for the search: '"Transformer Architecture"'
Published in:
Alexandria Engineering Journal, Vol 114, Pp 82-94 (2025)
The video recognition community is undergoing a significant backbone shift from CNNs to transformers. However, due to the temporal information in video, vision transformers, which have been shown to be effective on image tasks…
External link:
https://doaj.org/article/2ab45143d83c41c6a29288b84950e537
Author:
Stefan Emil Repede, Remus Brad
Published in:
Computers, Vol 13, Iss 11, p 292 (2024)
This study investigates the effectiveness of a proposed version of Meta’s LLaMA 3 model in detecting fake claims across bilingual (English and Romanian) datasets, focusing on a multi-class approach beyond traditional binary classifications in order…
External link:
https://doaj.org/article/77ea6b5d70f246878497267575aec64d
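The abstract above is cut off by the results page; as a rough, non-authoritative sketch of the multi-class setup it describes (not the authors' actual pipeline), a LLaMA-family checkpoint can be wrapped for sequence classification with more than two labels using Hugging Face transformers. The checkpoint id, label set, and example claim below are illustrative assumptions.

    # Hypothetical sketch: multi-class (not binary) claim classification with a
    # LLaMA-family model via Hugging Face transformers. The checkpoint id and
    # the label set are assumptions, not the setup used in the cited study.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    labels = ["true", "false", "partially-true", "unverifiable"]  # assumed labels
    model_id = "meta-llama/Meta-Llama-3-8B"                       # assumed checkpoint

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token   # LLaMA tokenizers ship without a pad token
    model = AutoModelForSequenceClassification.from_pretrained(
        model_id, num_labels=len(labels)
    )
    model.config.pad_token_id = tokenizer.pad_token_id

    claim = "Example claim text, in English or Romanian."
    inputs = tokenizer(claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    print(labels[int(logits.argmax(dim=-1))])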
Author:
Kaiwei Che, Zhaokun Zhou, Jun Niu, Zhengyu Ma, Wei Fang, Yanqi Chen, Shuaijie Shen, Li Yuan, Yonghong Tian
Published in:
Frontiers in Neuroscience, Vol 18 (2024)
Introduction: The integration of self-attention mechanisms into Spiking Neural Networks (SNNs) has garnered considerable interest in the realm of advanced deep learning, primarily due to their biological properties. Recent advancements in SNN architectures…
External link:
https://doaj.org/article/6b628cdf73454985b990493c4060d449
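For orientation only, the sketch below shows the standard scaled dot-product self-attention block that spiking variants adapt; it is plain PyTorch and illustrates the baseline mechanism, not the spiking formulation studied in the article above.

    # Minimal sketch of standard multi-head self-attention (plain PyTorch).
    # Spiking versions replace real-valued activations with spike trains; this
    # is only the baseline mechanism, not the article's SNN formulation.
    import torch
    import torch.nn as nn

    class SelfAttention(nn.Module):
        def __init__(self, dim: int, heads: int = 4):
            super().__init__()
            self.heads, self.scale = heads, (dim // heads) ** -0.5
            self.qkv = nn.Linear(dim, dim * 3, bias=False)
            self.proj = nn.Linear(dim, dim)

        def forward(self, x):                    # x: (batch, tokens, dim)
            b, n, d = x.shape
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            # reshape to (batch, heads, tokens, head_dim)
            q, k, v = (t.view(b, n, self.heads, -1).transpose(1, 2) for t in (q, k, v))
            attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
            out = (attn @ v).transpose(1, 2).reshape(b, n, d)
            return self.proj(out)

    x = torch.randn(2, 16, 64)
    print(SelfAttention(64)(x).shape)            # torch.Size([2, 16, 64])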
Author:
Chenchen Jiang, Huazhong Ren, Hong Yang, Hongtao Huo, Pengfei Zhu, Zhaoyuan Yao, Jing Li, Min Sun, Shihao Yang
Published in:
International Journal of Applied Earth Observations and Geoinformation, Vol 130, Pp 103918 (2024)
Fusing multi-modal information from visible (VIS) and thermal infrared (TIR) images is crucial for object detection that fully adapts to varied lighting conditions. However, existing models usually treat VIS and TIR images as independent information…
External link:
https://doaj.org/article/a07c75c78dca4dd4ae2462f03041214c
Published in:
IEEE Access, Vol 12, Pp 188664-188706 (2024)
Large Language Models (LLMs) represent a class of deep learning models adept at understanding natural language and generating coherent responses to various prompts or queries. These models far exceed the complexity of conventional neural networks…
External link:
https://doaj.org/article/1bbfa1bc449346e3b868f62421de3a77
Published in:
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol 17, Pp 893-907 (2024)
Recent computer vision research has mainly focused on designing efficient network architectures, with limited exploration of high- and low-frequency information in the high-order frequency domain. This study introduces a novel approach utilizing spatial…
External link:
https://doaj.org/article/e8d60baa83414735a625ee369eb56784
Author:
Lalasa Mukku, Jyothi Thomas
Published in:
IJAIN (International Journal of Advances in Intelligent Informatics), Vol 9, Iss 3, Pp 502-523 (2023)
Cervical cancer ranks as the fourth most prevalent malignancy among women globally. Timely identification and intervention in cases of cervical cancer hold the potential for achieving complete remission and cure. In this study, we built a deep learning…
External link:
https://doaj.org/article/56f67de54ab34003ac1e6c1c78bd902d
Published in:
Journal of Marine Science and Engineering, Vol 12, Iss 9, p 1524 (2024)
The identification of ships in Synthetic Aperture Radar (SAR) imagery is critical for effective maritime surveillance. The advent of deep learning has significantly improved the accuracy of SAR ship classification and recognition. However, distinguishing…
External link:
https://doaj.org/article/d901c89b69d14ae2a63bd7c22d83afdd
Author:
Xiangyi Hu, Zhihao Zhang, Liping Zheng, Tailai Chen, Chao Peng, Yilin Wang, Ruiheng Li, Xinyang Lv, Shuo Yan
Published in:
Plants, Vol 13, Iss 17, p 2348 (2024)
This paper proposes an advanced deep learning model that integrates the Diffusion-Transformer structure and parallel attention mechanism for the tasks of growth estimation and disease detection in jujube forests. Existing methods in forestry monitoring…
External link:
https://doaj.org/article/954714fb08884bb880b2e823b719e3a3
Author:
Alexandru Grigoraș, Florin Leon
Published in:
Mathematics, Vol 12, Iss 16, p 2494 (2024)
A model for generating synthetic time series data using pre-trained large language models is proposed. Starting with the Google T5-base model, which employs an encoder–decoder transformer architecture, the model underwent pre-training on diverse data…
External link:
https://doaj.org/article/2845fd9caa2a417987e3584ab183be9c
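As a hedged illustration of the setup this last abstract names (an encoder-decoder T5-base generating text that is parsed back into numbers), the sketch below loads the public t5-base checkpoint with Hugging Face transformers. The prompt format and the parsing are assumptions, not the authors' method, and an untuned t5-base will not produce useful series without the further pre-training the abstract mentions.

    # Rough sketch: Google's t5-base (encoder-decoder transformer) asked to
    # continue a numeric sequence, with a best-effort parse of the output.
    # Prompt format and parsing are illustrative assumptions only.
    from transformers import T5ForConditionalGeneration, T5TokenizerFast

    tokenizer = T5TokenizerFast.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")

    prompt = "continue the series: 0.12 0.15 0.11 0.18 0.22"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=32, do_sample=True, top_p=0.9)
    text = tokenizer.decode(out[0], skip_special_tokens=True)

    # Keep whatever output tokens look like plain non-negative decimals.
    values = [float(tok) for tok in text.split() if tok.replace(".", "", 1).isdigit()]
    print(values)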