Showing 1 - 8 of 8 for search: '"Khalitov, Ruslan"'
Self-supervised pretraining (SSP) has been recognized as a method to enhance prediction accuracy in various downstream tasks. However, its efficacy for DNA sequences remains somewhat constrained. This limitation stems primarily from the fact that …
External link:
http://arxiv.org/abs/2405.08538
Sequential data naturally have different lengths in many domains, with some very long sequences. As an important modeling tool, neural attention should capture long-range interaction in such sequences. However, most existing neural attention models …
External link:
http://arxiv.org/abs/2206.05852
Self-Attention is a widely used building block in neural modeling to mix long-range data elements. Most self-attention neural networks employ pairwise dot-products to specify the attention coefficients. However, these methods require $O(N^2)$ …
External link:
http://arxiv.org/abs/2204.10670
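The snippet above refers to the quadratic cost of pairwise dot-product attention. As context, here is a minimal NumPy sketch of that standard mechanism (not the paper's proposed alternative): the $N \times N$ score matrix is what makes time and memory scale as $O(N^2)$ in the sequence length.

```python
import numpy as np

def dot_product_attention(X, d_k=None):
    """Pairwise dot-product self-attention over a sequence X of shape (N, d).

    The score matrix has shape (N, N), so computing and storing it costs
    O(N^2) time and memory -- the bottleneck the abstract refers to.
    For illustration, X serves as queries, keys, and values at once;
    a real layer would apply learned projections W_q, W_k, W_v.
    """
    N, d = X.shape
    d_k = d_k or d
    scores = X @ X.T / np.sqrt(d_k)                      # (N, N) pairwise dot products
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return weights @ X                                   # (N, d) mixed representations

X = np.random.default_rng(0).normal(size=(6, 4))
out = dot_product_attention(X)
print(out.shape)  # (6, 4)
```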
Classification of long sequential data is an important Machine Learning task and appears in many application scenarios. Recurrent Neural Networks, Transformers, and Convolutional Neural Networks are three major techniques for learning from sequential …
External link:
http://arxiv.org/abs/2201.02143
Square matrices appear in many machine learning problems and models. Optimization over a large square matrix is expensive in memory and in time. Therefore an economic approximation is needed. Conventional approximation approaches factorize the square …
External link:
http://arxiv.org/abs/2109.08184
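The snippet above mentions the conventional route of factorizing a large square matrix. A brief NumPy sketch of that baseline idea (not the paper's method): approximate an $n \times n$ matrix $M$ as $UV$ with $U \in \mathbb{R}^{n \times r}$, $V \in \mathbb{R}^{r \times n}$, $r \ll n$, cutting storage and matrix-vector cost from $O(n^2)$ to $O(nr)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 10
# Build a matrix of exact rank r so the truncated factorization is lossless.
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))

# Truncated SVD gives the best rank-r approximation M ~= U @ V.
U_, s, Vt = np.linalg.svd(M, full_matrices=False)
U = U_[:, :r] * s[:r]     # (n, r)
V = Vt[:r, :]             # (r, n)

err = np.linalg.norm(M - U @ V) / np.linalg.norm(M)
print(err)  # near machine precision for an exactly rank-r matrix
```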
Published in:
In Neural Networks March 2024 171:466-473
Published in:
In Neurocomputing 21 January 2023 518:50-59
Published in:
In Neural Networks August 2022 152:160-168