Showing 1 - 10 of 4,211 for search: '"Yin, Lu"'
Author:
Xiao, Qiao, Ma, Pingchuan, Fernandez-Lopez, Adriana, Wu, Boqian, Yin, Lu, Petridis, Stavros, Pechenizkiy, Mykola, Pantic, Maja, Mocanu, Decebal Constantin, Liu, Shiwei
The recent success of Automatic Speech Recognition (ASR) is largely attributed to the ever-growing amount of training data. However, this trend has made model training prohibitively costly and imposed computational demands. While data pruning has been …
External link:
http://arxiv.org/abs/2406.18373
Author:
Fernandez-Lopez, Adriana, Chen, Honglie, Ma, Pingchuan, Yin, Lu, Xiao, Qiao, Petridis, Stavros, Liu, Shiwei, Pantic, Maja
Pre-trained models have been a foundational approach in speech recognition, albeit with associated additional costs. In this study, we propose a regularization technique that facilitates the training of visual and audio-visual speech recognition models …
External link:
http://arxiv.org/abs/2406.17614
The James Webb Space Telescope (JWST) is reporting unexpectedly massive high-redshift galaxies that appear challenging from the $\Lambda$CDM perspective. Interpreted as a problem of cosmological origin, this necessitates Planck underestimating either …
External link:
http://arxiv.org/abs/2405.19953
The rapid advancements in Large Language Models (LLMs) have revolutionized various natural language processing tasks. However, the substantial size of LLMs presents significant challenges in training or fine-tuning. While parameter-efficient approaches …
External link:
http://arxiv.org/abs/2405.18380
Author:
Biswas, Anirban, Kar, Arpan, Lee, Bum-Hoon, Lee, Hocheol, Lee, Wonwoo, Scopel, Stefano, Velasco-Sevilla, Liliana, Yin, Lu
We provide a transparent discussion of the high-temperature asymptotic behaviour of cosmology in a dilaton-Einstein-Gauss-Bonnet (dEGB) scenario of modified gravity with vanishing scalar potential. In particular, we show that it has a clear interpretation …
External link:
http://arxiv.org/abs/2405.15998
Large language models (LLMs) have demonstrated astonishing capabilities in natural language processing (NLP) tasks, sparking interest in their application to professional domains with higher specialized requirements. However, restricted access to closed …
External link:
http://arxiv.org/abs/2405.04781
Autoregressive Large Language Models (e.g., LLaMa, GPTs) are omnipresent, achieving remarkable success in language understanding and generation. However, such impressive capability typically comes with a substantial model size, which presents significant …
External link:
http://arxiv.org/abs/2404.03865
Author:
Wu, Boqian, Xiao, Qiao, Liu, Shiwei, Yin, Lu, Pechenizkiy, Mykola, Mocanu, Decebal Constantin, Van Keulen, Maurice, Mocanu, Elena
Deep neural networks have evolved as the leading approach in 3D medical image segmentation due to their outstanding performance. However, the ever-increasing model size and computation cost of deep neural networks have become the primary barrier to deployment …
External link:
http://arxiv.org/abs/2312.04727
In active learning for graph-structured data, Graph Neural Networks (GNNs) have shown effectiveness. However, a common challenge in these applications is the underutilization of crucial structural information. To address this problem, we propose the …
External link:
http://arxiv.org/abs/2312.04307
Deep neural networks (DNNs) have been proven effective in various domains. However, they often struggle to perform well on certain minority groups during inference, despite showing strong performance on the majority of data groups. This is because of …
External link:
http://arxiv.org/abs/2312.03044