Showing 1 - 10 of 145 for search: '"Gao, LingYu"'
Author:
Gao, Lingyu
Text classification is crucial for applications such as sentiment analysis and toxic text filtering, but it still faces challenges due to the complexity and ambiguity of natural language. Recent advancements in deep learning, particularly transformer…
External link:
http://arxiv.org/abs/2408.15650
Author:
Gao, Lingyu, Chaudhary, Aditi, Srinivasan, Krishna, Hashimoto, Kazuma, Raman, Karthik, Bendersky, Michael
In-context learning (ICL), i.e., showing LLMs only a few task-specific demonstrations, has led to downstream gains with no task-specific fine-tuning required. However, LLMs are sensitive to the choice of prompts, and therefore a crucial research question…
External link:
http://arxiv.org/abs/2309.07900
Theory of Mind (ToM), the capacity to comprehend the mental states of distinct individuals, is essential for numerous practical applications. With the development of large language models (LLMs), there is a heated debate about whether they are able to…
External link:
http://arxiv.org/abs/2305.15068
Pretrained language models have improved zero-shot text classification by allowing the transfer of semantic knowledge from the training data in order to classify among specific label sets in downstream tasks. We propose a simple way to further improve…
External link:
http://arxiv.org/abs/2305.02239
Author:
Ma, Xiaomeng, Gao, Lingyu
Neural network models have been proposed to explain the grapheme-phoneme mapping process in humans for many alphabetic languages. These models not only successfully learned the correspondence of the letter strings and their pronunciation, but also captured…
External link:
http://arxiv.org/abs/2303.12294
Author:
Ma, Xiaomeng, Gao, Lingyu
There is an ongoing debate on whether neural networks can grasp the quasi-regularities in languages like humans. In a typical quasi-regularity task, English past-tense inflection, the neural network model has long been criticized for learning only…
External link:
http://arxiv.org/abs/2210.09167
We propose a type-controlled framework for inquisitive question generation. We annotate an inquisitive question dataset with question types, train question type classifiers, and finetune models for type-controlled question generation. Empirical results…
External link:
http://arxiv.org/abs/2205.08056
Published in:
International Journal of Hydrogen Energy, 19 November 2024, 91:693-702
Author:
Zhao, Liang, Wang, Huifang, Zhang, Yu, Shi, Yanze, Zhou, Chunbao, Yu, Minrui, Wang, Yanhu, Zhang, Liping, Xu, Zheng, Zhang, Ziying, Gao, Lingyu, Zhang, Jiyuan, Yang, Baopeng, Huang, Huihuang, Wang, Fu-Sheng
Published in:
Molecular Immunology, September 2024, 173:40-52
Author:
Zhang, Mengfei, Gao, Lingyu, Yang, Lin, Shan, Guixuan, Wang, Yuxuan, Huo, Xinyi, Li, Wei, Zhang, Jinli
Published in:
Fuel, 1 July 2024, 367