Showing 1 - 10 of 159 for search: '"Ge, Jidong"'
Author:
Fei, Zhiwei, Zhang, Songyang, Shen, Xiaoyu, Zhu, Dawei, Wang, Xiao, Cao, Maosong, Zhou, Fengzhe, Li, Yining, Zhang, Wenwei, Lin, Dahua, Chen, Kai, Ge, Jidong
While large language models (LLMs) have showcased impressive capabilities, they struggle with addressing legal queries due to the intricate complexities and specialized expertise required in the legal field. In this paper, we introduce InternLM-Law, …
External link:
http://arxiv.org/abs/2406.14887
Author:
Guo, Qi, Li, Xiaohong, Xie, Xiaofei, Liu, Shangqing, Tang, Ze, Feng, Ruitao, Wang, Junjie, Ge, Jidong, Bu, Lei
The rise of code pre-trained models has significantly enhanced various coding tasks, such as code completion, and tools like GitHub Copilot. However, the substantial size of these models, especially large models, poses a significant challenge when it …
External link:
http://arxiv.org/abs/2404.01554
Pre-trained models (PTMs) have achieved great success in various Software Engineering (SE) downstream tasks following the "pre-train then fine-tune" paradigm. As fully fine-tuning all parameters of PTMs can be computationally expensive, a widely used …
External link:
http://arxiv.org/abs/2312.15614
Author:
Fei, Zhiwei, Shen, Xiaoyu, Zhu, Dawei, Zhou, Fengzhe, Han, Zhuo, Zhang, Songyang, Chen, Kai, Shen, Zongwen, Ge, Jidong
Large language models (LLMs) have demonstrated strong capabilities in various aspects. However, when applying them to the highly specialized, safety-critical legal domain, it is unclear how much legal knowledge they possess and whether they can reliably …
External link:
http://arxiv.org/abs/2309.16289
Author:
Zhong, Wenkang, Li, Chuanyi, Liu, Kui, Xu, Tongtong, Bissyandé, Tegawendé F., Ge, Jidong, Luo, Bin, Ng, Vincent
To date, over 40 Automated Program Repair (APR) tools have been designed with varying bug-fixing strategies, which have been demonstrated to have complementary performance in terms of being effective for different bug classes. Intuitively, it should …
External link:
http://arxiv.org/abs/2309.08211
Large Language Models (LLMs) have demonstrated remarkable performance in code completion. However, due to the lack of domain-specific knowledge, they may not be optimal in completing code that requires intensive domain knowledge, for example, completing …
External link:
http://arxiv.org/abs/2308.09313
In the formal procedure of civil cases, the textual materials provided by different parties describe the development process of the cases. It is a difficult but necessary task to extract the key information for the cases from these textual materials and …
External link:
http://arxiv.org/abs/2303.16751
While a large number of pre-trained models of source code have been successfully developed and applied to a variety of software engineering (SE) tasks in recent years, our understanding of these pre-trained models is arguably fairly limited. With the …
External link:
http://arxiv.org/abs/2302.04026
Author:
Ge, Jidong, Liu, Yuxiang, Gui, Jie, Fang, Lanting, Lin, Ming, Kwok, James Tin-Yau, Huang, LiGuo, Luo, Bin
Self-supervised learning enables networks to learn discriminative features from massive data itself. Most state-of-the-art methods maximize the similarity between two augmentations of one image based on contrastive learning. By utilizing the consiste…
External link:
http://arxiv.org/abs/2301.03041
Owing to the lack of corpora for low-resource languages, current works on dialogue generation have mainly focused on English. In this paper, we present mDIA, the first large-scale multilingual benchmark for dialogue generation across low- to high-resource …
External link:
http://arxiv.org/abs/2208.13078