Showing 1 - 10 of 649 results for search: '"Sun, Hailong"'
Author:
Duan, Haodong, Yang, Junming, Qiao, Yuxuan, Fang, Xinyu, Chen, Lin, Liu, Yuan, Agarwal, Amit, Chen, Zhe, Li, Mo, Ma, Yubo, Sun, Hailong, Zhao, Xiangyu, Cui, Junbo, Dong, Xiaoyi, Zang, Yuhang, Zhang, Pan, Wang, Jiaqi, Lin, Dahua, Chen, Kai
We present VLMEvalKit: an open-source toolkit for evaluating large multi-modality models based on PyTorch. The toolkit aims to provide a user-friendly and comprehensive framework for researchers and developers to evaluate existing multi-modality models…
External link: http://arxiv.org/abs/2407.11691
Detecting defects and vulnerabilities at an early stage has long been a challenge in software engineering. Static analysis, a technique that inspects code without executing it, has emerged as a key strategy for addressing this challenge. Among recent advances…
External link: http://arxiv.org/abs/2406.08098
Numerous mobile apps have leveraged deep learning capabilities. However, on-device models are vulnerable to attacks, as they can be easily extracted from their corresponding mobile apps. Existing on-device attacking approaches only generate black-box…
External link: http://arxiv.org/abs/2402.05493
Author:
Luo, Yin, Kong, Qingchao, Xu, Nan, Cao, Jia, Hao, Bao, Qu, Baoyu, Chen, Bo, Zhu, Chao, Zhao, Chenyang, Zhang, Donglei, Feng, Fan, Zhao, Feifei, Sun, Hailong, Yang, Hanxuan, Pan, Haojun, Liu, Hongyu, Guo, Jianbin, Du, Jiangtao, Wang, Jingyi, Li, Junfeng, Sun, Lei, Liu, Liduo, Dong, Lifeng, Liu, Lili, Wang, Lin, Zhang, Liwen, Wang, Minzheng, Wang, Pin, Yu, Ping, Li, Qingxiao, Yan, Rui, Zou, Rui, Li, Ruiqun, Huang, Taiwen, Wang, Xiaodong, Wu, Xiaofei, Peng, Xin, Zhang, Xina, Fang, Xing, Xiao, Xinglin, Hao, Yanni, Dong, Yao, Wang, Yigang, Liu, Ying, Jiang, Yongyu, Wang, Yungan, Wang, Yuqi, Wang, Zhangsheng, Yu, Zhaoxin, Luo, Zhen, Mao, Wenji, Wang, Lei, Zeng, Dajun
As the latest advancement in natural language processing, large language models (LLMs) have achieved human-level language understanding and generation abilities in many real-world tasks, and have even been regarded as a potential path to the artificial…
External link: http://arxiv.org/abs/2312.14862
With the widespread success of deep learning technologies, many trained deep neural network (DNN) models are now publicly available. However, directly reusing public DNN models for new tasks often fails due to mismatching functionality or performance…
External link: http://arxiv.org/abs/2311.04438
Author:
Li, Li, Gao, Xiang, Sun, Hailong, Hu, Chunming, Sun, Xiaoyu, Wang, Haoyu, Cai, Haipeng, Su, Ting, Luo, Xiapu, Bissyandé, Tegawendé F., Klein, Jacques, Grundy, John, Xie, Tao, Chen, Haibo, Wang, Huaimin
Mobile software engineering has been a hot research topic for decades. Our fellow researchers have proposed various approaches (with over 7,000 publications for Android alone) in this field that have essentially contributed to the great success of the current…
External link: http://arxiv.org/abs/2311.01311
We propose a neuralized undirected graphical model called Neural-Hidden-CRF to solve the weakly supervised sequence labeling problem. Under the umbrella of probabilistic undirected graph theory, the proposed Neural-Hidden-CRF embedded with a hidden CRF…
External link: http://arxiv.org/abs/2309.05086
Deep neural network (DNN) models have become increasingly crucial components in intelligent software systems. However, training a DNN model is typically expensive in terms of both time and money. To address this issue, researchers have recently focused…
External link: http://arxiv.org/abs/2306.09376
Training deep neural network (DNN) models, which has become an important task in today's software development, is often costly in terms of computational resources and time. Inspired by software reuse, building DNN models by reusing existing…
External link: http://arxiv.org/abs/2304.00245
This paper explores the integration of symbolic logic knowledge into deep neural networks for learning from noisy crowd labels. We introduce Logic-guided Learning from Noisy Crowd Labels (Logic-LNCL), an EM-alike iterative logic knowledge distillation…
External link: http://arxiv.org/abs/2302.06337