Showing 1 - 10 of 10,662 for the search: '"An, Keqin"'
Graph contrastive learning (GCL) has been widely applied to text classification tasks due to its ability to generate self-supervised signals from unlabeled data, thus facilitating model training. However, existing GCL-based text classification method
External link:
http://arxiv.org/abs/2410.18130
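The snippet above mentions that graph contrastive learning derives a self-supervised training signal from unlabeled data. Purely as a generic illustration of that idea (not the method of arXiv:2410.18130), the sketch below shows the InfoNCE-style loss commonly used in GCL, where two augmented views of the same node form a positive pair; the encoder, augmentations, and temperature value are assumptions.

import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    # z1, z2: (N, d) embeddings of the same N nodes under two graph augmentations
    # (e.g., edge dropping, feature masking) produced by a shared GNN encoder.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))      # diagonal entries are the positive pairs
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 64), torch.randn(8, 64))  # self-supervised, no labels needed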
Authors:
Li, Keqin, Chen, Jiajing, Yu, Denzhi, Dajun, Tao, Qiu, Xinyu, Jieting, Lian, Baiwei, Sun, Shengyuan, Zhang, Wan, Zhenyu, Ji, Ran, Hong, Bo, Ni, Fanghao
At present, goods in most warehouse environments are stacked in complex ways, and management personnel must control the goods while interacting with the trajectories of warehouse mobile robots; the traditional mobile robot can not be ve
External link:
http://arxiv.org/abs/2409.14972
Authors:
Wang, Peng, Bai, Shuai, Tan, Sinan, Wang, Shijie, Fan, Zhihao, Bai, Jinze, Chen, Keqin, Liu, Xuejing, Wang, Jialin, Ge, Wenbin, Fan, Yang, Dang, Kai, Du, Mengfei, Ren, Xuancheng, Men, Rui, Liu, Dayiheng, Zhou, Chang, Zhou, Jingren, Lin, Junyang
We present the Qwen2-VL Series, an advanced upgrade of the previous Qwen-VL models that redefines the conventional predetermined-resolution approach in visual processing. Qwen2-VL introduces the Naive Dynamic Resolution mechanism, which enables the m
External link:
http://arxiv.org/abs/2409.12191
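The snippet above names Qwen2-VL's Naive Dynamic Resolution mechanism but is cut off before describing it. Purely as a hypothetical sketch of the general idea of dynamic resolution (a variable number of visual tokens instead of one predetermined input size), the function below maps a native image size to a patch grid; the patch size of 14 and the token budget are assumed values, not taken from the paper.

import math

def dynamic_token_grid(height: int, width: int,
                       patch: int = 14,        # assumed ViT patch size
                       max_tokens: int = 1024  # assumed token budget
                       ) -> tuple[int, int, int]:
    # Keep the image near its native resolution and derive a variable
    # number of patch tokens, downscaling only if the budget is exceeded.
    rows, cols = math.ceil(height / patch), math.ceil(width / patch)
    if rows * cols > max_tokens:
        scale = math.sqrt(max_tokens / (rows * cols))
        rows, cols = max(1, int(rows * scale)), max(1, int(cols * scale))
    return rows, cols, rows * cols

print(dynamic_token_grid(224, 224))     # small image -> few tokens: (16, 16, 256)
print(dynamic_token_grid(1080, 1920))   # large image -> capped near the assumed budget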
Authors:
Li, Keqin, Wang, Jin, Wu, Xubo, Peng, Xirui, Chang, Runmian, Deng, Xiaoyu, Kang, Yiwen, Yang, Yue, Ni, Fanghao, Hong, Bo
With the rapid growth of global e-commerce, the demand for automation in the logistics industry is increasing. This study focuses on automated picking systems in warehouses, utilizing deep learning and reinforcement learning technologies to enhance p
External link:
http://arxiv.org/abs/2408.16633
In this paper, we prove a conjecture of the second author by evaluating the determinant $$\det\left[x+\left(\frac{i-j}{p}\right)+\left(\frac{i}{p}\right)y+\left(\frac{j}{p}\right)z+\left(\frac{ij}{p}\right)w\right]_{0\le i,j\le(p-3)/2}$$ for any odd prime $p$,
External link:
http://arxiv.org/abs/2408.07034
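The entries $\left(\frac{\cdot}{p}\right)$ in the determinant above are Legendre symbols. As a small sketch (assuming SymPy, and without reproducing the closed-form evaluation proved in the paper), the matrix can be built and its symbolic determinant factored for small primes:

from sympy import Matrix, factor, legendre_symbol, symbols

x, y, z, w = symbols('x y z w')

def leg(a: int, p: int) -> int:
    # Legendre symbol (a/p), with the convention (0/p) = 0.
    a %= p
    return legendre_symbol(a, p) if a else 0

def det_xyzw(p: int):
    n = (p - 1) // 2  # indices run over 0 <= i, j <= (p-3)/2
    M = Matrix(n, n, lambda i, j: x + leg(i - j, p)
                                    + leg(i, p) * y
                                    + leg(j, p) * z
                                    + leg(i * j, p) * w)
    return factor(M.det())

for p in (5, 7):
    print(p, det_xyzw(p))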
In multi-label classification, machine learning encounters the challenge of domain generalization when handling tasks with distributions differing from the training data. Existing approaches primarily focus on vision object recognition and neglect th
External link:
http://arxiv.org/abs/2408.05831
The age estimation task aims to use facial features to predict the age of people and is widely used in public security, marketing, identification, and other fields. However, the features are mainly concentrated in facial keypoints, and existing CNN a
External link:
http://arxiv.org/abs/2407.16234
Since Multimodal Emotion Recognition in Conversation (MERC) can be applied to public opinion monitoring, intelligent dialogue robots, and other fields, it has received extensive research attention in recent years. Unlike traditional unimodal emotion
External link:
http://arxiv.org/abs/2407.16714
Authors:
Yang, An, Yang, Baosong, Hui, Binyuan, Zheng, Bo, Yu, Bowen, Zhou, Chang, Li, Chengpeng, Li, Chengyuan, Liu, Dayiheng, Huang, Fei, Dong, Guanting, Wei, Haoran, Lin, Huan, Tang, Jialong, Wang, Jialin, Yang, Jian, Tu, Jianhong, Zhang, Jianwei, Ma, Jianxin, Yang, Jianxin, Xu, Jin, Zhou, Jingren, Bai, Jinze, He, Jinzheng, Lin, Junyang, Dang, Kai, Lu, Keming, Chen, Keqin, Yang, Kexin, Li, Mei, Xue, Mingfeng, Ni, Na, Zhang, Pei, Wang, Peng, Peng, Ru, Men, Rui, Gao, Ruize, Lin, Runji, Wang, Shijie, Bai, Shuai, Tan, Sinan, Zhu, Tianhang, Li, Tianhao, Liu, Tianyu, Ge, Wenbin, Deng, Xiaodong, Zhou, Xiaohuan, Ren, Xingzhang, Zhang, Xinyu, Wei, Xipin, Ren, Xuancheng, Liu, Xuejing, Fan, Yang, Yao, Yang, Zhang, Yichang, Wan, Yu, Chu, Yunfei, Liu, Yuqiong, Cui, Zeyu, Zhang, Zhenru, Guo, Zhifang, Fan, Zhihao
This report introduces the Qwen2 series, the latest addition to our large language models and large multimodal models. We release a comprehensive suite of foundational and instruction-tuned language models, encompassing a parameter range from 0.5 to
External link:
http://arxiv.org/abs/2407.10671
Adapting Large Language Models (LLMs) for recommendation requires careful consideration of the decoding process, given the inherent differences between generating items and natural language. Existing approaches often directly apply LLMs' original dec
External link:
http://arxiv.org/abs/2406.14900