Showing 1 - 10 of 1,052 results for search: '"Sun Haifeng"'
Author:
Wang, Chengsen, Qi, Qi, Wang, Jingyu, Sun, Haifeng, Zhuang, Zirui, Wu, Jinming, Zhang, Lei, Liao, Jianxin
Human experts typically integrate numerical and textual multimodal information to analyze time series. However, most traditional deep learning predictors rely solely on unimodal numerical data, using a fixed-length window for training and prediction…
External link:
http://arxiv.org/abs/2412.11376
As intellectual property rights, the copyright protection of deep models is becoming increasingly important. Existing work has made many attempts at model watermarking and fingerprinting, but they have ignored homologous models trained with similar s…
External link:
http://arxiv.org/abs/2411.00380
Author:
Wang, Yuanyi, Sun, Haifeng, Wang, Chengsen, Zhu, Mengde, Wang, Jingyu, Tang, Wei, Qi, Qi, Zhuang, Zirui, Liao, Jianxin
Anomaly detection in multivariate time series (MTS) is crucial for various applications in data mining and industry. Current industrial methods typically approach anomaly detection as an unsupervised learning task, aiming to identify deviations by es…
External link:
http://arxiv.org/abs/2410.08877
Author:
Wang, Chengsen, Qi, Qi, Wang, Jingyu, Sun, Haifeng, Zhuang, Zirui, Wu, Jinming, Liao, Jianxin
Time series forecasting has played a pivotal role across various industries, including finance, transportation, energy, healthcare, and climate. Due to the abundant seasonal information they contain, timestamps possess the potential to offer robust g…
External link:
http://arxiv.org/abs/2409.18696
Author:
Wang, Jinguang, Yin, Yuexi, Sun, Haifeng, Qi, Qi, Wang, Jingyu, Zhuang, Zirui, Yang, Tingting, Liao, Jianxin
Quantizing the activations of large language models (LLMs) has been a significant challenge due to the presence of structured outliers. Most existing methods focus on the per-token or per-tensor quantization of activations, making it difficult to ach…
External link:
http://arxiv.org/abs/2406.18832
Author:
Wang, Yuanyi, Tang, Wei, Sun, Haifeng, Zhuang, Zirui, Fu, Xiaoyuan, Wang, Jingyu, Qi, Qi, Liao, Jianxin
Weakly Supervised Entity Alignment (EA) is the task of identifying equivalent entities across diverse knowledge graphs (KGs) using only a limited number of seed alignments. Despite substantial advances in aggregation-based weakly supervised EA, the u…
External link:
http://arxiv.org/abs/2402.03025
Author:
Wang, Yuanyi, Sun, Haifeng, Wang, Jiabo, Wang, Jingyu, Tang, Wei, Qi, Qi, Sun, Shaoling, Liao, Jianxin
In Multi-Modal Knowledge Graphs (MMKGs), Multi-Modal Entity Alignment (MMEA) is crucial for identifying identical entities across diverse modal attributes. However, semantic inconsistency, mainly due to missing modal attributes, poses a significant c…
External link:
http://arxiv.org/abs/2401.17859
Entity alignment (EA), a pivotal process in integrating multi-source Knowledge Graphs (KGs), seeks to identify equivalent entity pairs across these graphs. Most existing approaches regard EA as a graph representation learning task, concentrating on e…
External link:
http://arxiv.org/abs/2401.12798
Author:
Miao, Yukai, Bai, Yu, Chen, Li, Li, Dan, Sun, Haifeng, Wang, Xizheng, Luo, Ziqiu, Ren, Yanyu, Sun, Dapeng, Xu, Xiuting, Zhang, Qi, Xiang, Chao, Li, Xinchi
Nowadays, the versatile capabilities of Pre-trained Large Language Models (LLMs) have attracted much attention from the industry. However, some vertical domains are more interested in the in-domain capabilities of LLMs. For the Networks domain, we pr…
External link:
http://arxiv.org/abs/2309.05557
Author:
Wang, Huazheng, Cheng, Daixuan, Sun, Haifeng, Wang, Jingyu, Qi, Qi, Liao, Jianxin, Wang, Jing, Liu, Cong
Transformer-based pretrained language models (PLMs) have achieved great success in modern NLP. An important advantage of PLMs is good out-of-distribution (OOD) robustness. Recently, diffusion models have attracted a lot of work to apply diffusion to…
External link:
http://arxiv.org/abs/2307.13949