Showing 1 - 10 of 51,221 for the search: '"Zhang-Min"'
Author:
Duan-Jian Tao, Xin Zhao, Yanxin Wang, Xixi Liu, Hong-Ping Li, Zhang-Min Li, Yan Zhou, Ziliang Yuan, Zehui Zhang
Published in:
Green Energy & Environment, Vol 7, Iss 5, Pp 1084-1092 (2022)
A superior carbocatalyst ultrahigh N-doped graphene (NG) was prepared by a novel self-sacrificial templating method of one-step annealing vitamin B9. The NG catalyst with pyrolysis temperature of 800 °C (abbreviated VB9-NG-800) has an ultrahigh nitr
External link:
https://doaj.org/article/26598646fe6d44a69ded522d9cdeaf10
Local Differential Privacy (LDP) is widely adopted in the Industrial Internet of Things (IIoT) for its lightweight, decentralized, and scalable nature. However, its perturbation-based privacy mechanism makes it difficult to distinguish between uncont
External link:
http://arxiv.org/abs/2412.15704
Author:
Zhou, Xiabin, Wang, Wenbin, Zeng, Minyan, Guo, Jiaxian, Liu, Xuebo, Shen, Li, Zhang, Min, Ding, Liang
Efficient KV cache management in LLMs is crucial for long-context tasks like RAG and summarization. Existing KV cache compression methods enforce a fixed pattern, neglecting task-specific characteristics and reducing the retention of essential inform
External link:
http://arxiv.org/abs/2412.14838
Author:
Lu, Yifan, Zhou, Yigeng, Li, Jing, Wang, Yequan, Liu, Xuebo, He, Daojing, Liu, Fangming, Zhang, Min
Multi-hop question answering (MHQA) poses a significant challenge for large language models (LLMs) due to the extensive knowledge demands involved. Knowledge editing, which aims to precisely modify the LLMs to incorporate specific knowledge without n
External link:
http://arxiv.org/abs/2412.13782
Large Vision-Language Models (LVLMs) have demonstrated remarkable performance across diverse tasks. Despite great success, recent studies show that LVLMs encounter substantial limitations when engaging with visual graphs. To study the reason behind t
External link:
http://arxiv.org/abs/2412.13540
Large language models (LLMs) based on generative pre-trained Transformer have achieved remarkable performance on knowledge graph question-answering (KGQA) tasks. However, LLMs often produce ungrounded subgraph planning or reasoning results in KGQA du
External link:
http://arxiv.org/abs/2412.12643
Visual information has been introduced for enhancing machine translation (MT), and its effectiveness heavily relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations. In this paper, we introduce a
External link:
http://arxiv.org/abs/2412.12627
Author:
Qiao, Ziheng, Zhou, Houquan, Liu, Yumeng, Li, Zhenghua, Zhang, Min, Zhang, Bo, Li, Chen, Zhang, Ji, Huang, Fei
One key characteristic of the Chinese spelling check (CSC) task is that incorrect characters are usually similar to the correct ones in either phonetics or glyph. To accommodate this, previous works usually leverage confusion sets, which suffer from
External link:
http://arxiv.org/abs/2412.12863
Author:
Wang, Jiaqi, Yu, Liutao, Huang, Liwei, Zhou, Chenlin, Zhang, Han, Song, Zhenxi, Zhang, Min, Ma, Zhengyu, Zhang, Zhiguo
The intrinsic dynamics and event-driven nature of spiking neural networks (SNNs) make them excel in processing temporal information by naturally utilizing embedded time sequences as time steps. Recent studies adopting this approach have demonstrated
External link:
http://arxiv.org/abs/2412.12858
Large language models (LLMs) have demonstrated impressive multilingual understanding and reasoning capabilities, driven by extensive pre-training multilingual corpora and fine-tuning instruction data. However, a performance gap persists between high-
External link:
http://arxiv.org/abs/2412.12499