Showing 1 - 10 of 21,302 results for search: '"P Kam"'
Author:
Lau, Yuk-Kam, Lee, Wonwoong
Let $d(n)$ be the number of divisors of $n$. We investigate the average value of $d(a_f(p))^r$ for $r$ a positive integer and $a_f(p)$ the $p$-th Fourier coefficient of a cuspidal eigenform $f$ having integral Fourier coefficients, where $p$ is a prime…
External link:
http://arxiv.org/abs/2411.17210
We introduce a novel quantum algorithm for determining graph connectedness using a constant number of measurements. The algorithm can be extended to find connected components with a linear number of measurements. It relies on non-unitary abelian gate…
External link:
http://arxiv.org/abs/2411.15015
We devise an efficient and reusable approach to quantum teleportation that allows cyclic teleportation of a two-qubit graph state around a quantum hamster wheel -- a ring of qubits entangled as a one-dimensional line prepared on the 20-qubit Quantinuum…
External link:
http://arxiv.org/abs/2411.13060
The realization of fault-tolerant quantum computers hinges on effective quantum error correction protocols, whose performance significantly relies on the nature of the underlying noise. In this work, we directly study the structure of non-Markovian…
External link:
http://arxiv.org/abs/2410.23779
Author:
Tong, Kam Hung
It is a classical result that the set $K\backslash G /B$ is finite, where $G$ is a reductive algebraic group over an algebraically closed field with characteristic not equal to two, $B$ is a Borel subgroup of $G$, and $K = G^{\theta}$ is the fixed point…
External link:
http://arxiv.org/abs/2410.19442
Author:
Wang, Zezhong, Zeng, Xingshan, Liu, Weiwen, Li, Liangyou, Wang, Yasheng, Shang, Lifeng, Jiang, Xin, Liu, Qun, Wong, Kam-Fai
Supervised fine-tuning (SFT) is a common method to enhance the tool-calling capabilities of Large Language Models (LLMs), with the training data often being synthesized. The current data synthesis process generally involves sampling a set of tools, …
External link:
http://arxiv.org/abs/2410.18447
Author:
Zhao, Yu, Du, Xiaotang, Hong, Giwon, Gema, Aryo Pradipta, Devoto, Alessio, Wang, Hongru, He, Xuanli, Wong, Kam-Fai, Minervini, Pasquale
Large language models (LLMs) can store a significant amount of factual knowledge in their parameters. However, their parametric knowledge may conflict with the information provided in the context. Such conflicts can lead to undesirable model behaviour…
External link:
http://arxiv.org/abs/2410.16090
Author:
Zhao, Yu, Devoto, Alessio, Hong, Giwon, Du, Xiaotang, Gema, Aryo Pradipta, Wang, Hongru, He, Xuanli, Wong, Kam-Fai, Minervini, Pasquale
Large language models (LLMs) can store a significant amount of factual knowledge in their parameters. However, their parametric knowledge may conflict with the information provided in the context -- this phenomenon, known as \emph{context-memory knowledge conflict}…
External link:
http://arxiv.org/abs/2410.15999
Author:
Xue, Boyang, Wang, Hongru, Wang, Rui, Wang, Sheng, Wang, Zezhong, Du, Yiming, Liang, Bin, Wong, Kam-Fai
The tendency of Large Language Models (LLMs) to generate hallucinations raises concerns regarding their reliability. Therefore, confidence estimations indicating the extent of trustworthiness of the generations become essential. However, current LLM…
External link:
http://arxiv.org/abs/2410.12478
In Federated Learning (FL), anomaly detection (AD) is a challenging task due to the decentralized nature of data and the presence of non-IID data distributions. This study introduces a novel federated threshold calculation method that leverages summary…
External link:
http://arxiv.org/abs/2410.09284