Showing 1 - 10 of 1,104 results for search: '"Zhu Zhihui"'
Quantum state tomography (QST) remains the gold standard for benchmarking and verification of near-term quantum devices. While QST for a generic quantum many-body state requires an exponentially large amount of resources, most physical quantum states…
External link:
http://arxiv.org/abs/2408.07115
Analyzing the similarity of internal representations within and across different models has been an important technique for understanding the behavior of deep neural networks. Most existing methods for analyzing the similarity between representations…
External link:
http://arxiv.org/abs/2406.14479
Author:
Qin, Zhen, Zhu, Zhihui
Recently, a tensor-on-tensor (ToT) regression model has been proposed to generalize tensor recovery, encompassing scenarios like scalar-on-tensor regression and tensor-on-vector regression. However, the exponential growth in tensor complexity poses challenges…
External link:
http://arxiv.org/abs/2406.06002
The maximal coding rate reduction (MCR$^2$) objective for learning structured and compact deep representations is drawing increasing attention, especially after its recent usage in the derivation of fully explainable and highly effective deep network…
External link:
http://arxiv.org/abs/2406.01909
Existing angle-based contour descriptors suffer from lossy representation for non-star-convex shapes. By and large, this is the result of the shape being registered with a single global inner center and a set of radii corresponding to a polar coordinate…
External link:
http://arxiv.org/abs/2404.08292
We study reinforcement learning in the presence of an unknown reward perturbation. Existing methodologies for this problem make strong assumptions, including reward smoothness, known perturbations, and/or perturbations that do not modify the optimal policy…
External link:
http://arxiv.org/abs/2401.05710
In this paper, we provide the first convergence guarantee for the factorization approach. Specifically, to avoid the scaling ambiguity and to facilitate theoretical analysis, we optimize over the so-called left-orthogonal TT format, which enforces orthonormality…
External link:
http://arxiv.org/abs/2401.02592
Author:
Chen, Tianyi, Ding, Tianyu, Zhu, Zhihui, Chen, Zeyu, Wu, HsiangTao, Zharkov, Ilya, Liang, Luming
Compressing a predefined deep neural network (DNN) into a compact sub-network with competitive performance is crucial in the efficient machine learning realm. This topic spans various techniques, from structured pruning to neural architecture search…
External link:
http://arxiv.org/abs/2312.09411
Author:
Ding, Tianyu, Chen, Tianyi, Zhu, Haidong, Jiang, Jiachen, Zhong, Yiqi, Zhou, Jinxin, Wang, Guangzhi, Zhu, Zhihui, Zharkov, Ilya, Liang, Luming
The rapid growth of Large Language Models (LLMs) has been a driving force in transforming various domains, reshaping the artificial general intelligence landscape. However, the increasing computational and memory demands of these models present substantial…
External link:
http://arxiv.org/abs/2312.00678
Author:
Zhou, Jinxin, Ding, Tianyu, Chen, Tianyi, Jiang, Jiachen, Zharkov, Ilya, Zhu, Zhihui, Liang, Luming
We present DREAM, a novel training framework representing Diffusion Rectification and Estimation Adaptive Models, requiring minimal code changes (just three lines) yet significantly enhancing the alignment of training with sampling in diffusion models…
External link:
http://arxiv.org/abs/2312.00210