Showing 1 - 10 of 149 for search: '"Zhao Zhilin"'
Author:
Chen, Hui, Liu, Hengyu, Li, Yaqiong, Fan, Xuhui, Zhao, Zhilin, Zhou, Feng, Quinn, Christopher John, Cao, Longbing
Temporal point processes (TPPs) are effective for modeling event occurrences over time, but they struggle with sparse and uncertain events in federated systems, where privacy is a major concern. To address this, we propose FedPP, a Federated …
External link:
http://arxiv.org/abs/2410.05637
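For context on the entry above: a TPP is specified by its conditional intensity function, and event generation can be illustrated with Ogata's thinning algorithm for a univariate Hawkes process. The sketch below is generic TPP background under assumed parameters (mu, alpha, beta, horizon), not an implementation of FedPP.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, rng=None):
    """Simulate a univariate Hawkes process on [0, horizon] with intensity
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))
    via Ogata's thinning algorithm. All parameters are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    events, t = [], 0.0

    def intensity(s):
        past = np.asarray(events)
        return mu + alpha * np.exp(-beta * (s - past)).sum()

    while True:
        # The intensity only decays between events, so its value just after
        # the current time upper-bounds lambda until the next accepted event.
        lam_bar = intensity(t)
        t += rng.exponential(1.0 / lam_bar)  # candidate arrival time
        if t >= horizon:
            break
        if rng.random() * lam_bar <= intensity(t):  # thinning: accept w.p. lambda(t)/lam_bar
            events.append(t)
    return np.asarray(events)

# Example: a moderately self-exciting process (stable since alpha/beta < 1).
times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, horizon=20.0)
print(f"{times.size} events; first few: {np.round(times[:5], 3)}")
```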
The recently proposed Bayesian Flow Networks (BFNs) show great potential in modeling parameter spaces, offering a unified strategy for handling continuous, discretized, and discrete data. However, BFNs cannot learn high-level semantic representation …
External link:
http://arxiv.org/abs/2405.15268
Author:
Lin, Kun-Yu, Ding, Henghui, Zhou, Jiaming, Tang, Yu-Ming, Peng, Yi-Xing, Zhao, Zhilin, Loy, Chen Change, Zheng, Wei-Shi
Building upon the impressive success of CLIP (Contrastive Language-Image Pretraining), recent pioneer works have proposed to adapt the powerful CLIP to video data, leading to efficient and effective video learners for open-vocabulary action recognition …
External link:
http://arxiv.org/abs/2403.01560
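For context on the CLIP-to-video entry above: the common zero-shot baseline encodes each sampled frame with image CLIP, mean-pools the frame embeddings into one video embedding, and matches it against text embeddings of prompted action names. A minimal sketch with the Hugging Face transformers CLIP API; the checkpoint name, prompt template, and label set are assumptions, and this is the reference baseline, not the paper's proposed learner.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint; any CLIP checkpoint with this API should work.
CKPT = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(CKPT).eval()
processor = CLIPProcessor.from_pretrained(CKPT)

@torch.no_grad()
def classify_video(frames, action_names):
    """Zero-shot action recognition baseline: encode each frame with image
    CLIP, mean-pool frame embeddings into one video embedding, then match it
    against text embeddings of prompted action names. `frames`: PIL images."""
    prompts = [f"a video of a person {name}" for name in action_names]
    inputs = processor(text=prompts, images=frames,
                       return_tensors="pt", padding=True)
    frame_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])
    video_emb = frame_emb.mean(dim=0, keepdim=True)          # average frames
    video_emb = video_emb / video_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    logits = model.logit_scale.exp() * video_emb @ text_emb.T
    return dict(zip(action_names, logits.softmax(dim=-1).squeeze(0).tolist()))
```

Frame averaging ignores temporal order entirely, which is exactly the kind of limitation work in this area aims to go beyond.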
Out-of-distribution (OOD) detection is essential in identifying test samples that deviate from the in-distribution (ID) data upon which a standard network is trained, ensuring network robustness and reliability. This paper introduces OOD knowledge distillation …
External link:
http://arxiv.org/abs/2311.07975
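The entry above concerns OOD detection; the snippet cuts off before describing the proposed distillation, so as generic background here is the standard maximum-softmax-probability (MSP) baseline of Hendrycks and Gimpel, which flags inputs whose top-class confidence falls below a threshold. The threshold value is an assumption to be tuned on held-out ID data; this is the usual comparison point, not the paper's method.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_scores(model, x):
    """Maximum softmax probability (MSP) baseline for OOD detection:
    an input whose top-class confidence is low is flagged as OOD.
    `model` maps a batch of inputs to class logits."""
    logits = model(x)
    return F.softmax(logits, dim=-1).max(dim=-1).values  # (batch,) in (0, 1]

def is_ood(model, x, threshold=0.5):
    # Threshold is illustrative; in practice it is chosen on held-out ID
    # data, e.g. so that 95% of ID samples are retained (TPR@95).
    return msp_scores(model, x) < threshold
```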
Author:
Zhao, Zhilin, Cao, Longbing
Real-life data are often non-IID due to complex distributions and interactions, and the sensitivity to the distribution of samples can differ among learning models. Accordingly, a key question for any supervised or unsupervised model is whether the p…
External link:
http://arxiv.org/abs/2310.01109
Author:
Zhao, Zhilin, Cao, Longbing
To classify in-distribution samples, deep neural networks explore strongly label-related information and discard weakly label-related information according to the information bottleneck. Out-of-distribution samples drawn from distributions differing …
External link:
http://arxiv.org/abs/2206.09387
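The information bottleneck mentioned in the entry above formalizes a trade-off between compressing the input X into a representation Z and preserving information about the label Y; in its standard Lagrangian form (Tishby et al.):

```latex
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y)
```

A network trained under this objective keeps strongly label-related information (high I(Z;Y)) and discards weakly label-related information (low I(X;Z)), which is the behavior the abstract refers to.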
Deep neural networks for image classification only learn to map in-distribution inputs to their corresponding ground truth labels in training without differentiating out-of-distribution samples from in-distribution ones. This results from the assumption …
External link:
http://arxiv.org/abs/2206.09385
The discrepancy between in-distribution (ID) and out-of-distribution (OOD) samples can lead to distributional vulnerability in deep neural networks, which in turn produces high-confidence predictions for OOD samples. This is mainly due to …
External link:
http://arxiv.org/abs/2206.09380
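The entry above describes high-confidence predictions on OOD samples; one widely used way to mitigate softmax overconfidence in scoring is the energy score of Liu et al. (2020), which aggregates all logits via log-sum-exp instead of taking the maximum probability. A minimal sketch, not the paper's own remedy; the temperature is a tunable assumption.

```python
import torch

@torch.no_grad()
def energy_scores(model, x, temperature=1.0):
    """Energy-based OOD score: E(x) = -T * logsumexp(logits / T).
    Higher energy indicates a more OOD-like input; unlike max-softmax,
    it uses all logits and is less prone to saturated high confidence.
    `temperature` is a tunable assumption."""
    logits = model(x)
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)
```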
The integrity of training data, even when annotated by experts, is far from guaranteed, especially for non-IID datasets comprising both in- and out-of-distribution samples. In an ideal scenario, the majority of samples would be in-distribution, while …
External link:
http://arxiv.org/abs/2206.09375
The distributions of real-life data streams are usually nonstationary; one interesting setting is a stream that can be decomposed into several offline intervals with a fixed time horizon but different distributions, plus an out-of-distribution online …
External link:
http://arxiv.org/abs/2202.05996
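The last entry above concerns streams whose intervals follow different distributions; a minimal, generic way to check for such a shift between two windows of a one-dimensional feature is a two-sample Kolmogorov-Smirnov test. This is illustrative background, not the paper's method; the significance level is an assumption.

```python
import numpy as np
from scipy.stats import ks_2samp

def window_shifted(reference, current, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov check for a distribution change between
    two windows of a 1-D feature stream. `alpha` is an illustrative
    significance level; this is a generic drift check."""
    result = ks_2samp(reference, current)
    return result.pvalue < alpha, result.statistic

# Example: a mean shift between two synthetic windows is flagged.
rng = np.random.default_rng(0)
shifted, stat = window_shifted(rng.normal(0, 1, 500), rng.normal(0.5, 1, 500))
print(shifted, round(stat, 3))
```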