Showing 1 - 10 of 117 results for search: '"Ma, Wenxuan"'
This paper presents a Consensus-based Distributed Quantum Kernel Learning (CDQKL) framework aimed at improving speech recognition through distributed quantum computing. CDQKL addresses the challenges of scalability and data privacy in centralized quantum …
External link:
http://arxiv.org/abs/2409.05770
Cross-modality transfer aims to leverage large pretrained models to complete tasks that may not belong to the modality of the pretraining data. Existing works achieve certain success in extending classical finetuning to cross-modal scenarios, yet we still …
External link:
http://arxiv.org/abs/2406.18864
Large-scale pretrained models have proven immensely valuable in handling data-intensive modalities like text and image. However, fine-tuning these models for certain specialized modalities, such as protein sequences and cosmic rays, poses challenges due …
External link:
http://arxiv.org/abs/2406.09003
Developing generalizable models that can effectively learn from limited data and with minimal reliance on human supervision is a significant objective within the machine learning community, particularly in the era of deep neural networks. Therefore, …
External link:
http://arxiv.org/abs/2311.08782
To improve the uncertainty quantification of variance networks, we propose a novel tree-structured local neural network model that partitions the feature space into multiple regions based on uncertainty heterogeneity. A tree is built upon giving the …
External link:
http://arxiv.org/abs/2212.12658
Ensemble Multi-Quantiles: Adaptively Flexible Distribution Prediction for Uncertainty Quantification
We propose a novel, succinct, and effective approach for distribution prediction to quantify uncertainty in machine learning. It incorporates adaptively flexible distribution prediction of $\mathbb{P}(\mathbf{y}|\mathbf{X}=x)$ in regression tasks. Th…
External link:
http://arxiv.org/abs/2211.14545
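The entry above concerns predicting several quantiles of $\mathbb{P}(\mathbf{y}|\mathbf{X}=x)$ at once to approximate a full predictive distribution. A standard building block for multi-quantile regression of this kind is the pinball (quantile) loss; the sketch below is a generic illustration with toy numbers, not the paper's exact formulation:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss for a single quantile level q in (0, 1)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Each column of preds is a candidate q-quantile of P(y | X = x);
# the training objective averages the pinball loss over levels.
quantiles = [0.1, 0.5, 0.9]
y = np.array([1.0, 2.0, 3.0])
preds = np.column_stack([y - 0.5, y, y + 0.5])  # toy per-level predictions
total = sum(pinball_loss(y, preds[:, i], q)
            for i, q in enumerate(quantiles)) / len(quantiles)
```

Minimizing the pinball loss at level `q` drives the prediction toward the `q`-quantile of the conditional distribution, which is why a set of such predictions yields a flexible distributional estimate.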
Extensive studies on Unsupervised Domain Adaptation (UDA) have propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains. Most UDA approaches align features within a common embedding space and …
External link:
http://arxiv.org/abs/2208.01195
Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Most existing UDA approaches enable knowledge transfer via learning domain-invariant representations and sharing one classifier …
External link:
http://arxiv.org/abs/2111.12941
Domain adaptation (DA) enables knowledge transfer from a labeled source domain to an unlabeled target domain by reducing the cross-domain distribution discrepancy. Most prior DA approaches leverage complicated and powerful deep neural networks to imp…
External link:
http://arxiv.org/abs/2103.16403
Published in:
In Measurement, 30 November 2023, 222