Showing 1 - 10 of 2,002 for the search: '"Lin, Guang"'
We propose a novel fine-tuning method to achieve multi-operator learning through training a distributed neural operator with diverse function data and then zero-shot fine-tuning the neural network using physics-informed losses for downstream tasks. …
External link:
http://arxiv.org/abs/2411.07239
Optimizing the learning rate remains a critical challenge in machine learning, essential for achieving model stability and efficient convergence. The Vector Auxiliary Variable (VAV) algorithm introduces a novel energy-based self-adjustable learning rate …
External link:
http://arxiv.org/abs/2411.06573
Author:
Zheng, Haoyang, Lin, Guang
Sparse Identification of Nonlinear Dynamical Systems (SINDy) is a powerful tool for the data-driven discovery of governing equations. However, it encounters challenges when modeling complex dynamical systems involving high-order derivatives or discontinuities …
External link:
http://arxiv.org/abs/2411.01719
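For context on the result above: the core of plain SINDy (not the authors' extension) is sequentially thresholded least squares over a library of candidate functions. A minimal sketch, assuming a toy system dx/dt = -2x and a hand-picked library {1, x, x²} (both are illustrative choices, not from the paper):

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, iters=10):
    # Sequentially thresholded least squares, the sparse-regression
    # step at the heart of SINDy: fit, zero out small coefficients,
    # refit on the surviving library terms, and repeat.
    xi, *_ = np.linalg.lstsq(theta, dxdt, rcond=None)
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big], *_ = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)
    return xi

# Synthetic trajectory of dx/dt = -2x, with the exact derivative
# supplied for illustration (real SINDy estimates it numerically).
t = np.linspace(0.0, 2.0, 200)
x = np.exp(-2.0 * t)
dxdt = -2.0 * x

# Candidate library: columns for the functions 1, x, x^2.
theta = np.column_stack([np.ones_like(x), x, x**2])
xi = stlsq(theta, dxdt)
print(xi)
```

With clean data the refit keeps only the single active term, a coefficient of roughly -2 on x, while the constant and quadratic terms are thresholded to zero.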
Author:
Mollaali, Amirhossein, Zufferey, Gabriel, Constante-Flores, Gonzalo, Moya, Christian, Li, Can, Lin, Guang, Yue, Meng
This paper proposes a new data-driven methodology for predicting intervals of post-fault voltage trajectories in power systems. We begin by introducing the Quantile Attention-Fourier Deep Operator Network (QAF-DeepONet), designed to capture the complex …
External link:
http://arxiv.org/abs/2410.24162
Over 44 million Americans currently suffer from food insecurity, of whom 13 million are children. Across the United States, thousands of food banks and pantries serve as vital sources of food and other forms of aid for food-insecure families. By optimizing …
External link:
http://arxiv.org/abs/2410.15420
Recent works have shown theoretically and empirically that redundant data dimensions are a source of adversarial vulnerability. However, the inverse doesn't seem to hold in practice; employing dimension-reduction techniques doesn't exhibit robustness …
External link:
http://arxiv.org/abs/2410.06921
This work introduces a novel and efficient Bayesian federated learning algorithm, namely, the Federated Averaging stochastic Hamiltonian Monte Carlo (FA-HMC), for parameter estimation and uncertainty quantification. We establish rigorous convergence …
External link:
http://arxiv.org/abs/2407.06935
Author:
Lin, Guang, Zhao, Qibin
Over the past two years, the use of large language models (LLMs) has advanced rapidly. While these LLMs offer considerable convenience, they also raise security concerns, as LLMs are vulnerable to adversarial attacks via well-designed textual perturbations …
External link:
http://arxiv.org/abs/2405.20770
Replica exchange stochastic gradient Langevin dynamics (reSGLD) is an effective sampler for non-convex learning on large-scale datasets. However, the simulation may encounter stagnation issues when the high-temperature chain delves too deeply into the …
External link:
http://arxiv.org/abs/2405.07839
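The replica-exchange idea behind reSGLD can be illustrated on a toy double-well potential. This is a minimal sketch under assumed settings (the potential, temperatures, step size, and simplified swap rule are all illustrative, not the paper's scheme): a low-temperature chain exploits the current mode, a high-temperature chain explores, and the two occasionally swap states.

```python
import numpy as np

def U(x):
    # Toy double-well potential with minima at x = +/-1 (assumed example).
    return (x**2 - 1.0) ** 2

def grad_U(x):
    return 4.0 * x * (x**2 - 1.0)

def resgld(steps=5000, lr=1e-3, tau_lo=0.05, tau_hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x_lo, x_hi = 0.5, -0.5
    samples = []
    for _ in range(steps):
        # Langevin steps at two temperatures: gradient drift plus
        # temperature-scaled Gaussian noise.
        x_lo += -lr * grad_U(x_lo) + np.sqrt(2.0 * lr * tau_lo) * rng.normal()
        x_hi += -lr * grad_U(x_hi) + np.sqrt(2.0 * lr * tau_hi) * rng.normal()
        # Metropolis-style swap based on the inverse-temperature gap.
        log_ratio = (1.0 / tau_lo - 1.0 / tau_hi) * (U(x_lo) - U(x_hi))
        if np.log(rng.random()) < min(0.0, log_ratio):
            x_lo, x_hi = x_hi, x_lo
        samples.append(x_lo)
    return np.array(samples)

samples = resgld()
print(samples[-5:])
```

The swaps let the low-temperature chain inherit positions the exploratory chain found in the other well, which is what mitigates getting stuck in a single mode.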
Diffusion model (DM)-based adversarial purification (AP) has been shown to be the most powerful alternative to adversarial training (AT). However, these methods neglect the fact that pre-trained diffusion models themselves are not robust to adversarial …
External link:
http://arxiv.org/abs/2403.16067