Showing 1 - 10
of 1,866
for search: '"Wang, Rongrong"'
Author:
Alkhouri, Ismail, Denmat, Cedric Le, Li, Yingjie, Yu, Cunxi, Liu, Jia, Wang, Rongrong, Velasquez, Alvaro
Combinatorial Optimization (CO) plays a crucial role in addressing various significant problems, among them the challenging Maximum Independent Set (MIS) problem. In light of recent advancements in deep learning methods, efforts have been directed to …
External link:
http://arxiv.org/abs/2406.19532
Author:
Ghosh, Avrajit, Zhang, Xitong, Sun, Kenneth K., Qu, Qing, Ravishankar, Saiprasad, Wang, Rongrong
Published in:
International Conference on Machine Learning (ICML 2024)
We introduce Optimal Eye Surgeon (OES), a framework for pruning and training deep image generator networks. Typically, untrained deep convolutional networks, which include image sampling operations, serve as effective image priors (Ulyanov et al., 2018) …
External link:
http://arxiv.org/abs/2406.05288
Towards Understanding Task-agnostic Debiasing Through the Lenses of Intrinsic Bias and Forgetfulness
Author:
Liu, Guangliang, Afshari, Milad, Zhang, Xitong, Xue, Zhiyu, Ghosh, Avrajit, Bashyal, Bidhan, Wang, Rongrong, Johnson, Kristen
While task-agnostic debiasing provides notable generalizability and reduced reliance on downstream data, its impact on language modeling ability and the risk of relearning social biases from downstream task-specific data remain the two most significant …
External link:
http://arxiv.org/abs/2406.04146
Author:
Liu, Guangliang, Mao, Haitao, Cao, Bochuan, Xue, Zhiyu, Johnson, Kristen, Tang, Jiliang, Wang, Rongrong
Large Language Models (LLMs) can improve their responses when instructed to do so, a capability known as self-correction. When these instructions lack specific details about the issues in the response, this is referred to as leveraging the intrinsic self-correction …
External link:
http://arxiv.org/abs/2406.02378
Deep Neural Networks (DNNs) have achieved remarkable success in addressing many previously unsolvable tasks. However, the storage and computational requirements associated with DNNs pose a challenge for deploying these trained models on resource-limited …
External link:
http://arxiv.org/abs/2405.03089
We propose two provably accurate methods for low CP-rank tensor completion - one using adaptive sampling and one using nonadaptive sampling. Both of our algorithms combine matrix completion techniques for a small number of slices along with Jennrich's algorithm …
External link:
http://arxiv.org/abs/2403.09932
The ability of deep image prior (DIP) to recover high-quality images from incomplete or corrupted measurements has made it popular in inverse problems in image restoration and medical imaging including magnetic resonance imaging (MRI). However, conventional …
External link:
http://arxiv.org/abs/2402.04097
In-Context Learning (ICL) empowers Large Language Models (LLMs) with the capacity to learn in context, achieving downstream generalization without gradient updates but with a few in-context examples. Despite the encouraging empirical success, the underlying …
External link:
http://arxiv.org/abs/2402.02212
Fine-tuning pretrained language models (PLMs) for downstream tasks is a large-scale optimization problem, in which the choice of the training algorithm critically determines how well the trained model can generalize to unseen test data, especially in …
External link:
http://arxiv.org/abs/2310.17588
Deep learning (DL) techniques have been extensively employed in magnetic resonance imaging (MRI) reconstruction, delivering notable performance enhancements over traditional non-DL methods. Nonetheless, recent studies have identified vulnerabilities …
External link:
http://arxiv.org/abs/2309.05794