Showing 1 - 10 of 58 for search: '"Liang, Youwei"'
Author:
Collins, Katherine M., Kim, Najoung, Bitton, Yonatan, Rieser, Verena, Omidshafiei, Shayegan, Hu, Yushi, Chen, Sherol, Dutta, Senjuti, Chang, Minsuk, Lee, Kimin, Liang, Youwei, Evans, Georgina, Singla, Sahil, Li, Gang, Weller, Adrian, He, Junfeng, Ramachandran, Deepak, Dvijotham, Krishnamurthy Dj
Human feedback plays a critical role in learning and refining reward models for text-to-image generation, but the optimal form the feedback should take for learning an accurate reward function has not been conclusively established. This paper investigates …
External link:
http://arxiv.org/abs/2406.16807
Pretrained Language Models (PLMs) have advanced Natural Language Processing (NLP) tasks significantly, but finetuning PLMs on low-resource datasets poses significant challenges such as instability and overfitting. Previous methods tackle these issues …
External link:
http://arxiv.org/abs/2403.12918
Author:
Huo, Mingjia, Somayajula, Sai Ashish, Liang, Youwei, Zhang, Ruisi, Koushanfar, Farinaz, Xie, Pengtao
Large language models generate high-quality responses with potential misinformation, underscoring the need for regulation by distinguishing AI-generated and human-written texts. Watermarking is pivotal in this context, which involves embedding hidden …
External link:
http://arxiv.org/abs/2402.18059
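The hidden-signal embedding this entry alludes to can be illustrated with a minimal green-list sketch, a generic token-level LLM watermarking scheme, not this paper's specific method; the hash-based vocabulary partition and the `gamma` split ratio here are assumptions for illustration only:

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], gamma: float = 0.5) -> set[str]:
    # Pseudo-randomly partition the vocabulary, seeded by the previous token;
    # a watermarking generator would softly boost these "green" tokens.
    ranked = sorted(
        vocab,
        key=lambda t: hashlib.sha256((prev_token + t).encode()).hexdigest(),
    )
    return set(ranked[: int(gamma * len(vocab))])

def green_fraction(tokens: list[str], vocab: list[str], gamma: float = 0.5) -> float:
    # A detector counts how many tokens land in their context's green list;
    # watermarked text shows a fraction well above gamma.
    hits = sum(t in green_list(p, vocab, gamma) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

A detector would flag text whose green fraction is statistically above `gamma`; the paper's own scheme differs in how the signal is embedded and verified.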
The Segment Anything Model (SAM), a foundation model pretrained on millions of images and segmentation masks, has significantly advanced semantic segmentation, a fundamental task in computer vision. Despite its strengths, SAM encounters two major challenges …
External link:
http://arxiv.org/abs/2402.16338
Author:
Liang, Youwei, He, Junfeng, Li, Gang, Li, Peizhao, Klimovskiy, Arseniy, Carolan, Nicholas, Sun, Jiao, Pont-Tuset, Jordi, Young, Sarah, Yang, Feng, Ke, Junjie, Dvijotham, Krishnamurthy Dj, Collins, Katie, Luo, Yiwen, Li, Yang, Kohlhoff, Kai J, Ramachandran, Deepak, Navalpakkam, Vidhya
Recent Text-to-Image (T2I) generation models such as Stable Diffusion and Imagen have made significant progress in generating high-resolution images based on text descriptions. However, many generated images still suffer from issues such as artifacts …
External link:
http://arxiv.org/abs/2312.10240
Author:
Li, Peizhao, He, Junfeng, Li, Gang, Bhargava, Rachit, Shen, Shaolei, Valliappan, Nachiappan, Liang, Youwei, Gu, Hongxiang, Ramachandran, Venky, Farhadi, Golnaz, Li, Yang, Kohlhoff, Kai J, Navalpakkam, Vidhya
Progress in human behavior modeling involves understanding both implicit, early-stage perceptual behavior such as human attention and explicit, later-stage behavior such as subjective preferences/likes. Yet, most prior research has focused on modeling …
External link:
http://arxiv.org/abs/2312.10175
Author:
Cummins, Chris, Seeker, Volker, Grubisic, Dejan, Elhoushi, Mostafa, Liang, Youwei, Roziere, Baptiste, Gehring, Jonas, Gloeckle, Fabian, Hazelwood, Kim, Synnaeve, Gabriel, Leather, Hugh
We explore the novel application of Large Language Models to code optimization. We present a 7B-parameter transformer model trained from scratch to optimize LLVM assembly for code size. The model takes as input unoptimized assembly and outputs a list …
External link:
http://arxiv.org/abs/2309.07062
A ChatGPT-like system for drug compounds could be a game-changer in pharmaceutical research, accelerating drug discovery, enhancing our understanding of structure-activity relationships, guiding lead optimization, aiding drug repurposing, reducing the …
External link:
http://arxiv.org/abs/2309.03907
Author:
Liang, Youwei, Stone, Kevin, Shameli, Ali, Cummins, Chris, Elhoushi, Mostafa, Guo, Jiadong, Steiner, Benoit, Yang, Xiaomeng, Xie, Pengtao, Leather, Hugh, Tian, Yuandong
Finding the optimal pass sequence of compilation can lead to a significant reduction in program size and/or improvement in program efficiency. Prior works on compilation pass ordering have two major drawbacks. They either require an excessive budget …
External link:
http://arxiv.org/abs/2301.05104
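The pass-ordering problem can be sketched with a toy objective and a random-search baseline; the `effects` model and its diminishing-returns rule are invented purely for illustration, and the paper's approach is a learned policy rather than the brute-force search shown here:

```python
import random

def apply_passes(size: float, passes: list[str], effects: dict[str, float]) -> float:
    # Toy stand-in for a compiler: each "pass" scales program size by a fixed
    # factor, with diminishing returns the later it appears in the sequence.
    for i, name in enumerate(passes):
        size *= effects[name] ** (1.0 / (i + 1))
    return size

def random_search(effects: dict[str, float], budget: int = 100,
                  length: int = 4, seed: int = 0) -> tuple[float, list[str]]:
    # Baseline: sample random pass sequences and keep the one that yields
    # the smallest program. Real approaches replace this with learned search.
    rng = random.Random(seed)
    names = list(effects)
    best_size, best_seq = float("inf"), []
    for _ in range(budget):
        seq = [rng.choice(names) for _ in range(length)]
        size = apply_passes(1000.0, seq, effects)
        if size < best_size:
            best_size, best_seq = size, seq
    return best_size, best_seq
```

The order sensitivity baked into `apply_passes` is what makes the search space combinatorial: the same multiset of passes can yield different sizes depending on arrangement.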
Vision Transformers (ViTs) take all the image patches as tokens and construct multi-head self-attention (MHSA) among them. Complete leverage of these image tokens brings redundant computations since not all the tokens are attentive in MHSA. Examples …
External link:
http://arxiv.org/abs/2202.07800
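The token-redundancy observation can be sketched as attention-based token pruning: score each patch token by the attention the [CLS] token pays to it and keep only the top-k, preserving spatial order. This is a simplified illustration of the general technique; the paper's scoring and token-reorganization details differ:

```python
def prune_tokens(tokens: list[list[float]], attn_cls: list[float],
                 keep: int) -> list[list[float]]:
    # tokens: one embedding per image patch; attn_cls: attention weight the
    # [CLS] token assigns to each patch (averaged over heads). Keep the `keep`
    # most-attended patches, in their original spatial order.
    top = sorted(range(len(tokens)), key=lambda i: attn_cls[i], reverse=True)[:keep]
    return [tokens[i] for i in sorted(top)]
```

Dropping inattentive tokens this way shrinks the quadratic MHSA cost at later layers, since attention scales with the square of the number of surviving tokens.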