Showing 1 - 10 of 35 for search: "Manjunatha, Varun"
Author:
Jiang, Yue, Lutteroth, Christof, Jain, Rajiv, Tensmeyer, Christopher, Manjunatha, Varun, Stuerzlinger, Wolfgang, Morariu, Vlad
Designing adaptive documents that are visually appealing across various devices and for diverse viewers is a challenging task, owing to the wide variety of devices and the differing requirements and preferences of viewers. Alterations to a document's …
External link:
http://arxiv.org/abs/2410.15504
Author:
Basu, Samyadeep, Rezaei, Keivan, Kattakinda, Priyatham, Rossi, Ryan, Zhao, Cherry, Morariu, Vlad, Manjunatha, Varun, Feizi, Soheil
Identifying layers within text-to-image models that control visual attributes can facilitate efficient model editing through closed-form updates. Recent work leveraging causal tracing shows that early Stable-Diffusion variants confine knowledge … (a minimal sketch of a closed-form edit follows this entry)
External link:
http://arxiv.org/abs/2405.01008
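The closed-form editing mentioned in the entry above can be pictured as a rank-one weight update of the kind used in causal-tracing-based editors (e.g., ROME). The sketch below is a generic illustration under assumed shapes; the function name, the covariance estimate, and the regularizer are assumptions, not the procedure of the paper itself.

```python
# Hedged sketch: rank-one, closed-form edit of a linear layer located by causal
# tracing. Illustrative only; not the exact update rule from the paper above.
import numpy as np

def closed_form_edit(W, k_star, v_star, K_prior):
    """Return W' such that W' @ k_star ~= v_star while barely moving other keys.

    W       : (d_out, d_in) weight of the layer chosen by causal tracing
    k_star  : (d_in,)  key vector for the attribute being edited
    v_star  : (d_out,) desired output for that key
    K_prior : (d_in, N) sample of prior keys used to estimate their covariance
    """
    C = K_prior @ K_prior.T + 1e-4 * np.eye(W.shape[1])  # regularized key covariance
    c_inv_k = np.linalg.solve(C, k_star)                  # C^{-1} k*
    residual = v_star - W @ k_star                        # what the edit must add
    return W + np.outer(residual, c_inv_k) / (k_star @ c_inv_k)
```

By construction, the edited weight maps k_star exactly to v_star, while the covariance term keeps the perturbation small in directions occupied by other keys.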
Author:
Kim, Yekyung, Chang, Yapei, Karpinska, Marzena, Garimella, Aparna, Manjunatha, Varun, Lo, Kyle, Goyal, Tanya, Iyyer, Mohit
Published in:
1st Conference on Language Modeling (COLM 2024)
While long-context large language models (LLMs) can technically summarize book-length documents (>100K tokens), the length and complexity of the documents have so far prohibited evaluations of input-dependent aspects like faithfulness. In this paper, …
External link:
http://arxiv.org/abs/2404.01261
Text-to-Image Diffusion Models such as Stable-Diffusion and Imagen have achieved unprecedented photorealism, with state-of-the-art FID scores on MS-COCO and other generation benchmarks. Given a caption, image generation requires fine-grained …
External link:
http://arxiv.org/abs/2310.13730
Author:
Wang, Shufan, Song, Yixiao, Drozdov, Andrew, Garimella, Aparna, Manjunatha, Varun, Iyyer, Mohit
In this paper, we study the generation quality of interpolation-based retrieval-augmented language models (LMs). These methods, best exemplified by the KNN-LM, interpolate the LM's predicted distribution of the next word with a distribution formed from … (a minimal sketch of this interpolation follows this entry)
External link:
http://arxiv.org/abs/2305.14625
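As a rough illustration of the interpolation these methods use, the sketch below mixes a kNN distribution built from retrieved neighbors with the base LM's next-token distribution. The function name, the mixing weight lam, and the distance-to-weight mapping are illustrative assumptions, not details taken from the paper above.

```python
# Hedged sketch of kNN-LM-style interpolation: p = lam * p_kNN + (1 - lam) * p_LM.
import torch
import torch.nn.functional as F

def knn_lm_next_token(lm_logits, neighbor_token_ids, neighbor_distances,
                      vocab_size, lam=0.25, temperature=1.0):
    """Interpolate the LM's next-token distribution with a retrieved-neighbor one.

    lm_logits          : (vocab_size,) logits from the base LM for the next token
    neighbor_token_ids : (k,) LongTensor of token ids of the retrieved neighbors
    neighbor_distances : (k,) distances of those neighbors in the datastore
    """
    p_lm = F.softmax(lm_logits, dim=-1)

    # Closer neighbors get more probability mass: softmax over negative distances.
    weights = F.softmax(-neighbor_distances / temperature, dim=-1)
    p_knn = torch.zeros(vocab_size)
    p_knn.scatter_add_(0, neighbor_token_ids, weights)

    return lam * p_knn + (1.0 - lam) * p_lm
```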
Many language tasks (e.g., Named Entity Recognition, Part-of-Speech tagging, and Semantic Role Labeling) are naturally framed as sequence tagging problems. However, there has been comparatively little work on interpretability methods for sequence tagging …
External link:
http://arxiv.org/abs/2210.14177
Author:
Bansal, Arpit, Chiang, Ping-yeh, Curry, Michael, Jain, Rajiv, Wigington, Curtis, Manjunatha, Varun, Dickerson, John P., Goldstein, Tom
Published in:
ICML 2022
Watermarking is a commonly used strategy to protect creators' rights to digital images, videos, and audio. Recently, watermarking methods have been extended to deep learning models -- in principle, the watermark should be preserved when an adversary … (a minimal watermark-verification sketch follows this entry)
External link:
http://arxiv.org/abs/2207.07972
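One common way such model watermarks are verified is by checking accuracy on a secret trigger set of inputs with owner-assigned labels. The sketch below shows that generic check; the function, threshold, and trigger-set format are assumptions, and this is not the certification procedure studied in the paper above.

```python
# Hedged sketch: backdoor-style watermark check via trigger-set accuracy.
import torch

def watermark_present(model, trigger_inputs, trigger_labels, threshold=0.9):
    """Return True if the model still predicts the watermark labels on the trigger set.

    trigger_inputs : (n, ...) tensor of secret watermark inputs
    trigger_labels : (n,)     labels the owner trained the model to emit on them
    """
    model.eval()
    with torch.no_grad():
        predictions = model(trigger_inputs).argmax(dim=-1)
    accuracy = (predictions == trigger_labels).float().mean().item()
    return accuracy >= threshold
```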
Author:
Li, Peizhao, Gu, Jiuxiang, Kuen, Jason, Morariu, Vlad I., Zhao, Handong, Jain, Rajiv, Manjunatha, Varun, Liu, Hongfu
We propose SelfDoc, a task-agnostic pre-training framework for document image understanding. Because documents are multimodal and are intended for sequential reading, our framework exploits the positional, textual, and visual information of every semantic …
External link:
http://arxiv.org/abs/2106.03331
Existing work on tabular representation learning jointly models tables and associated text using self-supervised objective functions derived from pretrained language models such as BERT. While this joint pretraining improves tasks involving paired tables … (a minimal sketch of such joint encoding follows this entry)
External link:
http://arxiv.org/abs/2105.02584
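A minimal way to see what jointly modeling tables and text with a BERT-style encoder can look like is to linearize the table and feed it as a second segment. The sketch below is a generic illustration; the model name, separators, and helper function are assumptions, not the paper's pretraining objective.

```python
# Hedged sketch: encode a table together with text using a BERT-style encoder.
from transformers import AutoTokenizer, AutoModel

def encode_table_with_text(table_rows, text, model_name="bert-base-uncased"):
    """Encode a table (list of rows, each a list of cell strings) alongside text."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)

    # Flatten the table row by row, marking cell and row boundaries.
    linearized = " [SEP] ".join(" | ".join(row) for row in table_rows)
    inputs = tokenizer(text, linearized, truncation=True, return_tensors="pt")
    return model(**inputs).last_hidden_state

# Usage:
# h = encode_table_with_text([["city", "pop"], ["Oslo", "700k"]], "Which city is larger?")
```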
Author:
Li, Kai, Wigington, Curtis, Tensmeyer, Chris, Morariu, Vlad I., Zhao, Handong, Manjunatha, Varun, Barmpalios, Nikolaos, Fu, Yun
Cross-Domain Detection (XDD) aims to train an object detector on labeled images from a source domain so that it performs well in a target domain with only unlabeled images. Existing approaches achieve this either by aligning the feature maps or … (a minimal feature-alignment sketch follows this entry)
External link:
http://arxiv.org/abs/2104.08689
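Feature-map alignment across domains is often done adversarially with a gradient-reversal layer (DANN-style domain adaptation). The sketch below illustrates that generic idea; the class names and layer sizes are assumptions, and this is not the specific method proposed in the paper above.

```python
# Hedged sketch: adversarial domain alignment with a gradient-reversal layer.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients so the backbone learns domain-invariant features.
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    def __init__(self, feat_dim, lam=1.0):
        super().__init__()
        self.lam = lam
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, features):
        # features: (batch, feat_dim) pooled feature maps from the detector backbone
        reversed_feats = GradientReversal.apply(features, self.lam)
        return self.classifier(reversed_feats)

# Training sketch: minimize cross-entropy of DomainDiscriminator on domain labels
# (source=0, target=1); the gradient reversal makes the backbone maximize it,
# pushing source and target feature distributions toward each other.
```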