Showing 1 - 10 of 481
for search: '"Dukler A"'
Author:
Tao, Chaofan, Kwon, Gukyeong, Gunjal, Varad, Yang, Hao, Cai, Zhaowei, Dukler, Yonatan, Swaminathan, Ashwin, Manmatha, R., Taylor, Colin Jon, Soatto, Stefano
We study the capability of Video-Language (VidL) models in understanding compositions between objects, attributes, actions and their relations. Composition understanding becomes particularly challenging for video data since the compositional relation…
External link:
http://arxiv.org/abs/2408.09511
Author:
Zancato, Luca, Seshadri, Arjun, Dukler, Yonatan, Golatkar, Aditya, Shen, Yantao, Bowman, Benjamin, Trager, Matthew, Achille, Alessandro, Soatto, Stefano
We describe a family of architectures to support transductive inference by allowing memory to grow to a finite but a-priori unknown bound while making efficient use of finite resources for inference. Current architectures use such resources to represent…
External link:
http://arxiv.org/abs/2407.06324
Author:
Kaul, Prannay, Li, Zhizhong, Yang, Hao, Dukler, Yonatan, Swaminathan, Ashwin, Taylor, C. J., Soatto, Stefano
Mitigating hallucinations in large vision-language models (LVLMs) remains an open problem. Recent benchmarks do not address hallucinations in open-ended free-form responses, which we term "Type I hallucinations". Instead, they focus on hallucinations…
External link:
http://arxiv.org/abs/2405.05256
Author:
Dukler, Yonatan, Bowman, Benjamin, Achille, Alessandro, Golatkar, Aditya, Swaminathan, Ashwin, Soatto, Stefano
We present Synergy Aware Forgetting Ensemble (SAFE), a method to adapt large models on a diverse collection of data while minimizing the expected cost to remove the influence of training samples from the trained model. This process, also known as selective forgetting…
External link:
http://arxiv.org/abs/2304.13169
Prompt learning is an efficient approach to adapt transformers by inserting a learnable set of parameters into the input and intermediate representations of a pre-trained model. In this work, we present Expressive Prompts with Residuals (EXPRES), which…
External link:
http://arxiv.org/abs/2303.15591
Author:
Dukler, Yonatan, Achille, Alessandro, Yang, Hao, Vivek, Varsha, Zancato, Luca, Bowman, Benjamin, Ravichandran, Avinash, Fowlkes, Charless, Swaminathan, Ashwin, Soatto, Stefano
We propose InCA, a lightweight method for transfer learning that cross-attends to any activation layer of a pre-trained model. During training, InCA uses a single forward pass to extract multiple activations, which are passed to external cross-attention…
External link:
http://arxiv.org/abs/2303.04105
Author:
Dukler, Yonatan, Achille, Alessandro, Paolini, Giovanni, Ravichandran, Avinash, Polito, Marzia, Soatto, Stefano
We present a method to compute the derivative of a learning task with respect to a dataset. A learning task is a function from a training set to the validation error, which can be represented by a trained deep neural network (DNN). The "dataset derivative"…
External link:
http://arxiv.org/abs/2111.09785
The success of deep neural networks is in part due to the use of normalization layers. Normalization layers like Batch Normalization, Layer Normalization and Weight Normalization are ubiquitous in practice, as they improve generalization performance…
External link:
http://arxiv.org/abs/2006.06878
We revisit the tears of wine problem for thin films in water-ethanol mixtures and present a new model for the climbing dynamics. The new formulation includes a Marangoni stress balanced by both the normal and tangential components of gravity as well…
External link:
http://arxiv.org/abs/1909.09898
We propose regularization strategies for learning discriminative models that are robust to in-class variations of the input data. We use the Wasserstein-2 geometry to capture semantically meaningful neighborhoods in the space of images, and define a…
External link:
http://arxiv.org/abs/1909.06860