Showing 1 - 10 of 24 results for search: '"Garrepalli, Risheek"'
Author:
Bhardwaj, Kartikeya, Pandey, Nilesh Prasad, Priyadarshi, Sweta, Ganapathy, Viswanath, Esteves, Rafael, Kadambi, Shreya, Borse, Shubhankar, Whatmough, Paul, Garrepalli, Risheek, Van Baalen, Mart, Teague, Harris, Nagel, Markus
In this paper, we propose Sparse High Rank Adapters (SHiRA) that directly finetune 1-2% of the base model weights while leaving others unchanged, thus resulting in a highly sparse adapter. This high sparsity incurs no inference overhead, enables rapid…
External link:
http://arxiv.org/abs/2407.16712
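As a rough illustration of the sparse-adapter idea in the SHiRA abstract above, the sketch below finetunes only a fixed ~1% of a weight matrix by masking its gradient update; the mask construction and the training step are assumptions for illustration, not the paper's exact recipe.

import torch

# Hedged sketch: keep ~1% of weight entries trainable via a fixed binary mask.
torch.manual_seed(0)
w = torch.nn.Parameter(torch.randn(1024, 1024))    # base model weight
k = int(0.01 * w.numel())                          # ~1% of entries
thresh = w.detach().abs().flatten().topk(k).values.min()
mask = (w.detach().abs() >= thresh).float()        # 1 = trainable, 0 = frozen

x = torch.randn(8, 1024)
loss = (x @ w).pow(2).mean()                       # stand-in task loss
loss.backward()
with torch.no_grad():
    w -= 1e-4 * w.grad * mask                      # only masked entries change

Because the update touches only the masked entries, the adapter is just the sparse difference between the finetuned and base weights, which can be applied or removed with no extra inference-time computation.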
Author:
Bhardwaj, Kartikeya, Pandey, Nilesh Prasad, Priyadarshi, Sweta, Ganapathy, Viswanath, Esteves, Rafael, Kadambi, Shreya, Borse, Shubhankar, Whatmough, Paul, Garrepalli, Risheek, Van Baalen, Mart, Teague, Harris, Nagel, Markus
Low Rank Adaptation (LoRA) has gained massive attention in recent generative AI research. One of the main advantages of LoRA is its ability to be fused into pretrained models, adding no overhead during inference. However, from a mobile deployment…
External link:
http://arxiv.org/abs/2406.13175
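The zero-overhead fusion property mentioned above follows from folding the low-rank product into the base weight; a minimal sketch, with dimensions and rank chosen for illustration:

import torch

d, r = 1024, 8                        # hidden size and LoRA rank (illustrative)
W = torch.randn(d, d)                 # frozen pretrained weight
A = torch.randn(r, d) * 0.01          # LoRA down-projection
B = torch.randn(d, r) * 0.01          # LoRA up-projection

x = torch.randn(2, d)
y_adapter = x @ W.t() + (x @ A.t()) @ B.t()    # unfused: extra matmuls per call

W_fused = W + B @ A                   # fold the adapter into the base weight
y_fused = x @ W_fused.t()             # single matmul, no inference overhead
assert torch.allclose(y_adapter, y_fused, atol=1e-4)

Fusing, however, is exactly what makes rapid adapter switching on device hard, which is the tension the two SHiRA entries above address.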
Author:
Borse, Shubhankar, Kadambi, Shreya, Pandey, Nilesh Prasad, Bhardwaj, Kartikeya, Ganapathy, Viswanath, Priyadarshi, Sweta, Garrepalli, Risheek, Esteves, Rafael, Hayat, Munawar, Porikli, Fatih
While Low-Rank Adaptation (LoRA) has proven beneficial for efficiently fine-tuning large models, LoRA fine-tuned text-to-image diffusion models lack diversity in the generated images, as the model tends to copy data from the observed training samples…
External link:
http://arxiv.org/abs/2406.08798
Optical flow estimation is crucial to a variety of vision tasks. Despite substantial recent advancements, achieving real-time on-device optical flow estimation remains a complex challenge. First, an optical flow model must be sufficiently lightweight…
External link:
http://arxiv.org/abs/2404.08135
Author:
Jeong, Jisoo, Cai, Hong, Garrepalli, Risheek, Lin, Jamie Menjay, Hayat, Munawar, Porikli, Fatih
The scarcity of ground-truth labels poses one major challenge in developing optical flow estimation models that are both generalizable and robust. While current methods rely on data augmentation, they have yet to fully exploit the rich information available…
External link:
http://arxiv.org/abs/2403.18092
Author:
Yasarla, Rajeev, Singh, Manish Kumar, Cai, Hong, Shi, Yunxiao, Jeong, Jisoo, Zhu, Yinhao, Han, Shizhong, Garrepalli, Risheek, Porikli, Fatih
In this paper, we propose a novel video depth estimation approach, FutureDepth, which enables the model to implicitly leverage multi-frame and motion cues to improve depth estimation by learning to predict the future during training. More specifically…
External link:
http://arxiv.org/abs/2403.12953
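A hedged sketch of the training-time idea described above: an auxiliary head that predicts next-frame features from past ones, so the backbone internalizes motion cues. The head, feature sizes, and loss are assumptions, not FutureDepth's actual architecture.

import torch
import torch.nn as nn

feat_dim = 64
future_head = nn.Sequential(                       # hypothetical auxiliary head
    nn.Conv2d(2 * feat_dim, feat_dim, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
)

# Stand-ins for backbone features from frames t-1, t, and t+1.
f_prev, f_curr, f_next = (torch.randn(1, feat_dim, 32, 64) for _ in range(3))

# Predicting the future forces the network to model multi-frame motion.
pred_next = future_head(torch.cat([f_prev, f_curr], dim=1))
aux_loss = nn.functional.l1_loss(pred_next, f_next)   # added to the depth loss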
Author:
Habibian, Amirhossein, Ghodrati, Amir, Fathima, Noor, Sautiere, Guillaume, Garrepalli, Risheek, Porikli, Fatih, Petersen, Jens
This work aims to improve the efficiency of text-to-image diffusion models. While diffusion models use computationally expensive UNet-based denoising operations in every generation step, we identify that not all operations are equally relevant for the…
External link:
http://arxiv.org/abs/2312.08128
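One concrete way to act on that observation, sketched here under assumptions (the cheap/expensive split and the reuse schedule are not the paper's actual method), is to refresh the costly part of the denoiser only every few steps and reuse a cached result in between:

import torch

def expensive_inner(h):
    return torch.tanh(h)              # stand-in for the costly UNet blocks

def denoise_step(x, step, cache, period=4):
    h = x * 0.9                       # cheap per-step computation (placeholder)
    if step % period == 0 or "inner" not in cache:
        cache["inner"] = expensive_inner(h)    # refresh expensive features
    return h + cache["inner"]         # otherwise reuse the cached result

x, cache = torch.randn(1, 4, 64, 64), {}
for t in range(20):                   # 20 denoising steps, only 5 expensive calls
    x = denoise_step(x, t, cache)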
We propose MAMo, a novel memory and attention framework for monocular video depth estimation. MAMo can augment and improve any single-image depth estimation network into a video depth estimation model, enabling it to take advantage of the temporal…
External link:
http://arxiv.org/abs/2307.14336
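A minimal sketch of the memory-plus-attention pattern named above, using standard cross-attention with queries from the current frame and keys/values from a feature memory; the shapes and memory contents are illustrative, not MAMo's exact design.

import torch
import torch.nn as nn

d = 64
cross_attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

curr = torch.randn(1, 32 * 64, d)          # current-frame feature tokens
memory = torch.randn(1, 3 * 32 * 64, d)    # tokens stored from past frames

# Attend from the current frame into the memory of previous frames,
# injecting temporal context into a single-image depth backbone.
fused, _ = cross_attn(query=curr, key=memory, value=memory)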
Author:
Garrepalli, Risheek, Jeong, Jisoo, Ravindran, Rajeswaran C, Lin, Jamie Menjay, Porikli, Fatih
Recent advancements in neural network-based optical flow estimation often come with prohibitively high computational and memory requirements, presenting challenges for their adaptation to mobile and low-power use cases. In this paper, we introduce…
External link:
http://arxiv.org/abs/2306.05691
We propose a novel data augmentation approach, DistractFlow, for training optical flow estimation models by introducing realistic distractions to the input frames. Based on a mixing ratio, we combine one of the frames in the pair with a distractor image…
External link:
http://arxiv.org/abs/2303.14078
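The mixing described above amounts to a convex combination of one input frame with an unrelated distractor image; a hedged sketch, where the function name and the fixed ratio are assumptions for illustration:

import torch

def distract_frame(frame, distractor, ratio):
    # Convex combination: keep `ratio` of the real frame and blend in
    # (1 - ratio) of the distractor as a realistic perturbation.
    return ratio * frame + (1.0 - ratio) * distractor

frame2 = torch.rand(3, 256, 512)        # second frame of the input pair
distractor = torch.rand(3, 256, 512)    # unrelated image, resized to match
mixed = distract_frame(frame2, distractor, ratio=0.8)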