Showing 1 - 10 of 74 for search: '"Dubey, Avinava"'
We present the first linear-time randomized algorithms for unbiased approximation of the celebrated family of general random walk kernels (RWKs) for sparse graphs, covering both labelled and unlabelled instances. The previous fastest …
External link:
http://arxiv.org/abs/2410.10368
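The random walk kernel entry above admits a small illustration. The sketch below is not the paper's linear-time algorithm; it is a generic Monte Carlo estimator (the name `rwk_series_estimate` and the continuation probability `p` are illustrative assumptions) showing how the geometric walk series 1ᵀ(I − λA)⁻¹1 underlying RWKs can be estimated without bias by reweighted random walks:

```python
import numpy as np

def rwk_series_estimate(A, lam, p=0.5, n_samples=2000, rng=None):
    """Monte Carlo estimate of S = 1^T (I - lam*A)^{-1} 1, i.e. the
    geometric walk series sum_k lam^k * 1^T A^k 1 that underlies random
    walk kernels (computed on a product graph in the graph-kernel setting).
    A walk continues with probability p and is reweighted by
    lam * deg(u) / p, which keeps every sample unbiased."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    neighbors = [np.flatnonzero(A[u]) for u in range(n)]
    total = 0.0
    for _ in range(n_samples):
        u = int(rng.integers(n))   # uniform start node
        x, c = 1.0, 1.0            # running estimate, cumulative weight
        while rng.random() < p and len(neighbors[u]) > 0:
            c *= lam * len(neighbors[u]) / p   # importance weight
            u = int(rng.choice(neighbors[u]))  # uniform random neighbor
            x += c
        total += x
    return n * total / n_samples   # start node was sampled uniformly
```

On a 3-node path graph with λ = 0.1, the estimate concentrates around the closed-form value 1ᵀ(I − λA)⁻¹1 ≈ 3.469.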
Author:
Kim, Sang Min, Kim, Byeongchan, Sehanobish, Arijit, Choromanski, Krzysztof, Shim, Dongseok, Dubey, Avinava, Oh, Min-hwan
Improving the efficiency and performance of implicit neural representations in 3D, particularly Neural Radiance Fields (NeRF) and Signed Distance Fields (SDF), is crucial for enabling their use in real-time applications. These models, while capable of …
External link:
http://arxiv.org/abs/2410.09771
Author:
Wang, Kaiwen, Kidambi, Rahul, Sullivan, Ryan, Agarwal, Alekh, Dann, Christoph, Michi, Andrea, Gelmi, Marco, Li, Yunxuan, Gupta, Raghav, Dubey, Avinava, Ramé, Alexandre, Ferret, Johan, Cideron, Geoffrey, Hou, Le, Yu, Hongkun, Ahmed, Amr, Mehta, Aranyak, Hussenot, Léonard, Bachem, Olivier, Leurent, Edouard
Reward-based fine-tuning is crucial for aligning language policies with intended behaviors (e.g., creativity and safety). A key challenge is to develop steerable language models that trade off multiple (conflicting) objectives in a flexible and efficient …
External link:
http://arxiv.org/abs/2407.15762
Author:
Sehanobish, Arijit, Dubey, Avinava, Choromanski, Krzysztof, Chowdhury, Somnath Basu Roy, Jain, Deepali, Sindhwani, Vikas, Chaturvedi, Snigdha
Recent efforts to scale Transformer models have demonstrated rapid progress across a wide range of tasks (Wei et al., 2022). However, fine-tuning these models for downstream tasks is expensive due to their large parameter counts. Parameter-efficient …
External link:
http://arxiv.org/abs/2406.17740
Author:
Chowdhury, Somnath Basu Roy, Choromanski, Krzysztof, Sehanobish, Arijit, Dubey, Avinava, Chaturvedi, Snigdha
Machine unlearning is the process of efficiently removing the influence of a training data instance from a trained machine learning model without retraining it from scratch. A popular subclass of unlearning approaches is exact machine unlearning, which …
External link:
http://arxiv.org/abs/2406.16257
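The exact-unlearning entry above can be illustrated with the standard sharding idea (in the spirit of SISA-style training, not necessarily this paper's method): train an independent sub-model per data shard, and on a deletion request retrain only the affected shard. The toy "sub-model" below is just a shard mean; the class name and API are illustrative assumptions.

```python
import numpy as np

class ShardedMeanModel:
    """Toy exact-unlearning setup: the aggregate prediction averages
    per-shard means (stand-ins for independently trained sub-models)."""
    def __init__(self, data, n_shards=4):
        arr = np.asarray(list(data), dtype=float)
        self.shards = [list(s) for s in np.array_split(arr, n_shards)]
        self.means = [np.mean(s) if s else 0.0 for s in self.shards]

    def predict(self):
        # Aggregate over non-empty shards only.
        return float(np.mean([m for s, m in zip(self.shards, self.means) if s]))

    def unlearn(self, shard_idx, value):
        """Exactly remove one training instance: retrain (recompute the
        mean of) only the shard that contained it."""
        self.shards[shard_idx].remove(value)
        s = self.shards[shard_idx]
        self.means[shard_idx] = np.mean(s) if s else 0.0
```

After `unlearn`, the affected shard's model is identical to one trained from scratch on that shard minus the deleted point, while the other shards are untouched.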
Author:
Choromanski, Krzysztof, Sehanobish, Arijit, Chowdhury, Somnath Basu Roy, Lin, Han, Dubey, Avinava, Sarlos, Tamas, Chaturvedi, Snigdha
We present a new class of fast polylog-linear algorithms based on the theory of structured matrices (in particular, low displacement rank) for integrating tensor fields defined on weighted trees. Several applications of the resulting fast tree-field integrators …
External link:
http://arxiv.org/abs/2406.15881
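For the tree-field entry above, the quantity being integrated can be written down naively (an assumed setup, not the paper's polylog-linear algorithm): given a weighted tree with path distance d(i, j), a kernel K, and a node field x, compute y_i = Σ_j K(d(i, j)) · x_j. A brute-force O(n²) baseline:

```python
from collections import deque

def tree_distances(adj, root):
    """Single-source path distances in a weighted tree via BFS.
    adj maps each node to a list of (neighbor, edge_weight) pairs."""
    dist = {root: 0.0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v, w in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + w
                q.append(v)
    return dist

def integrate_tree_field(adj, x, kernel):
    """Naive O(n^2) tree-field integration: y[i] = sum_j kernel(d(i, j)) * x[j].
    The paper's contribution is fast algorithms for this transform; this
    brute-force version only defines the quantity being computed."""
    nodes = sorted(adj)
    y = {}
    for i in nodes:
        d = tree_distances(adj, i)
        y[i] = sum(kernel(d[j]) * x[j] for j in nodes)
    return y
```

On a 3-node path with unit edge weights, K(t) = 0.5^t, and x ≡ 1, each endpoint integrates to 1 + 0.5 + 0.25 = 1.75.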
Author:
Sachan, Mrinmaya, Dubey, Avinava, Hovy, Eduard H., Mitchell, Tom M., Roth, Dan, Xing, Eric P.
Published in:
Computational Linguistics, Vol. 45, Iss. 4, pp. 627-665 (2020)
To ensure readability, text is often written and presented with due formatting. These text formatting devices help the writer to effectively convey the narrative. At the same time, they help the readers pick up the structure of the discourse and comprehend …
External link:
https://doaj.org/article/dc8f38f4640e4e158191ebcbb5855a15
Author:
Varley, Jake, Singh, Sumeet, Jain, Deepali, Choromanski, Krzysztof, Zeng, Andy, Chowdhury, Somnath Basu Roy, Dubey, Avinava, Sindhwani, Vikas
We present an embodied AI system which receives open-ended natural language instructions from a human, and controls two arms to collaboratively accomplish potentially long-horizon tasks over a large workspace. Our system is modular: it deploys state …
External link:
http://arxiv.org/abs/2404.03570
Author:
Chowdhury, Somnath Basu Roy, Monath, Nicholas, Dubey, Avinava, Zaheer, Manzil, McCallum, Andrew, Ahmed, Amr, Chaturvedi, Snigdha
Extractive opinion summarization involves automatically producing a summary of text about an entity (e.g., a product's reviews) by extracting representative sentences that capture prevalent opinions in the review set. Typically, in online marketplace …
External link:
http://arxiv.org/abs/2401.08047
Author:
Leal, Isabel, Choromanski, Krzysztof, Jain, Deepali, Dubey, Avinava, Varley, Jake, Ryoo, Michael, Lu, Yao, Liu, Frederick, Sindhwani, Vikas, Vuong, Quan, Sarlos, Tamas, Oslund, Ken, Hausman, Karol, Rao, Kanishka
We present Self-Adaptive Robust Attention for Robotics Transformers (SARA-RT): a new paradigm for addressing the emerging challenge of scaling up Robotics Transformers (RT) for on-robot deployment. SARA-RT relies on the new method of fine-tuning proposed …
External link:
http://arxiv.org/abs/2312.01990