Showing 1 - 4 of 4
for search: '"Butala, Yash Parag"'
Author:
Kapoor, Raghav, Butala, Yash Parag, Russak, Melisa, Koh, Jing Yu, Kamble, Kiran, Alshikh, Waseem, Salakhutdinov, Ruslan
For decades, human-computer interaction has fundamentally been manual. Even today, almost all productive work done on the computer necessitates human input at every step. Autonomous virtual agents represent an exciting step in automating many of these …
External link:
http://arxiv.org/abs/2402.17553
Author:
Nandy, Abhilash, Kapadnis, Manav Nitin, Patnaik, Sohan, Butala, Yash Parag, Goyal, Pawan, Ganguly, Niloy
In this paper, we propose $FastDoc$ (Fast Continual Pre-training Technique using Document Level Metadata and Taxonomy), a novel, compute-efficient framework that utilizes Document metadata and Domain-Specific Taxonomy as supervision signals to continually …
External link:
http://arxiv.org/abs/2306.06190
Author:
Nandy, Abhilash, Kapadnis, Manav Nitin, Patnaik, Sohan, Butala, Yash Parag, Goyal, Pawan, Ganguly, Niloy
Pre-training Transformers has shown promising results on open-domain and domain-specific downstream tasks. However, state-of-the-art Transformers require an unreasonably large amount of pre-training data and compute. In this paper, we propose $FPDM$ …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a882c5a74ffdb7fc5da1addc53965f17
The multi-volume set of LNCS books with volume numbers 15059 up to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29–October 4, 2024. The 2387 papers …