Showing 1 - 10 of 3,844
for search: '"Chandar P"'
Academic article
This result cannot be displayed to unauthenticated users; sign in to view it.
The growth in prominence of large language models (LLMs) in everyday life can be largely attributed to their generative abilities, yet some of this is also owed to the risks and costs associated with their use. On one front is their tendency to …
External link:
http://arxiv.org/abs/2410.17477
Author:
Bouchoucha, Rached, Yahmed, Ahmed Haj, Patil, Darshan, Rajendran, Janarthanan, Nikanjam, Amin, Chandar, Sarath, Khomh, Foutse
Deep reinforcement learning (DRL) has shown success in diverse domains such as robotics, computer games, and recommendation systems. However, like any other software system, DRL-based software systems are susceptible to faults that pose unique challenges…
External link:
http://arxiv.org/abs/2410.04322
Author:
Nazarczuk, Michal, Catley-Chandar, Sibi, Tanay, Thomas, Shaw, Richard, Pérez-Pellitero, Eduardo, Timofte, Radu, Yan, Xing, Wang, Pan, Guo, Yali, Wu, Yongxin, Cai, Youcheng, Yang, Yanan, Li, Junting, Zhou, Yanghong, Mok, P. Y., He, Zongqi, Xiao, Zhe, Chan, Kin-Chung, Goshu, Hana Lebeta, Yang, Cuixin, Dong, Rongkang, Xiao, Jun, Lam, Kin-Man, Hao, Jiayao, Gao, Qiong, Zu, Yanyan, Zhang, Junpei, Jiao, Licheng, Liu, Xu, Purohit, Kuldeep
This paper reviews the challenge on Sparse Neural Rendering that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024. This manuscript focuses on the competition set-up, the proposed methods, and their respective results…
External link:
http://arxiv.org/abs/2409.15045
Author:
Nazarczuk, Michal, Tanay, Thomas, Catley-Chandar, Sibi, Shaw, Richard, Timofte, Radu, Pérez-Pellitero, Eduardo
Recent developments in differentiable and neural rendering have made impressive breakthroughs in a variety of 2D and 3D tasks, e.g. novel view synthesis and 3D reconstruction. Typically, differentiable rendering relies on a dense viewpoint coverage of the scene…
External link:
http://arxiv.org/abs/2409.15041
Despite their widespread adoption, large language models (LLMs) remain prohibitively expensive to use under resource constraints, with their ever-growing sizes only raising the barrier to use. One noted issue is the high latency associated with auto-regressive…
External link:
http://arxiv.org/abs/2408.08470
3D sensing is a fundamental task for autonomous vehicles. Its deployment often relies on aligned RGB cameras and LiDAR. Despite meticulous synchronization and calibration, systematic misalignment persists in LiDAR-projected depth maps. This is due to …
External link:
http://arxiv.org/abs/2407.19154
The increasing scale of Transformer models has driven a corresponding increase in their pre-training computational requirements. While quantization has proven to be effective after pre-training and during fine-tuning, applying quantization in Transformers during pre-training…
External link:
http://arxiv.org/abs/2407.11722
The widespread use of large language models has raised essential questions about the potential biases these models might learn. This has led to the development of several metrics aimed at evaluating and mitigating these biases. In this paper, we first…
External link:
http://arxiv.org/abs/2406.05918
Author:
Thakkar, Megh, Fournier, Quentin, Riemer, Matthew D, Chen, Pin-Yu, Zouaq, Amal, Das, Payel, Chandar, Sarath
Large language models are first pre-trained on trillions of tokens and then instruction-tuned or aligned to specific preferences. While pre-training remains out of reach for most researchers due to the compute required, fine-tuning has become affordable…
External link:
http://arxiv.org/abs/2406.04879