Showing 1 - 10 of 1,938 for search: '"Chandar P"'
Author:
Thakkar, Megh, More, Yash, Fournier, Quentin, Riemer, Matthew, Chen, Pin-Yu, Zouaq, Amal, Das, Payel, Chandar, Sarath
There is a growing interest in training domain-expert LLMs that excel in specific technical fields compared to their general-purpose instruction-tuned counterparts. However, these expert models often experience a loss in their safety abilities …
External link:
http://arxiv.org/abs/2411.06824
The growth in prominence of large language models (LLMs) in everyday life can be largely attributed to their generative abilities, yet some of this is also owed to the risks and costs associated with their use. On one front is their tendency to …
External link:
http://arxiv.org/abs/2410.17477
Author:
Bouchoucha, Rached, Yahmed, Ahmed Haj, Patil, Darshan, Rajendran, Janarthanan, Nikanjam, Amin, Chandar, Sarath, Khomh, Foutse
Deep reinforcement learning (DRL) has shown success in diverse domains such as robotics, computer games, and recommendation systems. However, like any other software system, DRL-based software systems are susceptible to faults that pose unique challenges …
External link:
http://arxiv.org/abs/2410.04322
Author:
Nazarczuk, Michal, Catley-Chandar, Sibi, Tanay, Thomas, Shaw, Richard, Pérez-Pellitero, Eduardo, Timofte, Radu, Yan, Xing, Wang, Pan, Guo, Yali, Wu, Yongxin, Cai, Youcheng, Yang, Yanan, Li, Junting, Zhou, Yanghong, Mok, P. Y., He, Zongqi, Xiao, Zhe, Chan, Kin-Chung, Goshu, Hana Lebeta, Yang, Cuixin, Dong, Rongkang, Xiao, Jun, Lam, Kin-Man, Hao, Jiayao, Gao, Qiong, Zu, Yanyan, Zhang, Junpei, Jiao, Licheng, Liu, Xu, Purohit, Kuldeep
This paper reviews the challenge on Sparse Neural Rendering that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024. This manuscript focuses on the competition set-up, the proposed methods and their respective …
External link:
http://arxiv.org/abs/2409.15045
Author:
Nazarczuk, Michal, Tanay, Thomas, Catley-Chandar, Sibi, Shaw, Richard, Timofte, Radu, Pérez-Pellitero, Eduardo
Recent developments in differentiable and neural rendering have made impressive breakthroughs in a variety of 2D and 3D tasks, e.g. novel view synthesis and 3D reconstruction. Typically, differentiable rendering relies on a dense viewpoint coverage of the …
External link:
http://arxiv.org/abs/2409.15041
Despite their widespread adoption, large language models (LLMs) remain prohibitive to use under resource constraints, with their ever-growing sizes only increasing the barrier to use. One noted issue is the high latency associated with auto-regressive …
External link:
http://arxiv.org/abs/2408.08470
3D sensing is a fundamental task for autonomous vehicles. Its deployment often relies on aligned RGB cameras and LiDAR. Despite meticulous synchronization and calibration, systematic misalignment persists in the LiDAR-projected depth map. This is due to the …
External link:
http://arxiv.org/abs/2407.19154
The increasing scale of Transformer models has led to an increase in their pre-training computational requirements. While quantization has proven to be effective after pre-training and during fine-tuning, applying quantization in Transformers during …
External link:
http://arxiv.org/abs/2407.11722
The widespread use of large language models has brought up essential questions about the potential biases these models might learn. This led to the development of several metrics aimed at evaluating and mitigating these biases. In this paper, we first …
External link:
http://arxiv.org/abs/2406.05918
Author:
Thakkar, Megh, Fournier, Quentin, Riemer, Matthew D, Chen, Pin-Yu, Zouaq, Amal, Das, Payel, Chandar, Sarath
Large language models are first pre-trained on trillions of tokens and then instruction-tuned or aligned to specific preferences. While pre-training remains out of reach for most researchers due to the compute required, fine-tuning has become affordable …
External link:
http://arxiv.org/abs/2406.04879