Showing 1 - 10 of 93 results for search: '"Gaurav, Ashish"'
Author:
Liu, Guiliang, Xu, Sheng, Liu, Shicheng, Gaurav, Ashish, Subramanian, Sriram Ganapathi, Poupart, Pascal
Inverse Constrained Reinforcement Learning (ICRL) is the task of inferring the implicit constraints followed by expert agents from their demonstration data. As an emerging research topic, ICRL has received considerable attention in recent years.
External link:
http://arxiv.org/abs/2409.07569
When deploying Reinforcement Learning (RL) agents into a physical system, we must ensure that these agents are well aware of the underlying constraints. In many real-world problems, however, the constraints are often hard to specify mathematically and…
External link:
http://arxiv.org/abs/2206.09670
Inverse reinforcement learning (IRL) methods assume that the expert data is generated by an agent optimizing some reward function. However, in many settings, the agent may optimize a reward function subject to some constraints, where the constraints…
External link:
http://arxiv.org/abs/2206.01311
Published in:
In Materials Today Chemistry, July 2024, vol. 39
Author:
Gaurav, Ashish, Das, Ankit, Paul, Ananta, Jain, Amrita, Boruah, Buddha Deka, Abdi-Jalebi, Mojtaba
Published in:
In Journal of Energy Storage, 30 May 2024, vol. 88
Carbon quantum dots (CQDs) are nanoscale sp2-hybridized carbon particles. In this work, we present a simple one-step synthesis of CQDs via an electrochemical shredding method, along with a technique to control their size during growth. A graphite…
External link:
http://arxiv.org/abs/2011.03217
Author:
Vernekar, Sachin, Gaurav, Ashish, Abdelzad, Vahdat, Denouden, Taylor, Salay, Rick, Czarnecki, Krzysztof
By design, discriminatively trained neural network classifiers produce reliable predictions only for in-distribution samples. For their real-world deployment, detecting out-of-distribution (OOD) samples is essential. Assuming OOD to be outside the…
External link:
http://arxiv.org/abs/1910.04241
Author:
Ilievski, Marko, Sedwards, Sean, Gaurav, Ashish, Balakrishnan, Aravind, Sarkar, Atrisha, Lee, Jaeyoung, Bouchard, Frédéric, De Iaco, Ryan, Czarnecki, Krzysztof
We explore the complex design space of behaviour planning for autonomous driving. Design choices that successfully address one aspect of behaviour planning can critically constrain others. To aid the design process, in this work we decompose the design…
External link:
http://arxiv.org/abs/1908.07931
Author:
Vernekar, Sachin, Gaurav, Ashish, Denouden, Taylor, Phan, Buu, Abdelzad, Vahdat, Salay, Rick, Czarnecki, Krzysztof
Discriminatively trained neural classifiers can be trusted only when the input data comes from the training distribution (in-distribution). Therefore, detecting out-of-distribution (OOD) samples is very important to avoid classification errors.
External link:
http://arxiv.org/abs/1904.12220
Published in:
International Conference on Quantitative Evaluation of Systems (QEST 2019)
Machine learning can provide efficient solutions to the complex problems encountered in autonomous driving, but ensuring their safety remains a challenge. A number of authors have attempted to address this issue, but there are few publicly available…
External link:
http://arxiv.org/abs/1902.04118