Showing 1 - 6 of 6
for search: '"Chauhan, Kushal"'
Concept bottleneck models (CBMs) are interpretable neural networks that first predict labels for human-interpretable concepts relevant to the prediction task, and then predict the final label based on the concept label predictions. We extend CBMs to …
External link:
http://arxiv.org/abs/2212.07430
Reliable outlier detection is critical for real-world deployment of deep learning models. Although extensively studied, likelihoods produced by deep generative models have been largely dismissed as being impractical for outlier detection. First, deep …
External link:
http://arxiv.org/abs/2208.13579
The options framework in Hierarchical Reinforcement Learning breaks down overall goals into a combination of options or simpler tasks and associated policies, allowing for abstraction in the action space. Ideally, these options can be reused across d…
External link:
http://arxiv.org/abs/2206.05750
Deep networks often make confident, yet incorrect, predictions when tested with outlier data that is far removed from their training distributions. Likelihoods computed by deep generative models (DGMs) are a candidate metric for outlier detection wi…
External link:
http://arxiv.org/abs/2108.08760
Author:
Chauhan, Kushal, Gupta, Abhirut
Technical support problems are often long and complex. They typically contain user descriptions of the problem, the setup, and steps for attempted resolution. Often they also contain various non-natural language text elements like outputs of commands …
External link:
http://arxiv.org/abs/2005.11055
Author:
Srivastava, Prakhar, Chauhan, Kushal, Aggarwal, Deepanshu, Shukla, Anupam, Dhar, Joydip, Jain, Vrashabh Prasad
Published in:
ACM International Conference Proceeding Series; 12/21/2018, p1-6, 6p