Showing 1 - 10 of 466 for search: '"Shroff, Gautam"'
Large Language Models (LLMs) excel in diverse applications including generation of code snippets, but often struggle with generating code for complex Machine Learning (ML) tasks. Although existing LLM single-agent based systems give varying performance …
External link:
http://arxiv.org/abs/2411.07464
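As a loose, hypothetical sketch of what a single-agent generate-execute-refine loop for ML code generation can look like (this is not the paper's method, and `generate()` is a placeholder, not a real API):

```python
import subprocess
import tempfile

def generate(prompt: str) -> str:
    """Hypothetical LLM call -- stand-in for any chat-completion client."""
    raise NotImplementedError

def solve_ml_task(task_description: str, max_rounds: int = 3) -> str:
    """Single-agent loop: ask the model for a script, run it, and feed
    any traceback back into the next prompt for refinement."""
    code, feedback = "", ""
    for _ in range(max_rounds):
        prompt = "Write a complete, runnable Python script for this ML task.\n"
        prompt += f"Task: {task_description}\n"
        if feedback:
            prompt += "The previous attempt failed with:\n" + feedback
        code = generate(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(["python", path], capture_output=True, text=True)
        if result.returncode == 0:
            return code                   # the script ran end to end
        feedback = result.stderr[-2000:]  # keep only the tail of the traceback
    return code                           # best effort after max_rounds
```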
Robotic Process Automation (RPA) systems face challenges in handling complex processes and diverse screen layouts that require advanced human-like decision-making capabilities. These systems typically rely on pixel-level encoding through drag-and-drop …
External link:
http://arxiv.org/abs/2405.12842
Several tools have recently been proposed for assisting researchers during various stages of the research life-cycle. However, these primarily concentrate on tasks such as retrieving and recommending relevant literature, reviewing and critiquing the …
External link:
http://arxiv.org/abs/2403.04382
Author:
Nabar, Omkar, Shroff, Gautam
Price movements in financial markets are well known to be very noisy. As a result, even if there are, on occasion, exploitable patterns that could be picked up by machine-learning algorithms, these are obscured by feature and label noise, rendering them …
External link:
http://arxiv.org/abs/2310.11815
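To illustrate the point about feature and label noise, here is a toy experiment (unrelated to the paper's actual data or models): a weak signal is drowned in feature noise, a quarter of the labels are flipped, and a standard classifier ends up only modestly above chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# One weakly informative feature plus irrelevant features (feature noise).
signal = rng.normal(size=n)
X = np.column_stack([signal + rng.normal(scale=3.0, size=n),  # signal drowned in noise
                     rng.normal(size=(n, 9))])                # pure-noise features

# True labels depend on the signal, then a fraction are flipped (label noise).
y = (signal > 0).astype(int)
flip = rng.random(n) < 0.25
y[flip] = 1 - y[flip]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")  # typically only slightly above 0.5
```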
Author:
Arora, Aseem, Bhaisaheb, Shabbirhussain, Nigam, Harshit, Patwardhan, Manasi, Vig, Lovekesh, Shroff, Gautam
Cross-domain and cross-compositional generalization of Text-to-SQL semantic parsing is a challenging task. Existing Large Language Model (LLM) based solutions rely on inference-time retrieval of few-shot exemplars from the training set to synthesize …
External link:
http://arxiv.org/abs/2308.02582
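A rough sketch of inference-time exemplar retrieval for few-shot Text-to-SQL (not the paper's actual pipeline): rank training questions by similarity to the test question and prepend the top-k (question, SQL) pairs to the prompt. The TF-IDF similarity and the tiny training pool below are illustrative stand-ins.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative pool of (question, SQL) pairs from a "training set".
train_pool = [
    ("How many singers are there?", "SELECT count(*) FROM singer"),
    ("List the names of all students.", "SELECT name FROM student"),
    ("What is the average age of dogs?", "SELECT avg(age) FROM dogs"),
]

vectorizer = TfidfVectorizer().fit([q for q, _ in train_pool])
train_vecs = vectorizer.transform([q for q, _ in train_pool])

def build_prompt(test_question: str, k: int = 2) -> str:
    """Retrieve the k most similar training questions at inference time
    and format them as few-shot exemplars ahead of the test question."""
    sims = cosine_similarity(vectorizer.transform([test_question]), train_vecs)[0]
    exemplars = [train_pool[i] for i in np.argsort(sims)[::-1][:k]]
    shots = "\n\n".join(f"Question: {q}\nSQL: {sql}" for q, sql in exemplars)
    return f"{shots}\n\nQuestion: {test_question}\nSQL:"

print(build_prompt("How many students are there?"))
```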
We model short-duration (e.g. day) trading in financial markets as a sequential decision-making problem under uncertainty, with the added complication of continual concept drift. We therefore employ meta reinforcement learning via the RL2 algorithm …
External link:
http://arxiv.org/abs/2302.08996
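For readers unfamiliar with RL2: the core idea is a recurrent policy whose per-step input includes the previous action, reward, and termination flag, and whose hidden state is carried across episode boundaries within a trial, so adaptation to the current regime happens inside the recurrence. The sketch below shows only that input/state structure (in PyTorch, with invented dimensions), not the paper's trading setup.

```python
import torch
import torch.nn as nn

class RL2Policy(nn.Module):
    """Minimal RL^2-style recurrent policy: GRU input at each step is
    [observation, one-hot previous action, previous reward, done flag];
    the hidden state is NOT reset at episode ends within a trial."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.n_actions = n_actions
        self.gru = nn.GRU(obs_dim + n_actions + 2, hidden, batch_first=True)
        self.policy_head = nn.Linear(hidden, n_actions)
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, obs, prev_action, prev_reward, done, h=None):
        # obs: (B, T, obs_dim); prev_action: (B, T) int; prev_reward, done: (B, T)
        a_onehot = torch.nn.functional.one_hot(prev_action, self.n_actions).float()
        x = torch.cat([obs, a_onehot, prev_reward.unsqueeze(-1), done.unsqueeze(-1)], dim=-1)
        out, h = self.gru(x, h)  # pass h back in on the next call to keep adapting
        return self.policy_head(out), self.value_head(out), h

# Shape check on dummy data (3 features per step; buy/hold/sell actions).
policy = RL2Policy(obs_dim=3, n_actions=3)
logits, value, h = policy(torch.randn(1, 10, 3), torch.zeros(1, 10, dtype=torch.long),
                          torch.zeros(1, 10), torch.zeros(1, 10))
print(logits.shape, value.shape)  # torch.Size([1, 10, 3]) torch.Size([1, 10, 1])
```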
Deep neural networks (DNN) are prone to miscalibrated predictions, often exhibiting a mismatch between the predicted output and the associated confidence scores. Contemporary model calibration techniques mitigate the problem of overconfident predictions …
External link:
http://arxiv.org/abs/2212.10005
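Miscalibration of the kind described above is commonly quantified with the Expected Calibration Error (ECE): bin predictions by confidence and compare each bin's accuracy with its average confidence. A small self-contained sketch on synthetic, deliberately overconfident predictions (illustrative only, not the paper's calibration technique):

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """ECE: average |accuracy - mean confidence| over confidence bins,
    weighted by bin size. A well-calibrated model has a gap near zero."""
    confidences, predictions, labels = map(np.asarray, (confidences, predictions, labels))
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = (predictions[mask] == labels[mask]).mean()
            conf = confidences[mask].mean()
            ece += mask.mean() * abs(acc - conf)
    return ece

# Overconfident toy model: ~70% accurate but reports 90-100% confidence.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
preds = np.where(rng.random(1000) < 0.7, labels, 1 - labels)
confs = rng.uniform(0.9, 1.0, 1000)
print(f"ECE: {expected_calibration_error(confs, preds, labels):.3f}")  # large gap -> miscalibrated
```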
Author:
Shah, Vedant, Agrawal, Aditya, Vig, Lovekesh, Srinivasan, Ashwin, Shroff, Gautam, Verlekar, Tanmay
We are interested in neurosymbolic systems consisting of a high-level symbolic layer for explainable prediction in terms of human-intelligible concepts; and a low-level neural layer for extracting symbols required to generate the symbolic explanation …
External link:
http://arxiv.org/abs/2211.16047
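A minimal sketch of the two-layer structure described above, with invented concept and rule names (illustrative only, not the paper's architecture): a neural layer maps raw inputs to concept probabilities, and a symbolic rule layer turns the active concepts into a prediction plus a human-readable explanation.

```python
import torch
import torch.nn as nn

# Human-intelligible concepts the neural layer is asked to extract (made up here).
CONCEPTS = ["has_wings", "has_beak", "has_fur"]

class ConceptExtractor(nn.Module):
    """Low-level neural layer: raw features -> concept probabilities in [0, 1]."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, len(CONCEPTS)), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

# High-level symbolic layer: if-then rules over concepts give both a
# prediction and an explanation in terms of the concepts that fired.
RULES = [
    ("bird",   ["has_wings", "has_beak"]),
    ("mammal", ["has_fur"]),
]

def symbolic_predict(concept_probs, threshold=0.5):
    active = {c for c, p in zip(CONCEPTS, concept_probs.tolist()) if p > threshold}
    for label, required in RULES:
        if set(required) <= active:
            return label, f"{label} because {' and '.join(required)}"
    return "unknown", "no rule matched"

extractor = ConceptExtractor(in_dim=8)
probs = extractor(torch.randn(8))
print(symbolic_predict(probs))
```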