Showing 1 - 10 of 5,295 for search: '"Subbarao, P"'
Previous work has attempted to boost Large Language Model (LLM) performance on planning and scheduling tasks through a variety of prompt engineering techniques. While these methods can work within the distributions tested, they are neither robust nor…
External link:
http://arxiv.org/abs/2411.14484
The ability to plan a course of action that achieves a desired state of affairs has long been considered a core competence of intelligent agents and has been an integral part of AI research since its inception. With the advent of large language models…
External link:
http://arxiv.org/abs/2410.02162
The ability to plan a course of action that achieves a desired state of affairs has long been considered a core competence of intelligent agents and has been an integral part of AI research since its inception. With the advent of large language models…
External link:
http://arxiv.org/abs/2409.13373
Author:
Saldyt, Lucas, Kambhampati, Subbarao
Important tasks such as reasoning and planning are fundamentally algorithmic, meaning that solving them robustly requires acquiring true reasoning or planning algorithms, rather than shortcuts. Large Language Models lack true algorithmic ability primarily…
External link:
http://arxiv.org/abs/2407.04899
Author:
Gundawar, Atharva, Verma, Mudit, Guan, Lin, Valmeekam, Karthik, Bhambri, Siddhant, Kambhampati, Subbarao
As the applicability of Large Language Models (LLMs) extends beyond traditional text processing tasks, there is a burgeoning interest in their potential to excel in planning and reasoning assignments, realms traditionally reserved for System 2 cognition…
External link:
http://arxiv.org/abs/2405.20625
Author:
Bhambri, Siddhant, Bhattacharjee, Amrita, Kalwar, Durgesh, Guan, Lin, Liu, Huan, Kambhampati, Subbarao
Reinforcement Learning (RL) suffers from sample inefficiency in sparse reward domains, and the problem is further pronounced in the case of stochastic transitions. To improve sample efficiency, reward shaping is a well-studied approach to introduce…
External link:
http://arxiv.org/abs/2405.15194
The reasoning abilities of Large Language Models (LLMs) remain a topic of debate. Some methods, such as ReAct-based prompting, have gained popularity for claiming to enhance the sequential decision-making abilities of agentic LLMs. However, it is unclear…
External link:
http://arxiv.org/abs/2405.13966
From its inception, AI has had a rather ambivalent relationship with humans, swinging between their augmentation and replacement. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to…
External link:
http://arxiv.org/abs/2405.15804
Large language model (LLM) performance on reasoning problems typically does not generalize out of distribution. Previous work has claimed that this can be mitigated with chain-of-thought prompting, a method of demonstrating solution procedures, with the…
External link:
http://arxiv.org/abs/2405.04776
Author:
Patel, Jinaykumar, Subbarao, Kamesh
Published in:
2023 AAS/AIAA Astrodynamics Specialist Conference, Big Sky, MT
This paper investigates the application of reachability analysis to the re-entry problem faced by vehicles entering Earth's atmosphere. The study delves into the time evolution of reachable sets for the system, particularly when subject to nonlinear…
External link:
http://arxiv.org/abs/2403.15294