Showing 1 - 10 of 548
for search: '"Panov, A. I."'
In this study, we address the issue of enabling an artificial intelligence agent to execute complex language instructions within virtual environments. In our framework, we assume that these instructions involve intricate linguistic structures and mul…
External link:
http://arxiv.org/abs/2407.09287
Object-centric architectures usually apply a differentiable module to the entire feature map to decompose it into sets of entity representations called slots. Some of these methods structurally resemble clustering algorithms, where the cluster's cent…
External link:
http://arxiv.org/abs/2311.04640
Author:
Tsypin, Artem, Ugadiarov, Leonid, Khrabrov, Kuzma, Telepov, Alexander, Rumiantsev, Egor, Skrynnik, Alexey, Panov, Aleksandr I., Vetrov, Dmitry, Tutubalina, Elena, Kadurin, Artur
Molecular conformation optimization is crucial to computer-aided drug discovery and materials design. Traditional energy minimization techniques rely on iterative optimization methods that use molecular forces calculated by a physical simulator (orac…
External link:
http://arxiv.org/abs/2311.06295
Author:
Ugadiarov, Leonid, Panov, Aleksandr I.
There have recently been significant advances in the problem of unsupervised object-centric representation learning and its application to downstream tasks. The latest works support the argument that employing disentangled object representations in i…
External link:
http://arxiv.org/abs/2310.17178
This paper presents a novel approach to address the challenge of online temporal memory learning for decision-making under uncertainty in non-stationary, partially observable environments. The proposed algorithm, Distributed Hebbian Temporal Memory (…
External link:
http://arxiv.org/abs/2310.13391
Recently, the use of transformers in offline reinforcement learning has become a rapidly developing area. This is due to their ability to treat the agent's trajectory in the environment as a sequence, thereby reducing the policy learning problem to s…
External link:
http://arxiv.org/abs/2306.09459
Author:
Latyshev, Artem, Panov, Aleksandr I.
The reinforcement learning research area contains a wide range of methods for solving the problems of intelligent agent control. Despite the progress that has been made, the task of creating a highly autonomous agent is still a significant challenge.
External link:
http://arxiv.org/abs/2301.10067
Author:
Yudin, Dmitry, Solomentsev, Yaroslav, Musaev, Ruslan, Staroverov, Aleksei, Panov, Aleksandr I.
We present a novel dataset named HPointLoc, specially designed for exploring the capabilities of visual place recognition in indoor environments and loop detection in simultaneous localization and mapping. The loop detection sub-task is especially rele…
External link:
http://arxiv.org/abs/2212.14649
We introduce POGEMA (https://github.com/AIRI-Institute/pogema), a sandbox for challenging partially observable multi-agent pathfinding (PO-MAPF) problems. This is a grid-based environment that was specifically designed to be a flexible, tunable and s…
External link:
http://arxiv.org/abs/2206.10944
Author:
Zholus, Artem, Skrynnik, Alexey, Mohanty, Shrestha, Volovikova, Zoya, Kiseleva, Julia, Szlam, Artur, Coté, Marc-Alexandre, Panov, Aleksandr I.
We present the IGLU Gridworld: a reinforcement learning environment for building and evaluating language-conditioned embodied agents in a scalable way. The environment features visual agent embodiment, interactive learning through collaboration, lang…
External link:
http://arxiv.org/abs/2206.00142