Showing 1 - 10 of 110
for search: '"MOHAN, ADITYA"'
Author:
Benjamins, Carolin, Cenikj, Gjorgjina, Nikolikj, Ana, Mohan, Aditya, Eftimov, Tome, Lindauer, Marius
Published in:
GECCO 2024
Dynamic Algorithm Configuration (DAC) addresses the challenge of dynamically setting hyperparameters of an algorithm for a diverse set of instances rather than focusing solely on individual tasks. Agents trained with Deep Reinforcement Learning (RL) …
External link:
http://arxiv.org/abs/2407.13513
Reinforcement Learning (RL), bolstered by the expressive capabilities of Deep Neural Networks (DNNs) for function approximation, has demonstrated considerable success in numerous applications. However, its practicality in addressing various real-world …
External link:
http://arxiv.org/abs/2306.16021
Author:
Tornede, Alexander, Deng, Difan, Eimer, Theresa, Giovanelli, Joseph, Mohan, Aditya, Ruhkopf, Tim, Segel, Sarah, Theodorakopoulos, Daphne, Tornede, Tanja, Wachsmuth, Henning, Lindauer, Marius
The fields of both Natural Language Processing (NLP) and Automated Machine Learning (AutoML) have achieved remarkable results over the past years. In NLP, especially Large Language Models (LLMs) have experienced a rapid series of breakthroughs very recently …
External link:
http://arxiv.org/abs/2306.08107
Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be …
External link:
http://arxiv.org/abs/2305.10964
Although Reinforcement Learning (RL) has shown to be capable of producing impressive results, its use is limited by the impact of its hyperparameters on performance. This often makes it difficult to achieve good results in practice. Automated RL (AutoRL) …
External link:
http://arxiv.org/abs/2304.02396
Automatically selecting the best performing algorithm for a given dataset or ranking multiple algorithms by their expected performance supports users in developing new machine learning applications. Most approaches for this problem rely on pre-computed …
External link:
http://arxiv.org/abs/2206.03130
Author:
Benjamins, Carolin, Eimer, Theresa, Schubert, Frederik, Mohan, Aditya, Döhler, Sebastian, Biedenkapp, André, Rosenhahn, Bodo, Hutter, Frank, Lindauer, Marius
While Reinforcement Learning (RL) has made great strides towards solving increasingly complicated problems, many algorithms are still brittle to even slight environmental changes. Contextual Reinforcement Learning (cRL) provides a framework to model …
External link:
http://arxiv.org/abs/2202.04500
Author:
Grewal, Jody (Author), Mohan, Aditya (Author), Pérez‐Cavazos, Gerardo (Author) gperezcavazos@ucsd.edu
Published in:
Journal of Accounting Research (John Wiley & Sons, Inc.). May2024, Vol. 62 Issue 2, p635-674. 40p.
Published in:
In Materials Today: Proceedings 2022 64 Part 3:1539-1542
Academic article
This result cannot be displayed to unauthenticated users.
Sign in to view this result.