Showing 1 - 10 of 54 for search: '"Gervasio, Melinda"'
Existing conformal prediction algorithms estimate prediction intervals at target confidence levels to characterize the performance of a regression model on new test samples. However, considering an autonomous system consisting of multiple modules, pr…
External link: http://arxiv.org/abs/2309.12510
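Note: the snippet above describes conformal prediction intervals for regression. For orientation only, here is a minimal split conformal sketch in Python, not the paper's multi-module method; it assumes a held-out calibration set and a hypothetical fitted regressor `model`.

```python
import numpy as np

def split_conformal_interval(abs_residuals_cal, y_pred_test, alpha=0.1):
    """Build (1 - alpha) prediction intervals from held-out calibration residuals."""
    scores = np.sort(np.asarray(abs_residuals_cal, dtype=float))
    n = len(scores)
    # Finite-sample quantile index used by split conformal prediction.
    k = min(int(np.ceil((n + 1) * (1 - alpha))) - 1, n - 1)
    q = scores[k]
    return y_pred_test - q, y_pred_test + q

# Hypothetical usage:
# abs_residuals_cal = np.abs(y_cal - model.predict(X_cal))
# lo, hi = split_conformal_interval(abs_residuals_cal, model.predict(X_test), alpha=0.1)
```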
Author: Sequeira, Pedro, Gervasio, Melinda
In recent years, advances in deep learning have resulted in a plethora of successes in the use of reinforcement learning (RL) to solve complex sequential decision tasks with high-dimensional inputs. However, existing systems lack the necessary mechan…
External link: http://arxiv.org/abs/2307.08933
In recent years, advances in deep learning have resulted in a plethora of successes in the use of reinforcement learning (RL) to solve complex sequential decision tasks with high-dimensional inputs. However, existing systems lack the necessary mechan…
External link: http://arxiv.org/abs/2211.06376
Recent years have seen significant advances in explainable AI as the need to understand deep learning models has gained importance with the increased emphasis on trust and ethics in AI. Comprehensible models for sequential decision tasks are a partic…
External link: http://arxiv.org/abs/2208.08552
We present a novel generative method for producing unseen and plausible counterfactual examples for reinforcement learning (RL) agents based upon outcome variables that characterize agent behavior. Our approach uses a variational autoencoder to train…
External link: http://arxiv.org/abs/2207.07710
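Note: the entry above (arXiv:2207.07710) mentions training a variational autoencoder to generate counterfactual examples for RL agents. The sketch below illustrates only the generic encode-perturb-decode idea, not the paper's specific model; it assumes PyTorch and flat feature vectors describing agent behavior, and all names are hypothetical.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE over flat feature vectors (hypothetical stand-in for agent/outcome features)."""
    def __init__(self, in_dim, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z):
        return self.dec(z)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decode(z), mu, logvar

def counterfactual(vae, x, noise_scale=0.5):
    """Encode an observed example, perturb its latent code, and decode a nearby
    'what-if' example that stays close to the learned data manifold."""
    with torch.no_grad():
        mu, _ = vae.encode(x)
        z_cf = mu + noise_scale * torch.randn_like(mu)
        return vae.decode(z_cf)
```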
Published in: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 8958-8967
Existing calibration algorithms address the problem of covariate shift via unsupervised domain adaptation. However, these methods suffer from the following limitations: 1) they require unlabeled data from the target domain, which may not be available…
External link: http://arxiv.org/abs/2104.00742
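Note: the entry above concerns confidence calibration under covariate shift. As background only, here is a sketch of plain temperature scaling on a labeled source validation set, the standard baseline such methods extend, and not the paper's shift-robust approach; `logits_val` and `labels_val` are hypothetical inputs.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def _softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits_val, labels_val):
    """Fit a single temperature T by minimizing NLL on held-out labeled logits."""
    logits_val = np.asarray(logits_val, dtype=float)
    labels_val = np.asarray(labels_val, dtype=int)

    def nll(t):
        p = _softmax(logits_val / t)
        return -np.mean(np.log(p[np.arange(len(labels_val)), labels_val] + 1e-12))

    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x

# Calibrated probabilities for new logits: _softmax(logits_test / T)
```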
Author: Sequeira, Pedro, Gervasio, Melinda
We propose an explainable reinforcement learning (XRL) framework that analyzes an agent's history of interaction with the environment to extract interestingness elements that help explain its behavior. The framework relies on data readily available f…
External link: http://arxiv.org/abs/1912.09007
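Note: the entry above (arXiv:1912.09007) describes extracting "interestingness elements" from an agent's interaction data. The sketch below is a loose illustration of one such analysis under stated assumptions (a tabular Q-function and an entropy-based notion of certainty); the paper's actual definitions may differ.

```python
import numpy as np

def certainty_elements(q_values, top_k=3):
    """Flag the states where a (hypothetical) tabular agent is most/least certain,
    using entropy of the softmax over each state's action values."""
    q = np.asarray(q_values, dtype=float)             # shape (n_states, n_actions)
    p = np.exp(q - q.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)    # high entropy = low certainty
    return {
        "most_uncertain_states": np.argsort(entropy)[::-1][:top_k],
        "most_certain_states": np.argsort(entropy)[:top_k],
    }
```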
Author: Sequeira, Pedro, Gervasio, Melinda
Published in: Artificial Intelligence, vol. 288, November 2020
Academic article (full record requires login)
Author: Myers, Karen, Gervasio, Melinda
Published in: 2016 IEEE 16th International Conference on Advanced Learning Technologies (ICALT), 2016, pp. 212-216