Showing 1 - 10 of 40 for search: '"Caprio, Michele"'
An open question in \emph{Imprecise Probabilistic Machine Learning} is how to empirically derive a credal region (i.e., a closed and convex family of probabilities on the output space) from the available data, without any prior knowledge or assumption…
External link:
http://arxiv.org/abs/2411.04852
Author:
Caprio, Michele
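For reference, a standard way to write down a credal region of the kind described above (generic notation, not necessarily the paper's): a credal region on an output space $\mathcal{Y}$ is a closed, convex subset $\mathcal{C}$ of the set $\Delta(\mathcal{Y})$ of probability measures on $\mathcal{Y}$, often summarized through its lower and upper envelopes
$$\underline{P}(A) = \inf_{P \in \mathcal{C}} P(A), \qquad \overline{P}(A) = \sup_{P \in \mathcal{C}} P(A), \qquad A \subseteq \mathcal{Y} \text{ measurable}.$$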
We provide a version for lower probabilities of Monge's and Kantorovich's optimal transport problems. We show that, when the lower probabilities are the lower envelopes of $\epsilon$-contaminated sets, then our version of Monge's, and a restricted version…
External link:
http://arxiv.org/abs/2410.03267
Author:
Caprio, Michele
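As background on the $\epsilon$-contamination model mentioned in this entry (the standard definition, stated here in generic notation): given a baseline probability measure $P_0$ on a space $\Omega$ and $\epsilon \in (0,1)$, the $\epsilon$-contaminated set is
$$\mathcal{P}_\epsilon = \{(1-\epsilon)\,P_0 + \epsilon\,Q \;:\; Q \text{ a probability measure on } \Omega\},$$
and its lower envelope is $\underline{P}(A) = (1-\epsilon)\,P_0(A)$ for every event $A \neq \Omega$, with $\underline{P}(\Omega) = 1$.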
We introduce the concept of an imprecise Markov semigroup $\mathbf{Q}$. It is a tool that allows one to represent ambiguity around both the initial and the transition probabilities of a Markov process via a compact collection of plausible Markov semigroups…
External link:
http://arxiv.org/abs/2405.00081
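For context on the precise notion being generalized (textbook material, not taken from the paper): a Markov semigroup is a family of transition operators $(T_t)_{t \ge 0}$ with $T_0 = \mathrm{Id}$ and
$$T_{t+s} = T_t \, T_s \quad \text{for all } s, t \ge 0;$$
as the abstract indicates, the imprecise object $\mathbf{Q}$ replaces a single such family with a compact collection of plausible ones, so that ambiguity in the initial and transition probabilities is carried by the whole collection.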
Statistical learning theory is the foundation of machine learning, providing theoretical bounds for the risk of models learned from a (single) training set, assumed to issue from an unknown probability distribution. In actual deployment, however, the…
External link:
http://arxiv.org/abs/2402.00957
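A classical instance of the single-distribution risk bounds referred to above (a standard finite-class result, included only for orientation): for a loss taking values in $[0,1]$, a finite hypothesis class $\mathcal{H}$, and an i.i.d. sample of size $n$ from the unknown distribution, with probability at least $1-\delta$,
$$R(h) \;\le\; \widehat{R}_n(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2n}} \quad \text{for all } h \in \mathcal{H},$$
where $R$ is the true risk and $\widehat{R}_n$ the empirical risk on the training set.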
In the past couple of years, various approaches to representing and quantifying different types of predictive uncertainty in machine learning, notably in the setting of classification, have been proposed on the basis of second-order probability distributions…
External link:
http://arxiv.org/abs/2312.00995
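To make "second-order probability distribution" concrete (a textbook example, not a claim about this paper's construction): in $K$-class classification, a second-order distribution is a distribution over first-order class probabilities, e.g. a Dirichlet on the probability simplex,
$$\theta = (\theta_1, \dots, \theta_K) \sim \mathrm{Dir}(\alpha_1, \dots, \alpha_K), \qquad \theta \in \Delta^{K-1},$$
so that each draw $\theta$ is itself a categorical distribution over the $K$ labels.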
Like generic multi-task learning, continual learning has the nature of multi-objective optimization, and therefore faces a trade-off between the performance of different tasks. That is, to optimize for the current task distribution, it may need to compromise…
External link:
http://arxiv.org/abs/2310.02995
Author:
Dutta, Souradeep, Caprio, Michele, Lin, Vivian, Cleaveland, Matthew, Jang, Kuk Jin, Ruchkin, Ivan, Sokolsky, Oleg, Lee, Insup
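The multi-objective trade-off described in this entry is commonly expressed through linear scalarization (a generic formulation, not necessarily the approach taken in the paper): given per-task losses $L_1, \dots, L_T$ and preference weights $w_t \ge 0$ with $\sum_{t} w_t = 1$, one solves
$$\min_{\theta} \; \sum_{t=1}^{T} w_t \, L_t(\theta),$$
and different choices of $w$ correspond to different points on the trade-off between the current and previous task distributions.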
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems. Verification approaches centered around reachability analysis fail to scale, and purely statistical approaches are constrained…
External link:
http://arxiv.org/abs/2308.14815
In their seminal 1990 paper, Wasserman and Kadane establish an upper bound for the Bayes' posterior probability of a measurable set $A$, when the prior lies in a class of probability measures $\mathcal{P}$ and the likelihood is precise. They also give…
External link:
http://arxiv.org/abs/2307.06831
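The posterior quantity whose upper bound is discussed above can be written, in generic notation (not necessarily the paper's), by updating each prior in the class and taking the envelope: for a precise likelihood $L_x$ on a parameter space $\Theta$ and a prior class $\mathcal{P}$,
$$\overline{\pi}(A \mid x) \;=\; \sup_{\pi \in \mathcal{P}} \frac{\int_A L_x \, d\pi}{\int_{\Theta} L_x \, d\pi},$$
the upper envelope of the element-wise Bayesian posteriors of $A$.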
Adequate uncertainty representation and quantification have become imperative in various scientific disciplines, especially in machine learning and artificial intelligence. As an alternative to representing uncertainty via one single probability measure…
External link:
http://arxiv.org/abs/2306.09586
Algorithms that balance the stability-plasticity trade-off are well-studied in the continual learning literature. However, only a few of them focus on obtaining models for specified trade-off preferences. When solving the problem of continual learning…
External link:
http://arxiv.org/abs/2305.14782