A probabilistic view on predictive constructions for Bayesian learning
Author: | Patrizia Berti, Emanuela Dreassi, Fabrizio Leisen, Luca Pratelli, Pietro Rigo |
Contributors: | Berti Patrizia, Dreassi Emanuela, Leisen Fabrizio, Pratelli Luca, Rigo Pietro |
Year of publication: | 2022 |
Subjects: | Statistics and Probability; Stationarity; FOS: Computer and information sciences; Conditional identity in distribution; General Mathematics; Bayesian inference; Probability (math.PR); Mathematics - Statistics Theory; Statistics Theory (math.ST); Methodology (stat.ME); Predictive distribution; Sequential prediction; Exchangeability; FOS: Mathematics; Statistics, Probability and Uncertainty; Statistics - Methodology; Mathematics - Probability |
DOI: | 10.48550/arxiv.2208.06785 |
Description: | Given a sequence $X=(X_1,X_2,\ldots)$ of random observations, a Bayesian forecaster aims to predict $X_{n+1}$ based on $(X_1,\ldots,X_n)$ for each $n\ge 0$. To this end, in principle, she only needs to select a collection $\sigma=(\sigma_0,\sigma_1,\ldots)$, called a "strategy" in what follows, where $\sigma_0(\cdot)=P(X_1\in\cdot)$ is the marginal distribution of $X_1$ and $\sigma_n(\cdot)=P(X_{n+1}\in\cdot\mid X_1,\ldots,X_n)$ is the $n$-th predictive distribution. By the Ionescu-Tulcea theorem, $\sigma$ can be assigned directly, without passing through the usual prior/posterior scheme. One main advantage is that no prior probability needs to be selected. In a nutshell, this is the predictive approach to Bayesian learning. This paper provides a concise review of this approach. We try to place it in the proper framework, to clear up a few misunderstandings, and to provide a unifying view. Some recent results are discussed as well. In addition, some new strategies are introduced and the corresponding distribution of the data sequence $X$ is determined. The strategies concern generalized Pólya urns, random change points, covariates and stationary sequences. |
Database: | OpenAIRE |
External link: |
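
To illustrate the predictive approach summarized in the description above, here is a minimal Python sketch of one classical strategy: the Pólya urn (Dirichlet-process) predictive rule, where $\sigma_n$ mixes a base distribution $\nu$ with the empirical distribution of the first $n$ observations. This is only an example of the kind of strategy the paper is about, not the generalized urn schemes it introduces; the function name `polya_urn_sequence` and the parameters `theta` and `base_sampler` are illustrative choices.

```python
import random

# Illustrative sketch (not the paper's construction): the classical Polya-urn
# strategy. The n-th predictive is
#   sigma_n(.) = (theta * nu(.) + sum_{i<=n} delta_{X_i}(.)) / (theta + n).
# Assigning the predictives sigma_0, sigma_1, ... directly determines the law
# of the whole sequence X (Ionescu-Tulcea); this particular choice yields an
# exchangeable sequence.

def polya_urn_sequence(n, theta=1.0, base_sampler=random.random, rng=random):
    """Sample X_1, ..., X_n sequentially from the Polya-urn predictives."""
    xs = []
    for k in range(n):
        # With probability theta / (theta + k) draw a fresh value from the
        # base distribution nu; otherwise repeat one of the k previous
        # observations, chosen uniformly at random.
        if rng.random() < theta / (theta + k):
            xs.append(base_sampler())
        else:
            xs.append(rng.choice(xs))
    return xs

if __name__ == "__main__":
    print(polya_urn_sequence(10, theta=2.0))
```

Sampling forward through the predictives, as done here, is exactly the "strategy first" viewpoint: no prior is specified, yet the joint distribution of the data is fully determined.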