Learning from conditionals: A critical assessment of recent proposals
Author: | Lipp, Tobias |
---|---|
Language: | English |
Year of publication: | 2022 |
Subject: | |
DOI: | 10.5281/zenodo.6327434 |
Description: | An agent who learns the indicative conditional "If A, then B" should update her credences P over A and B. The central question is how this update should proceed. The learning is typically modelled in a probabilistic framework, which makes it possible to consider non-strict conditionals and partial beliefs and to apply reasoning from Bayesian epistemology. It is generally thought that if the conditional is non-strict, the problem is underdetermined by Bayesian norms. In this article we show that this is not the case: there is neither need nor room for additional principles. Jeffrey conditionalisation completely determines the update. The resulting update is appropriate for the Judy Benjamin problem and for the more general Lena the scientist problem. |
References: | Bradley, R. (2005). Radical probabilism and Bayesian conditioning. Philosophy of Science 72 (2), 342–364. Douven, I. and J.-W. Romeijn (2011). A new resolution of the Judy Benjamin problem. Mind 120 (479), 637–670. Eva, B., S. Hartmann, and S. R. Rad (2020). Learning from conditionals. Mind 129 (514), 461–508. Hartmann, S. and U. Hahn (2021). How to revise beliefs from conditionals: A new proposal. In Proceedings of the Annual Meeting of the Cognitive Science Society, Volume 43. Popper, K. and D. Miller (1983). A proof of the impossibility of inductive probability. Nature 302 (5910), 687–688. van Fraassen, B. C. (1981). A problem for relative information minimizers in probability kinematics. The British Journal for the Philosophy of Science 32 (4), 375–379. |
Database: | OpenAIRE |
External link: |
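As background to the abstract above, here is a minimal sketch, in LaTeX, of the general form of a Jeffrey update and of one illustrative way it could be applied to learning "If A, then B". The inputs in the second display (updating over the partition {A∧B, A∧¬B, ¬A}, setting the new conditional credence in B given A to p, and keeping the credence in A fixed) are assumptions made purely for illustration; they are not taken from the paper, whose claim is precisely that Jeffrey conditionalisation itself settles how the update must go.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% General form of a Jeffrey update on a partition E_1,...,E_n,
% where q_1,...,q_n are the newly learned credences over the cells:
\[
  P_{\text{new}}(X) \;=\; \sum_{i=1}^{n} q_i \, P(X \mid E_i),
  \qquad \sum_{i=1}^{n} q_i = 1 .
\]
% Illustrative instance (assumed inputs, not the paper's construction):
% learn "If A, then B" with strength p by updating over the partition
% {A \wedge B, A \wedge \neg B, \neg A}, with the new cell credences
% chosen so that P_new(B|A) = p while P_new(A) = P(A). Then:
\[
  P_{\text{new}}(B) \;=\; P(A)\,p \;+\; \bigl(1 - P(A)\bigr)\,P(B \mid \neg A).
\]
\end{document}
```

Whether the credence in A may be held fixed in such an update is exactly what the Judy Benjamin problem (van Fraassen 1981; Douven and Romeijn 2011) puts in question, which is why that input is flagged here as an assumption.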