Integrating computation into the mechanistic hierarchy in the cognitive and neural sciences
Author: | Lotem Elber-Dorozko, Oron Shagrir |
---|---|
Year of publication: | 2019 |
Subject: | Cognitive science; Hierarchy; Computational model; Relation (database); Computer science; 05 social sciences; General Social Sciences; 06 humanities and the arts; 0603 philosophy, ethics and religion; 050105 experimental psychology; Automaton; Philosophy of language; Philosophy; Mechanism (philosophy); Component (UML); 060302 philosophy; Reinforcement learning; 0501 psychology and cognitive sciences |
Source: | Synthese 199: 43–66 |
ISSN: | 1573-0964, 0039-7857 |
DOI: | 10.1007/s11229-019-02230-9 |
Description: | It is generally accepted that, in the cognitive and neural sciences, there are both computational and mechanistic explanations. We ask how computational explanations can integrate into the mechanistic hierarchy. The problem stems from the fact that implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical, implementing states, is a homomorphism: a structure-preserving mapping (an illustrative sketch of this condition follows the record below). The mechanistic relation, however, is that of part/whole: the explaining features in a mechanistic explanation are the components of the explanandum phenomenon and their causal organization. Moreover, each component at one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent states and properties that implement them. How, then, do the computational and the implementational integrate to create the mechanistic hierarchy? After explicating the general problem (Sect. 2), we further demonstrate it through a concrete example from the cognitive and neural sciences: reinforcement learning (Sects. 3 and 4). We then examine two possible solutions (Sect. 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits with the view that computational explanations are mechanistic sketches. On the other solution, there are two separate hierarchies, one computational and one implementational, related by the implementation relation. This picture fits with the view that computational explanations are functional and autonomous explanations. It is less clear how these solutions fit with the view that computational explanations are full-fledged mechanistic explanations. Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (Sect. 6). |
Database: | OpenAIRE |
External link: |
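As an illustrative gloss on the homomorphism condition mentioned in the abstract, the sketch below states a standard structure-preservation requirement. It is a minimal sketch under assumed notation (S_A, delta_A, S_P, delta_P, and m are introduced here and are not taken from the article), and for simplicity it writes the mapping from physical to abstract states, whereas the abstract phrases the relation in the opposite direction.

```latex
% Minimal sketch of a homomorphism-style implementation condition.
% Assumed notation (illustrative only, not from the article):
%   S_A, \delta_A : state set and transition function of the abstract automaton
%   S_P, \delta_P : state set and dynamics of the physical implementing system
%   m : S_P -> S_A : mapping assigning an abstract state to each physical state
% The condition: mapping a physical state and then applying the abstract
% transition agrees with letting the physical dynamics run and then mapping.
\[
  \delta_A\bigl(m(p)\bigr) \;=\; m\bigl(\delta_P(p)\bigr)
  \qquad \text{for all } p \in S_P .
\]
```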