Artificial Neural Network Simulations of Human Learning Suggest the Presence of Metastable Attractors in Visual Memory

Authors: Frederic Surre, Philippe Chassy
Year of publication: 2020
Source: Journal of Modeling and Optimization, 12:1-11
ISSN: 1759-7676
Description: The attractor hypothesis states that knowledge is encoded as topologically defined, stable configurations of connected cell assemblies. Irrespective of its original state, a network encoding new information will thus self-organize until it reaches the required stable state. To investigate memory structure, a multimodular neural network architecture, termed Magnitron, was developed. Magnitron is a biologically inspired cognitive architecture that simulates digit recognition. It implements perceptual input, human visual long-term memory in the ventral visual pathway and, to a lesser extent, working memory processes. To test the attractor hypothesis, a Monte Carlo simulation of 10,000 individuals was run. Each simulated learner was trained to recognize the ten digits, from the novice to the expert stage. The results replicate several features of human learning. First, they show that random connectivity in long-term visual memory accounts for novices’ performance. Second, the learning curves show that Magnitron reproduces the well-known psychological power law of practice. Third, after learning took place, performance departed from chance level and reached the minimum target of 95% correct hits, thus simulating human performance in children (i.e., at the age when digits are learned). Magnitron also replicates biological findings. In line with research using voxel-based morphometry, Magnitron showed that matter density increases as training takes place. Crucially, a spatial analysis of the connectivity patterns in long-term visual memory supported the hypothesis of a stable attractor. The significance of these results for memory theory is discussed. (Illustrative sketches of the power law and of attractor convergence follow this record.)
Database: OpenAIRE
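
The abstract reports that Magnitron's learning curves follow the power law of practice but does not spell out the equation. As a hedged reminder, the law is conventionally written in its standard textbook form (this is not a formula quoted from the paper itself):

$$T(N) = a + b\,N^{-c}, \qquad c > 0,$$

where $T(N)$ is the response time (or error rate) on the $N$-th practice trial, $a$ is the asymptotic performance floor, $b$ is the improvement available through practice, and $c$ is the learning rate. On log-log axes the improvement term $b\,N^{-c}$ plots as a straight line, which is the signature typically checked when a simulated learning curve is said to follow the power law.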
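The attractor hypothesis itself, a network settling into the same stable configuration irrespective of its starting state, is classically demonstrated with a Hopfield-style network. The sketch below is not Magnitron, whose multimodular architecture is not detailed in this record; it is a minimal, generic illustration of convergence to a stored attractor, and all function names are hypothetical.

```python
import numpy as np

def store_patterns(patterns):
    """Hebbian rule: build symmetric weights from a set of +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / patterns.shape[0]

def recall(W, state, max_steps=100):
    """Asynchronous updates until the state stops changing, i.e. a stable attractor."""
    state = state.copy()
    for _ in range(max_steps):
        previous = state.copy()
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
        if np.array_equal(state, previous):
            break                   # fixed point reached
    return state

# Store one 25-unit pattern, corrupt 5 units, and let the dynamics settle.
rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=25)
W = store_patterns(pattern[None, :])
noisy = pattern.copy()
noisy[rng.choice(25, size=5, replace=False)] *= -1
print("recovered original:", np.array_equal(recall(W, noisy), pattern))
```

Starting from the corrupted state, the asynchronous dynamics halt at a fixed point; for a lightly loaded network that fixed point is the stored pattern. This stability under perturbation is the generic property that the abstract's spatial analysis of connectivity patterns tests for in Magnitron's long-term visual memory.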