Author: |
Koçillari L; Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany.; Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany., Lorenz GM; Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany.; Istituto Italiano di Tecnologia, Genova, Italy.; Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy., Engel NM; Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany., Celotto M; Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany.; Istituto Italiano di Tecnologia, Genova, Italy., Curreli S; Istituto Italiano di Tecnologia, Genova, Italy., Malerba SB; Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany., Engel AK; Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany., Fellin T; Istituto Italiano di Tecnologia, Genova, Italy., Panzeri S; Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany.; Istituto Italiano di Tecnologia, Genova, Italy. |
Abstract: |
Shannon information theory has long been a tool of choice for measuring empirically how populations of neurons in the brain encode information about cognitive variables. Recently, Partial Information Decomposition (PID) has emerged as a principled way to break down this information into components, identifying not only the unique information carried by each neuron but also whether relationships between neurons generate synergistic or redundant information. While it has long been recognized that Shannon information measures computed on neural activity suffer from a (mostly upward) limited-sampling estimation bias, this issue has largely been ignored in the burgeoning field of PID analysis of neural activity. We used simulations to investigate the limited-sampling bias of PID computed from discrete probabilities (suited to describing neural spiking activity). We found that PID suffers from a large bias that is uneven across components, with synergy by far the most biased. Using approximate analytical expansions, we found that the bias of synergy increases quadratically with the number of discrete responses of each neuron, whereas the biases of unique and redundant information increase only linearly or sub-linearly. Based on this understanding of the bias properties of PID, we developed simple yet effective procedures that correct for the bias and greatly improve PID estimation with respect to current state-of-the-art procedures. We applied these bias-correction procedures to datasets of 53,117 pairs of neurons in the auditory cortex, posterior parietal cortex, and hippocampus of mice performing cognitive tasks, deriving precise estimates and bounds of how synergy and redundancy vary across these brain regions. |
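The abstract describes PID components and their limited-sampling bias but gives no formulas or code. Below is a minimal Python sketch, not the authors' implementation: it computes a two-source PID using the Williams-Beer I_min redundancy measure (one of several PID definitions; the paper may use a different one) and illustrates the plug-in limited-sampling bias by comparing finite-sample estimates against the true distribution. All function names are illustrative.

```python
# Minimal sketch (not the authors' code): Williams-Beer PID for two discrete
# sources, plus a simulation of the limited-sampling bias of the plug-in
# estimator. Assumes a joint table p[s, x1, x2] over stimulus and responses.
import numpy as np

def mutual_info(pxy):
    """Mutual information (bits) from a joint probability table p(x, y)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def pid_imin(p):
    """Williams-Beer PID of I(S; X1, X2) from a table p[s, x1, x2].

    Returns (redundancy, unique_1, unique_2, synergy) in bits.
    """
    ps = p.sum(axis=(1, 2))                 # p(s)
    p_s_x1 = p.sum(axis=2)                  # p(s, x1)
    p_s_x2 = p.sum(axis=1)                  # p(s, x2)

    def specific_info(psx):
        # Specific information I(S=s; X) = sum_x p(x|s) log2(p(s|x) / p(s)).
        px = psx.sum(axis=0, keepdims=True)
        with np.errstate(divide="ignore", invalid="ignore"):
            p_s_given_x = psx / px
            p_x_given_s = psx / psx.sum(axis=1, keepdims=True)
        spec = np.zeros(psx.shape[0])
        for s in range(psx.shape[0]):
            for x in range(psx.shape[1]):
                if psx[s, x] > 0:
                    spec[s] += p_x_given_s[s, x] * np.log2(
                        p_s_given_x[s, x] / ps[s])
        return spec

    spec1, spec2 = specific_info(p_s_x1), specific_info(p_s_x2)
    redundancy = float((ps * np.minimum(spec1, spec2)).sum())
    i1, i2 = mutual_info(p_s_x1), mutual_info(p_s_x2)
    i_joint = mutual_info(p.reshape(p.shape[0], -1))
    # PID identity: I(S; X1, X2) = R + U1 + U2 + Synergy.
    return (redundancy, i1 - redundancy, i2 - redundancy,
            i_joint - i1 - i2 + redundancy)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "true" distribution over (stimulus, response_1, response_2).
    true_p = rng.random((2, 4, 4))
    true_p /= true_p.sum()
    true_pid = np.array(pid_imin(true_p))
    # Limited-sampling bias: mean plug-in estimate over repeats minus truth.
    n_trials, n_repeats = 200, 100
    est = np.zeros((n_repeats, 4))
    for r in range(n_repeats):
        counts = rng.multinomial(n_trials, true_p.ravel())
        est[r] = pid_imin(counts.reshape(true_p.shape) / n_trials)
    bias = est.mean(axis=0) - true_pid
    for name, b in zip(["redundancy", "unique_1", "unique_2", "synergy"], bias):
        print(f"{name:10s} bias = {b:+.4f} bits")
```

Running the sketch with small trial counts typically shows the largest upward bias in the synergy term, consistent with the abstract's finding; the authors' actual bias-correction procedures are not reproduced here.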