Popis: |
One important goal of BRAIN projects is to crack the neural code: to understand how information is represented in patterns of electrical activity generated by ensembles of neurons. Yet the major stumbling block in understanding the neural code is neuronal variability: neurons in the brain discharge their spikes with tremendous variability, both in control resting states and across trials within the same experiment. Such ongoing spike variability poses a great conceptual challenge to the classic rate code and to synchrony-based temporal codes. In practice, spike variability is typically removed by over-the-trial averaging methods such as the peri-event spike histogram. Rather than viewing neuronal variability as a noise problem, here we hypothesize that neuronal variability should be viewed as a self-information processor. Under this conceptual framework, neurons transmit information by conforming to the basic logic of statistical Self-Information Theory: spikes with higher-probability inter-spike intervals (ISIs) contain less information, whereas spikes with lower-probability ISIs convey more information and are termed surprisal spikes. In other words, real-time information is encoded not by changes in firing frequency per se, but by the probability of each spike's ISI variability. When these surprisal spikes occur, as positive or negative surprisals, in a temporally coordinated manner across populations of cells, they generate a cell-assembly neural code that conveys discrete quanta of information in real time. Importantly, such a surprisal code affords not only robust resilience to interference, but also biochemical coupling to energy metabolism, protein synthesis, and gene expression at both synaptic sites and the cell soma. We describe how this neural self-information theory might be used as a general decoding strategy to uncover the brain's various cell assemblies in an unbiased manner.
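The core quantitative idea above — that a spike's information content is the self-information of its ISI, I = -log2 p(ISI) — can be sketched in a few lines. The following is a minimal illustration, not the authors' actual decoding pipeline: the bin width, the empirical-histogram probability estimate, and the surprisal threshold are all illustrative assumptions.

```python
import math
from collections import Counter

def isi_surprisals(spike_times, bin_width=0.005):
    """Self-information, in bits, of each inter-spike interval (ISI).

    p(ISI) is estimated from the empirical ISI histogram of this spike
    train (bin_width is an illustrative choice, not from the source).
    Rare ISIs get high surprisal; common ISIs get low surprisal.
    """
    isis = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    counts = Counter(round(isi / bin_width) for isi in isis)
    n = len(isis)
    return [-math.log2(counts[round(isi / bin_width)] / n) for isi in isis]

def surprisal_spikes(spike_times, threshold_bits=3.0, bin_width=0.005):
    """Indices of candidate 'surprisal spikes': spikes whose ISI
    surprisal exceeds threshold_bits (an illustrative cutoff)."""
    surp = isi_surprisals(spike_times, bin_width)
    return [i + 1 for i, s in enumerate(surp) if s > threshold_bits]

# A mostly regular 10 ms train with one 100 ms gap: the spike ending
# the rare long ISI is flagged as a surprisal spike.
times = [i * 0.01 for i in range(21)]
times.append(times[-1] + 0.1)
print(surprisal_spikes(times))
```

Under this sketch, detecting temporally coordinated surprisal spikes across many simultaneously recorded neurons would then amount to scanning for population-level coincidences of such flagged events.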