Validation of neural spike sorting algorithms without ground-truth information.
Author: | Barnett AH; Simons Center for Data Analysis, and Department of Mathematics, Dartmouth College, United States. Electronic address: ahb@math.dartmouth.edu., Magland JF; Simons Center for Data Analysis, and Department of Radiology, University of Pennsylvania, United States., Greengard LF; Simons Center for Data Analysis, and Courant Institute, New York University, United States. |
---|---|
Language: | English |
Source: | Journal of neuroscience methods [J Neurosci Methods] 2016 May 01; Vol. 264, pp. 65-77. Date of Electronic Publication: 2016 Feb 28. |
DOI: | 10.1016/j.jneumeth.2016.02.022 |
Abstract: | Background: The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. New Method: We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. Results: We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Comparison With Existing Methods: Metrics have until now relied on ground truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Conclusions: Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms. (Copyright © 2016 Elsevier B.V. All rights reserved.) |
Database: | MEDLINE |
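The "New Method" paragraph of the abstract describes rerunning a spike sorter under perturbations of the data and scoring the stability of each output unit, without looking inside the algorithm. Below is a minimal Python sketch of that rerun-and-compare idea under stated assumptions: each sorting run is represented as a dict mapping unit labels to sorted spike-time arrays, and agreement between units is a Jaccard-style fraction of spike times matched within a tolerance. The function names, the tolerance, and the agreement score are illustrative choices, not the paper's exact metric definitions.

```python
import numpy as np

def count_matches(t1, t2, tol=10):
    """Count spike times in sorted arrays t1, t2 that pair up within
    tol samples, using a greedy two-pointer sweep (each spike pairs
    at most once)."""
    i = j = n = 0
    while i < len(t1) and j < len(t2):
        d = t1[i] - t2[j]
        if abs(d) <= tol:
            n += 1
            i += 1
            j += 1
        elif d < 0:
            i += 1
        else:
            j += 1
    return n

def unit_stability(run_a, run_b, tol=10):
    """For each unit in run_a (dict: label -> sorted spike-time array),
    return its best Jaccard-style agreement with any unit in run_b.
    Agreement = matched / (n_a + n_b - matched), so 1.0 means the two
    units fired at identical times and 0.0 means no overlap."""
    stability = {}
    for label_a, times_a in run_a.items():
        best = 0.0
        for times_b in run_b.values():
            m = count_matches(times_a, times_b, tol)
            denom = len(times_a) + len(times_b) - m
            if denom > 0:
                best = max(best, m / denom)
        stability[label_a] = best
    return stability

# Hypothetical usage: compare a sort of the raw data against a sort of
# a perturbed copy, and flag units whose best agreement is low.
run_a = {"unit1": np.array([100, 250, 400]), "unit2": np.array([150, 300])}
run_b = {"u7": np.array([102, 251, 398]), "u9": np.array([700])}
print(unit_stability(run_a, run_b))  # unit1 matches u7 well; unit2 does not
```

In the spirit of the abstract, a full workflow would rerun the actual sorter on a perturbed copy of the recording (for example, with resampled noise added) and treat any unit whose best agreement falls below a chosen threshold as unstable, and hence not credible for downstream analysis.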