Spike detection: Inter-reader agreement and a statistical Turing test on a large data set

Clin Neurophysiol. 2017 Jan;128(1):243-250. doi: 10.1016/j.clinph.2016.11.005. Epub 2016 Nov 14.

Abstract

Objective: Compare the spike detection performance of three skilled humans and three computer algorithms.

Methods: 40 prolonged EEGs, 35 containing reported spikes, were evaluated. Spikes and sharp waves were marked by the humans and algorithms. Pairwise sensitivity and false positive rates were calculated for each human-human and algorithm-human pair. Differences in human pairwise performance were calculated and compared to the range of algorithm versus human performance differences as a type of statistical Turing test.
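The pairwise comparison described above can be sketched as follows. This is a minimal illustration only: the matching tolerance (0.2 s) and the greedy one-to-one matching rule are assumptions, since the abstract does not specify how two readers' marks were paired.

```python
# Hypothetical sketch of a pairwise sensitivity / false-positive-rate
# computation between two readers' spike marks. The tolerance window and
# greedy one-to-one matching are assumptions, not the paper's stated method.

def pairwise_metrics(marks_a, marks_b, duration_min, tol=0.2):
    """Treat reader A's marks as reference: return B's sensitivity against A
    and B's unmatched marks per minute as B's false positive rate."""
    unmatched_b = sorted(marks_b)
    hits = 0
    for t in sorted(marks_a):
        # greedy match: first unmatched B mark within +/- tol seconds
        match = next((u for u in unmatched_b if abs(u - t) <= tol), None)
        if match is not None:
            hits += 1
            unmatched_b.remove(match)
    sensitivity = hits / len(marks_a) if marks_a else 0.0
    fp_per_min = len(unmatched_b) / duration_min
    return sensitivity, fp_per_min

# Illustrative spike times in seconds for two readers over a 2-minute record
a = [10.0, 25.3, 60.1, 90.0]
b = [10.1, 25.2, 45.0, 90.1]
sens, fp = pairwise_metrics(a, b, duration_min=2.0)
```

Swapping the roles of the two readers gives the reciprocal sensitivity, which is why the study reports metrics per ordered pair.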

Results: 5474 individual spike events were marked by the humans. Mean pairwise human sensitivities were 40.0%, 42.1%, and 51.5%, with false positive rates of 0.80, 0.97, and 1.99/min. Only the Persyst 13 (P13) algorithm performed comparably to the humans, at 43.9% sensitivity and 1.65 false positives/min. Evaluation of pairwise differences in sensitivity and false positive rate demonstrated that P13 met statistical noninferiority criteria compared to the humans.
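The core logic of the "statistical Turing test" can be sketched as a simple check: the algorithm passes if its pairwise differences versus each human fall within the spread of human-versus-human differences. The function, the margin, and all numbers below are illustrative assumptions; the paper's actual noninferiority procedure is not detailed in the abstract.

```python
# Hypothetical sketch of the noninferiority comparison: does each
# algorithm-vs-human pairwise difference fall inside the range of
# human-vs-human differences (optionally widened by a margin)?
# All values here are illustrative, not data from the study.

def within_human_range(algo_vs_human, human_vs_human, margin=0.0):
    lo = min(human_vs_human) - margin
    hi = max(human_vs_human) + margin
    return all(lo <= d <= hi for d in algo_vs_human)

human_diffs = [-0.115, 0.021, 0.094]  # illustrative human-human sensitivity differences
algo_diffs = [0.039, 0.018, -0.076]   # illustrative P13-vs-human differences
passes = within_human_range(algo_diffs, human_diffs)
```

The same check would be run separately for sensitivity and for false positive rate, since the study evaluated noninferiority on both metrics.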

Conclusion: Humans had only a fair level of agreement in spike marking. The P13 algorithm was statistically noninferior to the humans.

Significance: This was the first time that a spike detection algorithm and humans performed similarly. The performance comparison methodology utilized here is generally applicable to problems in which skilled human performance is the desired standard and no external gold standard exists.

Keywords: Artificial neural network; Automated spike detection; EEG; Epileptiform; Inter-reader agreement; Noninferiority.

MeSH terms

  • Action Potentials / physiology*
  • Algorithms*
  • Brain / physiology*
  • Databases, Factual* / standards
  • Electroencephalography / methods*
  • Electroencephalography / standards
  • Female
  • Humans
  • Male
  • Retrospective Studies
  • Signal Processing, Computer-Assisted*