Probability and Visual Aids for Assessing Intervention Effectiveness in Single-Case Designs: A Field Test

Behav Modif. 2015 Sep;39(5):691-720. doi: 10.1177/0145445515593512. Epub 2015 Jul 5.

Abstract

Single-case data analysis still relies heavily on visual inspection, and, at the same time, it is not clear to what extent the results of different quantitative procedures converge in identifying an intervention effect and its magnitude when applied to the same data; this is the type of evidence provided here for two procedures. One of the procedures, included due to the importance of providing objective criteria to visual analysts, is a visual aid fitting and projecting split-middle trend while taking into account data variability. The other procedure converts several different metrics into probabilities, making their results comparable. In the present study, we explore to what extent these two procedures coincide in the magnitude of the intervention effect identified in a set of studies stemming from a recent meta-analysis. The procedures concur to a greater extent with the values of the indices computed and with each other and, to a lesser extent, with our own visual analysis. For distinguishing smaller from larger effects, the probability-based approach seems somewhat better suited. Moreover, the results of the field test suggest that the latter is a reasonably good mechanism for translating different metrics into similar labels. User-friendly R code is provided for promoting the use of the visual aid, together with a quantification based on nonoverlap and the label provided by the probability approach.
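The split-middle procedure the abstract refers to is a classic visual aid: the baseline phase is split into halves, the median session and median value of each half define two points, and the line through them is projected into the intervention phase. A probability can then be attached by counting intervention-phase points above the projected trend and applying a binomial test. The sketch below is a minimal Python illustration of that general technique (the article itself provides R code); the data and function names are hypothetical, not taken from the study.

```python
from math import comb
from statistics import median

def split_middle(y):
    """Fit a split-middle trend to baseline data y (one value per session).

    Split the baseline into halves, take the median session index and the
    median value within each half, and pass a line through those two points.
    Returns (slope, intercept); the projected level at session x is
    intercept + slope * x.
    """
    n = len(y)
    half = n // 2  # with odd n, the middle point is omitted
    x1, y1 = median(range(half)), median(y[:half])
    x2, y2 = median(range(n - half, n)), median(y[n - half:])
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

def binomial_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k
    intervention points falling above the projected trend if the
    baseline trend simply continued unchanged."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical data: 6 baseline sessions, then 5 intervention sessions.
baseline = [3, 4, 5, 5, 6, 7]
treatment = [9, 10, 9, 11, 12]

slope, intercept = split_middle(baseline)
projected = [intercept + slope * (len(baseline) + i)
             for i in range(len(treatment))]
above = sum(yt > yp for yt, yp in zip(treatment, projected))
p_value = binomial_tail(above, len(treatment))
```

Note that this bare sketch does not incorporate the data-variability envelope the article adds around the projected trend; it only shows the trend fitting and the probability conversion that make the two procedures comparable.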

Keywords: effect size; single-case designs; software; split-middle; visual aids.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Audiovisual Aids*
  • Humans
  • Meta-Analysis as Topic
  • Probability*
  • Research Design / standards*
  • Statistics as Topic / methods*
  • Treatment Outcome*