Evaluation of the type I error rate when using parametric bootstrap analysis of a cluster randomized controlled trial with binary outcomes and a small number of clusters

Comput Methods Programs Biomed. 2022 Mar;215:106654. doi: 10.1016/j.cmpb.2022.106654. Epub 2022 Jan 21.

Abstract

Background: Cluster randomized controlled trials (cRCTs) are increasingly used but must be analyzed carefully. We conducted a simulation study to evaluate the validity of a parametric bootstrap (PB) approach with respect to the empirical type I error rate for a cRCT with binary outcomes and a small number of clusters.

Methods: We simulated a case study with a binary (0/1) outcome, four clusters (K = 4), and 100 subjects per cluster. To compare the validity of the test with respect to the type I error rate, we repeated the same experiment with K = 10, 20, and 30 clusters, generating 2,000 simulated datasets for each setting. To test the null hypothesis of no treatment effect, we used a generalized linear mixed model with a random intercept for clusters and obtained p-values from likelihood ratio tests (LRTs) whose reference distributions were generated by the parametric bootstrap, as implemented in the R package "pbkrtest".
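
The following is a minimal R sketch of the kind of simulation-and-test step described above, not the authors' code: it generates one dataset under the null hypothesis and applies the parametric bootstrap LRT via pbkrtest::PBmodcomp(). The ICC value, baseline prevalence, number of bootstrap samples, and variable names are illustrative assumptions.

    # Sketch only: one simulated cRCT dataset under the null, tested with a
    # parametric bootstrap LRT. Parameter values are illustrative, not the paper's.
    library(lme4)      # glmer() for the random-intercept logistic model
    library(pbkrtest)  # PBmodcomp() for the parametric bootstrap LRT

    set.seed(1)
    K <- 4; n <- 100                          # clusters and subjects per cluster
    icc <- 0.05                               # assumed ICC on the latent logit scale
    sigma2_b <- icc * (pi^2 / 3) / (1 - icc)  # implied random-intercept variance

    cluster <- factor(rep(1:K, each = n))
    arm     <- rep(rep(0:1, length.out = K), each = n)   # cluster-level treatment arm
    b       <- rnorm(K, 0, sqrt(sigma2_b))                # cluster random intercepts
    p       <- plogis(qlogis(0.3) + b[cluster])           # null: no treatment effect
    y       <- rbinom(K * n, 1, p)                        # binary (0/1) outcome

    full    <- glmer(y ~ arm + (1 | cluster), family = binomial)
    reduced <- glmer(y ~ 1   + (1 | cluster), family = binomial)

    pb <- PBmodcomp(full, reduced, nsim = 1000)  # parametric bootstrap LRT
    summary(pb)                                  # bootstrap p-value for the arm effect

Repeating this step over many simulated datasets and recording how often the bootstrap p-value falls below 0.05 gives the empirical type I error rate examined in the Results.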

Results: Averaged across all ICC values, the PB test produced empirical type I error rates of 9.1%, 5.5%, 4.9%, and 5.0% for K = 4, 10, 20, and 30, respectively. Error rates were higher in models with singular fits, in which the ICC was estimated to be zero and clustering was therefore effectively ignored; for K = 4 these ranged from 9.1% to 36.5%.
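
The inflation is attributed to singular fits, in which the random-intercept variance, and hence the estimated ICC, is zero. Assuming the glmer fit sketched above, one common way to flag such fits in lme4 is shown below; this is an illustrative check, not necessarily the authors' exact criterion.

    # Sketch: flag a singular fit and recover the estimated ICC (latent logit scale).
    is_singular <- isSingular(full, tol = 1e-4)       # lme4's built-in singularity check
    var_b       <- as.numeric(VarCorr(full)$cluster)  # estimated random-intercept variance
    icc_hat     <- var_b / (var_b + pi^2 / 3)         # estimated ICC; zero when singular
    c(singular = is_singular, icc_hat = icc_hat)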

Conclusion: Using the parametric bootstrap to analyze cRCTs with a small number of clusters yields inflated type I error rates and is therefore not valid.

Publication types

  • Randomized Controlled Trial

MeSH terms

  • Cluster Analysis
  • Computer Simulation
  • Humans
  • Linear Models
  • Research Design*
  • Sample Size