Purpose: Outcome misclassification in retrospective epidemiologic analyses has been well studied, but little is known about its effects on sequential statistical analysis during surveillance of medical product-associated risks, a planned capability of the US Food and Drug Administration's Sentinel System.
Methods: Using a vaccine example, we model and simulate sequential database surveillance in an observational data network under a variety of outcome detection algorithms. We consider how these algorithms, characterized by their sensitivity and positive predictive value, affect the length of surveillance and the timeliness of safety signal detection. We show investigators and users of these networks how to perform preparatory study design calculations that account for outcome misclassification in sequential database surveillance.
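The sketch below illustrates the kind of simulation this entails; it is not the authors' model or code. It assumes a simple non-differential misclassification mechanism (true cases detected with a given sensitivity, false positives accruing at an exposure-independent rate fixed by the stated PPV under the null), applies a Poisson MaxSPRT-style statistic as a stand-in for the sequential test, and uses hypothetical parameter values and an uncalibrated signaling threshold.

```python
# Illustrative sketch only (not the authors' model): simulate sequential
# database surveillance in which an outcome detection algorithm with given
# sensitivity and positive predictive value (PPV) distorts the detected event
# stream, then apply a Poisson MaxSPRT-style statistic at each "look" to see
# how misclassification delays signal detection. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def time_to_signal(base_rate, true_rr, sens, ppv,
                   person_time_per_look=20_000, max_looks=48, threshold=3.5):
    """Return the look at which the statistic first exceeds `threshold`,
    or None if surveillance ends without a signal."""
    # Non-differential misclassification: true cases are detected with
    # probability `sens`; false positives accrue at a constant rate chosen
    # so that the algorithm's PPV equals `ppv` under the null (RR = 1).
    fp_rate = base_rate * sens * (1.0 - ppv) / ppv
    null_rate = base_rate * sens + fp_rate            # expected detected rate, H0
    alt_rate = base_rate * true_rr * sens + fp_rate   # data-generating detected rate

    cum_obs, cum_exp = 0, 0.0
    for look in range(1, max_looks + 1):
        cum_obs += rng.poisson(alt_rate * person_time_per_look)
        cum_exp += null_rate * person_time_per_look
        # Poisson log-likelihood ratio maximized over RR >= 1 (MaxSPRT-style)
        if cum_obs > cum_exp:
            llr = cum_obs * np.log(cum_obs / cum_exp) - (cum_obs - cum_exp)
        else:
            llr = 0.0
        if llr >= threshold:  # illustrative threshold, not a calibrated critical value
            return look
    return None

# Compare no misclassification with an inclusive and a narrow detection algorithm.
for label, sens, ppv in [("no misclassification        ", 1.00, 1.00),
                         ("inclusive: sens .95, PPV .50", 0.95, 0.50),
                         ("narrow:    sens .50, PPV .95", 0.50, 0.95)]:
    looks = [time_to_signal(base_rate=1e-4, true_rr=2.0, sens=sens, ppv=ppv)
             for _ in range(1000)]
    hits = [x for x in looks if x is not None]
    med = np.median(hits) if hits else float("nan")
    print(f"{label}: signaled in {len(hits)/len(looks):5.1%} of runs, "
          f"median {med:.0f} looks to signal")
```

Under these assumed settings the perfectly classified outcome signals fastest, and the inclusive algorithm signals somewhat sooner than the narrow one for this rare outcome; the ordering depends on the chosen rates, relative risk, and threshold.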
Results: Non-differential outcome misclassification leads to longer surveillance times and less timely safety signal detection than in the absence of misclassification. Inclusive algorithms characterized by high sensitivity but low positive predictive value outperform narrower algorithms when detecting rare outcomes. This decision calculus may change considerably if medical chart validation procedures are required.
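One simplified way to see why this pattern can arise (an assumption-laden illustration, not necessarily the paper's formulation): let true events occur at rate $\lambda$ among the exposed with relative risk $\mathrm{RR}$, let the algorithm detect true cases with sensitivity $Se$, let false positives accrue at an exposure-independent rate $\varphi$, and let expected counts be derived with the same algorithm, so that $\mathrm{PPV} = \lambda Se / (\lambda Se + \varphi)$ under the null. Then the rate ratio among detected events is

$$\mathrm{RR}_{\mathrm{detected}} \;=\; \frac{\lambda\,\mathrm{RR}\,Se + \varphi}{\lambda\,Se + \varphi} \;=\; 1 + \mathrm{PPV}\,(\mathrm{RR} - 1).$$

In this simple model the detected rate ratio is attenuated toward the null in proportion to PPV, while sensitivity governs how many events, and hence how much statistical information, accrue per unit of surveillance time; this is one mechanism by which inclusive, high-sensitivity algorithms can still signal sooner for rare outcomes despite greater attenuation.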
Conclusions: These findings raise important questions regarding the design of observational data networks used for pharmacovigilance. Specifically, there are tradeoffs between populating such networks with large component databases and populating them with smaller integrated delivery system databases that can more readily access laboratory or clinical data and perform medical chart validation.
Keywords: adverse drug event; bias (epidemiology); outcome measurement error; pharmacoepidemiology; pharmacovigilance; postmarketing product surveillance.