Objectives: To evaluate an approach using automation and crowdsourcing to identify and classify randomized controlled trials (RCTs) for rheumatoid arthritis (RA) in a living systematic review (LSR).
Methods: Records from a database search for RCTs in RA were screened first by machine learning and Cochrane Crowd to exclude non-RCTs, then by trainee reviewers using a Population, Intervention, Comparison, and Outcome (PICO) annotator platform to assess eligibility and classify each trial to the appropriate review. Disagreements were resolved by experts using a custom online tool. We evaluated the efficiency gains, sensitivity, and accuracy of the approach, and interrater agreement between reviewers (kappa scores).
Results: From 42,452 records, machine learning and Cochrane Crowd excluded 28,777 (68%), trainee reviewers excluded 4,529 (11%), and experts excluded 7,200 (17%). The 1,946 records eligible for our LSR represented 220 RCTs and included 148/149 (99.3%) of known eligible trials from prior reviews. Although excluded from our LSR, 6,420 records were classified as other RCTs in RA to inform future reviews. False negative rates among trainees were highest for the RCT domain (12%), although only 1.1% of these were for the primary record. Kappa scores between pairs of reviewers indicated moderate to substantial agreement (0.40-0.69).
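For readers less familiar with the agreement statistic reported above, the following is a minimal sketch of Cohen's kappa for two reviewers' include/exclude decisions; the reviewer labels and counts are hypothetical illustrations, not the study's data.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
# and p_e is agreement expected by chance from each rater's marginals.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of records both reviewers labeled alike.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each reviewer's marginal label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions for 10 records:
a = ["include", "exclude", "exclude", "include", "exclude",
     "exclude", "include", "exclude", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude",
     "exclude", "exclude", "exclude", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))  # → 0.52
```

With these made-up labels the two reviewers agree on 8 of 10 records (p_o = 0.80) against a chance expectation of 0.58, giving kappa ≈ 0.52, which falls in the "moderate" band of the 0.40-0.69 range reported above.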
Conclusion: A screening approach combining machine learning, crowdsourcing, and trainee participation substantially reduced the screening burden for expert reviewers and was highly sensitive.
Keywords: Automation; Crowdsourcing; Living systematic reviews; Machine learning; Randomized controlled trials (RCTs); Rheumatoid arthritis; Systematic reviews.
Copyright © 2023 The Authors. Published by Elsevier Inc. All rights reserved.