A major challenge in scaling up psychological interventions worldwide is how to evaluate competency among the new workforces engaged in delivering psychological services. One approach to measuring competency is through standardized role plays, which have the benefits of standardization and reliance on observed behavior rather than written knowledge. However, role plays are also resource intensive and depend on adequate inter-rater reliability. We undertook a two-part scoping review to describe how competency is conceptualized in studies evaluating its relationship with client outcomes, focusing on the use of role plays, including how inter-rater reliability is achieved and how role-play scores relate to client outcomes. First, we identified 4 reviews encompassing 61 studies evaluating the association of competency with client outcomes. Second, we identified 39 competency evaluation tools, of which 21 were used in comparisons with client outcomes. Inter-rater reliability (intraclass correlation coefficient, ICC) was reported for 15 tools and ranged from 0.53 to 0.96 (mean ICC = 0.77). However, none of the outcome comparison studies measured competency with standardized role plays; instead, studies typically used therapy quality (i.e., ratings of sessions with actual clients) as a proxy for competency. This reveals a gap in the evidence base for competency and its role in predicting client outcomes. We therefore propose a competency research agenda to develop an evidence base for objective, standardized role plays that measure competency and its association with client outcomes. Open science registration: https://osf.io/nqhu7/.
Keywords: Common mental disorders; Competence; Developing countries; Paraprofessionals; Psychological treatments; Training.
Copyright © 2019 Elsevier Ltd. All rights reserved.