Assessing anatomy in a way that tests higher cognitive domains and clinical application is not always straightforward. The old "spotter" examination has been criticized for testing only low-level "identify" knowledge, whereas other assessment modalities, such as multiple choice questions, do not reflect the three-dimensional, applied nature of clinical anatomy. Medical curricula are frequently integrated, yet subject-specific examinations do not reflect the case-based, spiral, integrative nature of these curricula. The integrated anatomy practical paper (IAPP) is a hybrid of the old "spotter" and an objective structured clinical examination; it demonstrates how higher taxonomic levels can be assessed together with clinical features, and it integrates well with other disciplines. Importantly, the IAPP has been shown to be reliable and practical to administer. Data gathered from the five-year Bachelor of Medicine program over two academic years, covering four IAPP examinations (each 40 minutes, K = 60 items) taken by 440 students, revealed consistently strong reliability coefficients (Cronbach's alpha) of up to 0.923. Applying Bloom's taxonomy to the questions revealed a marked shift toward greater complexity in the level being tested: between 2009 and 2013, the number of low-level "remember" domain questions fell by 26%, while "understanding" domain questions increased by up to 15% and "applying" domain questions by 12%. Our findings highlight that it is possible, in a laboratory setting, to test anatomy knowledge and application that is integrated and fit for practice.
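For the reliability figure cited above, Cronbach's alpha is the standard internal-consistency coefficient for a K-item test; a common formulation is sketched below (the item and total-score variance symbols are generic, not values from the study).

\[
\alpha = \frac{K}{K-1}\left(1 - \frac{\sum_{i=1}^{K} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]

Here \(K\) is the number of items (here 60), \(\sigma^{2}_{Y_i}\) is the variance of scores on item \(i\), and \(\sigma^{2}_{X}\) is the variance of total test scores; values approaching 1, such as the 0.923 reported, indicate high internal consistency.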
Keywords: assessment methods; gross anatomy education; integrated curricula; knowledge level; laboratory examination; learning approach; medical education; objectively structured practical examination; practical examination; spotter examination.
© 2014 American Association of Anatomists.