
Annual Meeting of the Gesellschaft für Medizinische Ausbildung (GMA)

25.09. - 27.09.2014, Hamburg

Impact of response shift bias on the results of an outcome-based evaluation tool

Poster


Jahrestagung der Gesellschaft für Medizinische Ausbildung (GMA). Hamburg, 25.-27.09.2014. Düsseldorf: German Medical Science GMS Publishing House; 2014. DocP343

doi: 10.3205/14gma118, urn:nbn:de:0183-14gma1182

Published: September 11, 2014

© 2014 Schiekirka et al.
This is an Open Access article distributed under the terms of the Creative Commons license (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.de). It may be reproduced, distributed, and made publicly available, provided the author and source are credited.



Text

Introduction: Estimating learning outcome from comparative student self-ratings is a reliable and valid method to identify specific strengths and shortcomings in undergraduate medical curricula. However, requiring students to complete two evaluation forms (i.e. one before and one after teaching) might adversely affect response rates. Alternatively, students could be asked to rate their initial performance level retrospectively. This approach might threaten the validity of results due to response shift or effort justification bias.

Methods: Two consecutive cohorts of medical students taking a six-week cardio-respiratory module were included in this study. In both cohorts, performance gain was estimated for 33 specific learning objectives. In the first cohort, outcomes calculated from ratings provided before (pretest) and after (posttest) teaching were compared to outcomes derived from comparative self-ratings collected after teaching only (thentest and posttest). In the second cohort, only thentests and posttests were used to calculate outcomes, but data collection tools differed with regard to item presentation: in one group, thentest and posttest ratings were obtained sequentially on separate forms, while in the other, both ratings were obtained simultaneously for each learning objective. A sketch of the underlying comparison follows below.
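The comparison between pretest-based and thentest-based outcome estimates can be illustrated with a short Python sketch. The rating scale, the normalised-gain formula, and all numbers below are assumptions chosen for illustration only; the abstract does not state how performance gain was actually computed.

# Minimal sketch (assumed 6-point rating scale and a simple normalised-gain
# formula; the actual evaluation tool may differ).
import statistics

def performance_gain(before, after, scale_max=6):
    """Normalised gain per learning objective: (after - before) / (scale_max - before)."""
    return [(a - b) / (scale_max - b) if b < scale_max else 0.0
            for b, a in zip(before, after)]

# Hypothetical ratings for five of the 33 learning objectives
pretest  = [2, 3, 2, 4, 3]   # self-rating collected before teaching (cohort 1)
thentest = [2, 2, 1, 3, 3]   # retrospective "before" rating collected after teaching
posttest = [5, 5, 4, 5, 4]   # self-rating collected after teaching

gain_pre  = performance_gain(pretest, posttest)
gain_then = performance_gain(thentest, posttest)

# A positive difference would mean that thentest-based outcomes overestimate gain,
# i.e. the pattern expected under response shift bias.
print(statistics.mean(gain_then) - statistics.mean(gain_pre))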

Results: Using thentest ratings to calculate performance gain produced slightly higher values than using true pretest ratings (see Figure 1). Direct comparison of then- and posttest ratings also yielded slightly higher performance gain than sequential ratings, but this effect was negligibly small.

Discussion/conclusion: Given the small effect sizes, using thentests is recommended to increase student response rates. Item presentation in the posttest does not have a significant impact on results [1], [2], [3], [4].


References

1. Schiekirka S, Reinhardt D, Beissbarth T, Anders S, Pukrop T, Raupach T. Estimating learning outcomes from pre- and posttest student self-assessments: a longitudinal study. Acad Med. 2013;88(3):369-375. DOI: 10.1097/ACM.0b013e318280a6f6
2. Raupach T, Schiekirka S, Munscher C, Beissbarth T, Himmel W, Burckhardt G, Pukrop T. Piloting an outcome-based programme evaluation tool in undergraduate medical education. GMS Z Med Ausbild. 2012;29(3):Doc44. DOI: 10.3205/zma000814
3. Lam TC. Do Self-Assessments Work to Detect Workshop Success? Am J Eval. 2009;30:93-105.
4. Howard GS. Response-Shift Bias. Eval Rev. 1980;4:93-106.