gms | German Medical Science

Research in Medical Education – Chances and Challenges International Conference

20.05. - 22.05.2009, Heidelberg

Data collection and evaluation: does the instrument bias students' ratings of teaching quality? Some comparisons of various settings

Meeting Abstract

  • corresponding author Volkhard Fischer - Medizinische Hochschule Hannover, Hannover, Germany
  • Nina Seibicke - Medizinische Hochschule Hannover, Hannover, Germany
  • Volker Paulmann - Medizinische Hochschule Hannover, Hannover, Germany

Research in Medical Education - Chances and Challenges 2009. Heidelberg, 20.-22.05.2009. Düsseldorf: German Medical Science GMS Publishing House; 2009. Doc09rmeD5

doi: 10.3205/09rme20, urn:nbn:de:0183-09rme206

Published: May 5, 2009

© 2009 Fischer et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en). You are free: to Share – to copy, distribute and transmit the work, provided the original author and source are credited.


Abstract

Research question: The studies reported here aim to answer the following questions:

1. Are the results of online evaluations biased compared to evaluations conducted in the lecture room?
2. Are courses followed by easy exams rated better than courses followed by difficult exams?
3. Are the data affected by the time of their collection (before vs. after the exam)?
4. Does the overall assessment of a course differ from day-to-day evaluations carried out during the course?

Method:

  • Study 1 matches online evaluations of 40 courses with the corresponding lecture-room evaluation data.
  • Study 2 compares evaluations of courses with a high failure rate (>10%) with evaluations of courses in which all students passed the exam.
  • Study 3 compares evaluations given prior to the exam with those given afterwards.
  • Study 4 compares day-to-day ratings given during a course with the overall evaluation given at its end.

For the online evaluation the program Evasys was used, while the evaluation in lecture rooms was done with the program Q-Exam. In studies 1 and 2 the lecture-room evaluations took place immediately after the exam. In study 3 an additional appointment was made to evaluate the course with Q-Exam prior to the oral exam; after the exam, an online evaluation with Evasys was carried out. In study 4 the final evaluation of the course was conducted online with Evasys, while the single course days were rated on paper.

Results: The response rate varied between 10% and 51% in the online evaluation (Evasys) and between 60% and 97% in the lecture-room evaluation (Q-Exam). Apart from differences in the average rating of some courses, no method-related variations in means are traceable; these variations are not statistically significant. However, there are significant variations between courses within one term and between the same courses in different terms.

Conclusions: Compared with the lecture-room-based Q-Exam evaluations, the online evaluations are unbiased and representative, although their response rate is significantly lower. Only distributional differences in the skewness of the ratings occur. Furthermore, the hypothesis that a difficult exam has a negative impact on the subsequent evaluation cannot be corroborated.