gms | German Medical Science

GMS Journal for Medical Education

Gesellschaft für Medizinische Ausbildung (GMA)

ISSN 2366-5017

Student Evaluation Scale for Medical Courses with Simulations of the Doctor-Patient Interaction (SES-Sim)

Research article

  • corresponding author Eva Neumann - University of Cologne, Department of Psychosomatics and Psychotherapy, Cologne, Germany
  • author Rainer Obliers - University of Cologne, Department of Psychosomatics and Psychotherapy, Cologne, Germany
  • author Christine Schiessl - University of Cologne, Centre for Palliative Medicine, Cologne, Germany
  • author Christoph Stosch - University of Cologne, Faculty of Medicine, Office of the Dean of Studies, Cologne, Germany
  • author Christian Albus - University of Cologne, Department of Psychosomatics and Psychotherapy, Cologne, Germany

GMS Z Med Ausbild 2011;28(4):Doc56

doi: 10.3205/zma000768, urn:nbn:de:0183-zma0007688

This is the English version of the article.
The German version can be found at: http://www.egms.de/de/journals/zma/2011-28/zma000768.shtml

Received: December 1, 2010
Revised: July 14, 2011
Accepted: July 27, 2011
Published: November 15, 2011

© 2011 Neumann et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en). You are free: to Share – to copy, distribute and transmit the work, provided the original author and source are credited.


Abstract

Objective: Simulations of doctor-patient interactions have become a popular method for the training of medical skills, primarily communication skills. A new questionnaire for the measurement of students’ satisfaction with medical courses using this technique is presented, the Student Evaluation Scale for Medical Courses with Simulations of the Doctor-Patient Interaction (SES-Sim).

Method: A set of items focusing on the course quality and the core elements of simulations was created and presented to 220 medical students who had been trained with this method.

Results: Based on factor analyses, 18 items were selected for the final version of the scale, representing five dimensions: learning success, actors, premises, tutors and students. All five dimensions correlate significantly with a one-item measure of general satisfaction with the course.

Conclusion: The SES-Sim enables tutors to assess economically whether a course has met the students’ needs and what could be done better.

Keywords: Medical skills, communication, doctor-patient interaction, simulation, student evaluation


Introduction

In medical education, innovative courses employing teaching methods that simulate the doctor-patient interaction are increasingly used [1], [2], [3], [4]. In these simulations, students and actors enact interactions between doctors and patients as role play. The actors follow a script which specifies the content requirements for representing a particular disease and which they rehearse beforehand. Students also usually receive guidance on the role they are to fulfil in the simulation. Simulations of the doctor-patient interaction serve to practice practical medical skills. They appear particularly suitable for improving communication and social skills, but certain other skills, such as physical examinations, can also be practiced with this hands-on approach.

The advantages of this method are obvious. Because actors are more readily available than real patients, organising such a session is easy; there is no need to search for a patient who has the required disease and who agrees to participate in a teaching session. Because actors can be deployed in a predictable way, the relevant disease pattern can be simulated and the severity of symptoms varied. As no real patients are involved, emotional stress on patients is also avoided, and students can learn without fear, since a mistake will not harm a patient. Unlike real patient contact, where students usually receive no feedback, simulations allow feedback on their performance. The greatest strength of simulations is certainly their closeness to reality, which arises primarily because students interact with a human being (rather than practicing on a doll, for example). This closeness to reality is likely to facilitate the transfer of skills to future professional practice.

Reports from previous courses accordingly show that students assess this form of teaching very positively [5], [6], [7]. Participants also state that the courses have improved their communication skills [8], [9].

Standardised measurement instruments for evaluating teaching with simulations of the doctor-patient interaction are not yet available. General questionnaires for student evaluation of teaching seem to be unsuitable for two reasons:

  • First, many of the forms were developed by universities and university hospitals for internal use and have not been published, at least not in journals with a peer-review process. The psychometric quality of these instruments therefore cannot be assessed.
  • Second, important aspects of simulations are not covered. Because this method is not used in conventional courses, student evaluation questionnaires generally do not assess variables such as the performance of the actors, the quality of feedback or the improvement of communication skills.

In order to measure students’ satisfaction with courses simulating the doctor-patient interaction, a new questionnaire was developed in this project, the Student Evaluation Scale for Medical Courses with Simulations of the Doctor-Patient Interaction (SES-Sim).

SES-Sim was developed as part of the educational “PJ-Start-Block” project (key skills training and application in realistic daily routines) at the Medical Faculty of the University of Cologne. A course using simulations of the doctor-patient interaction as a central teaching method was designed and implemented in this project. It consists of a one-week block course at the end of the undergraduate degree in preparation for the Practical Year (PJ). It mimics everyday life on a hospital ward, with students assuming the roles of doctors who interact with actor-patients in different scenarios. The disease patterns of heart disease, appendicitis, ectopic pregnancy, chronic back pain, diabetes, urinary tract infection with fever and lung cancer are presented.

The role scripts are usually designed to present not only somatic symptoms but also psychosocial aspects relevant to the treatment of the diseases. For example, the script for the ectopic pregnancy specifies that the patient comes from a Muslim background and holds an understanding of illness that cannot be reconciled with scientific concepts. The role of the female diabetic patient includes problems with insulin therapy caused by a comorbid depressive reaction. The students are therefore faced with the task of responding appropriately not only to the somatic aspects but also to disease-related psychosocial issues.

In addition to the simulations, exercises are offered about medical tasks without patient contact (e.g. morning and afternoon meetings of the medical team, writing medical reports and requesting consultations).


Methods

Item Composition

To develop the new questionnaire, 33 items were compiled which mirror the central elements of the course contents. Some items were adapted from the questionnaires for student evaluation of university courses by Diehl [10]; most items were newly worded.

As a response format, a 5-point Likert scale with endpoints 1 (“strongly disagree”) and 5 (“strongly agree”) was defined.

Sample

The compiled items were presented to the first participants of the PJ-Start-Block in the winter semester 2009/10 and the summer semester 2010 on the last day of the block course. A total of 220 medical students filled in the questionnaire (66% women, 34% men), with an average age of 26 years.

Statistical Analysis

Item selection was based on three factor analyses. The data of the samples from the winter and the summer semester were first separately subjected to a factor analysis. Based on the results of both factor analyses, items were selected for the final version of the scale. The item selection was statistically confirmed by subjecting the final version to another factor analysis, having combined the participants from the winter and summer semester into a single sample.


Results

(Note on the English version of this paper: the statistical analysis was performed on the original German version of SES-Sim. All statistical parameters reported here are therefore valid for the German version, not for the translated items presented in Table 1 [Tab. 1].)

In the 33-item version presented to the students in the first round during the winter semester, nine factors with an eigenvalue > 1 appeared in the factor analysis (eigenvalue distribution: 8.61, 3.14, 2.65, 2.12, 1.57, 1.49, 1.22, 1.14, 1.02, .98 ...). Of these nine factors, five can be meaningfully interpreted in terms of content; they relate to the learning success, the actors, the premises, the tutors and the students. The remaining four factors either consist of a single item or have heterogeneous contents. On the five interpretable factors, 23 of the 33 items load uniquely, i.e. they load high on one factor and low on the others.
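The "unique loading" rule used here for item selection can be sketched in a few lines of code. The loading matrix and the cut-off values (at least .50 on the primary factor, below .30 on all others) are illustrative assumptions; the paper does not report the exact thresholds used.

```python
# Sketch of the "unique loading" selection rule: an item is kept if it
# loads high on exactly one factor and low on all others. The cut-offs
# and the loading matrix below are illustrative assumptions, not
# values reported in the paper.

def uniquely_loading_items(loadings, primary_cut=0.50, secondary_cut=0.30):
    """Return indices of items whose loading pattern is unambiguous."""
    selected = []
    for i, row in enumerate(loadings):
        abs_sorted = sorted((abs(l) for l in row), reverse=True)
        if abs_sorted[0] >= primary_cut and abs_sorted[1] < secondary_cut:
            selected.append(i)
    return selected

# Hypothetical loadings of four items on two factors:
loadings = [
    [0.72, 0.10],  # unique loading on factor 1 -> kept
    [0.15, 0.65],  # unique loading on factor 2 -> kept
    [0.48, 0.45],  # double loading -> dropped
    [0.25, 0.20],  # no substantial loading -> dropped
]
print(uniquely_loading_items(loadings))  # -> [0, 1]
```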

For the second round during the summer semester, an abridged version of the questionnaire was created, consisting of the 23 items which could be mapped unambiguously. The factor analysis of the 23-item version showed seven factors with an eigenvalue > 1 (eigenvalue distribution: 5.37, 2.33, 1.99, 1.53, 1.49, 1.31, 1.16, .95 ...). The first five correspond to the interpretable factors found in the first factor analysis; the other two factors are of heterogeneous content. Of the 23 items, 17 load uniquely on the factor to which they had been mapped in the first factor analysis. The other six items either show double loadings or load on a different factor than in the first round.

The final selection of items was carried out according to the following criteria: 16 of the 17 items which could be clearly mapped to one of the five interpretable factors in both factor analyses were included in the final version of the questionnaire. One of the 17 items was excluded despite clear factor loadings because of its strong semantic similarity to another item. (Both items relate to the transfer of skills to future professional practice; for reasons of economy, a decision was made not to measure the same thing twice.) Two items which, according to the criterion of a clear mapping in both factor analyses, should have been excluded were retained for substantive reasons: they measure the evaluation of the feedback, a central element of the course which should definitely be covered by the new questionnaire. Overall, 18 items were thus selected for the final version of the questionnaire.

The factor analysis for statistical confirmation of the 18-item version for the total sample confirmed the expected 5-factor structure (eigenvalue distribution: 4.39, 2.30, 2.07, 1.67, 1.28, .90 ...). The five factors together explain 65% of the variance. Table 1 [Tab. 1] shows the item parameters.
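The reported share of explained variance follows directly from the eigenvalues: with 18 standardised items the total variance is 18, so the five retained factors account for (4.39 + 2.30 + 2.07 + 1.67 + 1.28) / 18 ≈ 65%. As a minimal arithmetic check:

```python
# Arithmetic behind the 65% figure: with 18 standardised items the
# total variance equals 18, so the share explained by the five
# retained factors is the sum of their eigenvalues divided by 18.

eigenvalues = [4.39, 2.30, 2.07, 1.67, 1.28]  # final factor analysis
n_items = 18

explained = sum(eigenvalues) / n_items
print(round(explained, 2))  # -> 0.65
```

All retained factors also satisfy the Kaiser criterion (eigenvalue > 1) used throughout the analyses.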

The 18 items map clearly onto the five factors: all items load high on one factor and low on the other factors. The items are also discriminating; the item-scale correlations are generally well above the threshold of .30.

Table 2 [Tab. 2] shows the parameters of the five SES-Sim subscales. Mean agreement is high on all scales, indicating that students assessed the course positively. The internal consistencies range from satisfactory to good.
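Internal consistency of Likert subscales of this kind is usually indexed by Cronbach's alpha; the paper does not name the coefficient, so alpha is an assumption here, and the answer matrix below is invented for illustration, not data from the study.

```python
# Minimal sketch of Cronbach's alpha for a Likert subscale.
# alpha = k/(k-1) * (1 - sum of item variances / variance of sum score)
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of responses per item (same respondents, same order)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]       # per-person sum score
    item_var = sum(pvariance(vals) for vals in items)  # sum of item variances
    total_var = pvariance(totals)                      # variance of sum score
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical answers of five students to a three-item subscale (1-5):
items = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [5, 5, 2, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # -> 0.81
```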

Table 3 [Tab. 3] shows how the five subscales of SES-Sim relate to the overall assessment of the block course. The overall evaluation was given in German school grades on a scale from 1 (“very good”) to 5 (“poor”). Because the subscales of SES-Sim are not normally distributed (the distributions are right-skewed), non-parametric correlations (Spearman’s rank correlation coefficient) were calculated.
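Spearman's rank correlation, as used for Table 3, converts both variables to (tie-averaged) ranks and then computes a Pearson correlation on the ranks. A minimal sketch on invented data; note that because better grades are smaller numbers, high satisfaction corresponds to a negative coefficient:

```python
# Sketch of Spearman's rank correlation: rank both variables (ties get
# the average of their positions), then Pearson-correlate the ranks.
# The data below are invented for illustration.
from statistics import mean

def ranks(values):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical subscale means vs. school grades (1 = very good):
satisfaction = [4.8, 4.1, 3.2, 4.5, 2.9]
grade        = [1,   2,   3,   1,   4]
print(round(spearman(satisfaction, grade), 2))  # -> -0.97
```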

It turns out that the overall evaluation of the course is above all related to the perceived learning success: the stronger the students’ impression of having learned something, the more positively they judge the course. The evaluations of the performance of all participating groups (tutors, actors and the students themselves) as well as of the premises also relate significantly to the overall result. Thus, all dimensions of SES-Sim prove to be relevant to the overall evaluation of the course.


Discussion

SES-Sim is a measuring instrument which enables fast and economical surveys of student satisfaction with courses using simulations of doctor-patient interactions. Kirkpatrick and Kirkpatrick [11], who distinguish the four hierarchical evaluation levels of reaction, learning, behaviour and results, assign this form of evaluation to level one. Satisfaction measurement therefore provides information about the participants’ reaction to a training measure, but says nothing about the extent of knowledge gain, behaviour change, increased productivity or quality of work.

The high degree of satisfaction reflected in the high scale means corresponds to the experience of other universities, where students also rated this type of course very positively [5], [6], [7]. In addition to the high practical relevance of the simulations, the novelty of this methodology in medical school is also bound to play a role. The intensive supervision made possible by small group sizes, which other courses lack, is also likely to have a favourable impact on the evaluation.

The SES-Sim items relate directly to the central elements of courses using simulations of doctor-patient interactions, and care was taken to formulate their content clearly. The new questionnaire can therefore be regarded as having face validity.

Construct validation against an external criterion, however, would be difficult to perform. One reason is that the simulation method trains complex skills spanning multiple dimensions. For example, if one wanted to evaluate the learning progress in communicative competence, which according to Kirkpatrick and Kirkpatrick [11] represents an evaluation on the second of the four levels, the first step would be to identify the dimensions constituting this competence. In a second step, appropriate operationalisations would have to be found for each dimension. This approach would be laborious and fraught with great uncertainty; even determining whether all dimensions relevant to communicative competence have been identified would be difficult. For this reason, no construct validation of SES-Sim was conducted in this project.

Another limitation, which applies not only to this questionnaire but to all instruments for student evaluation of courses, is that student satisfaction only partially reflects the actual quality of a course [12]. It turns out, for example, that medical students assess courses on basic subjects less positively than courses on clinical subjects [13]. These judgements appear to be guided less by the didactic quality of a course than by its perceived usefulness for a future career. It is known in psychology that lectures and seminars on statistics are regularly judged less positively than classes in other subjects, because this subject runs counter to the preconceptions of psychology held by freshmen in particular [10]. Student evaluations depend heavily on the general popularity of a subject; less popular subjects are regularly assessed less positively than popular ones.

It would therefore be unwise to rely solely on student evaluation to assess the value of a course; additional criteria, for example tests of learning progress, should be used. But if teaching staff want a quick overview of how their classes are judged by the students and what could be improved, instruments such as SES-Sim can be helpful.


Acknowledgements

Our thanks go to Dr. Valentin Goede, Houda Hallal, Wencke Johannsen, Ortrun Kliche, Dr. Sabine Teschendorf and Christian Thrien, University Hospital Cologne and University of Cologne, for their helpful feedback on this paper.

PJ-Start-Block is a teaching project by the University of Cologne in collaboration with the following bodies: Medical Faculty: Office of the Dean of Studies and Inter-professional Skills Lab and Simulation Centre Cologne (Prof. Lehmann, Dr. Boldt, Dr. h.c. (RUS) Stosch), Institute for History and Ethics of Medicine (Prof. Karenberg, Prof. Schäfer), Institute for Pharmacology (Prof. Herzig, PD Dr. Matthes), Clinic and Policlinic for Psychosomatics and Psychotherapy (PD Dr. Albus, Prof. Obliers, Dr. Koerfer), Centre for Palliative Medicine (Prof. Voltz, PD Dr. Schiessl) and Faculty of Humanities: Institute for Comparative Research in Education and Social Sciences (Prof. Allemann-Ghionda)


Competing interests

The authors declare that they have no competing interests.


References

1.
Swanson DB, Stillman PL. Use of standardized patients for teaching and assessing clinical skills. Eval Health Prof. 1990;13:79-103.
2.
Ainsworth MA, Rogers LP, Markus JF, Dorsey NK, Blackwell TA, Petrusa ER. Standardized patient encounters. A method for teaching and evaluation. JAMA. 1991;266:1390-1396.
3.
Barrows HS. An overview of the uses of standardized patients for teaching and evaluating clinical skills. Acad Med. 1993;68(6):443-453.
4.
Ortwein H, Fröhmel A, Burger W. Einsatz von Simulationspatienten als Lehr-, Lern- und Prüfungsform. Psychother Psych Med. 2006;56:23-29.
5.
Bachmann C, Barzel A, Dunkelberg S, Schrom K, Ehrhardt M, van den Bussche H. Fachübergreifendes Kommunikationstraining mit Simulationspatienten: ein Pilotprojekt ins Curriculum. GMS Z Med Ausbild. 2008;25(1):Doc58. Available from: http://www.egms.de/static/de/journals/zma/2008-25/zma000542.shtml
6.
Koerfer A, Köhle K, Obliers R, Sonntag B, Thomas W, Albus C. Training und Prüfung kommunikativer Kompetenz. Aus- und Fortbildungskonzepte zur ärztlichen Gesprächsführung. Gesprächsforsch Z verbal Interaktion. 2008;9:34-78.
7.
Nikendei C, Zipfel S, Roth C, Löwe B, Herzog W, Jünger J. Kommunikations- und Interaktionstraining im psychosomatischen Praktikum: Einsatz von standardisierten Patienten. Psychother Psych Med. 2003;53:440-445.
8.
Schildmann J, Härlein J, Burchardi N, Schlögl M, Vollmann J. Breaking bad news: evaluation study on self-perceived competences and views of medical and nursing students taking part in a collaborative study. Support Care Cancer. 2006;14(11):1157-1161.
9.
Simmenroth-Nayda A, Weiß C, Chenot J-F, Scherer M, Fischer T, Kochen MM, Himmel W. Verbessern Anamneseübungen die kommunikativen Fähigkeiten von Studierenden? Ein Prä-Post-Vergleich. GMS Z Med Ausbild. 2007;24(1):Doc22. Available from: http://www.egms.de/static/de/journals/zma/2007-24/zma000316.shtml
10.
Diehl JM. Normierung zweier Fragebögen zur studentischen Beurteilung von Vorlesungen und Seminaren. Psychol Erz Unterr. 2003;50:27-42.
11.
Kirkpatrick DL, Kirkpatrick JD. Evaluating training programs: The four levels. 3rd ed. San Francisco: Berrett-Koehler Publishers; 2006.
12.
Kromrey H. Wie erkennt man „gute Lehre“? Was studentische Vorlesungsbefragungen (nicht) aussagen. Empirische Pädagogik. 1994;8:153-168.
13.
van den Bussche H, Weidtmann K, Kohler N, Frost M, Kaduszkiewicz H. Evaluation der ärztlichen Ausbildung: Methodische Probleme der Durchführung und der Interpretation von Ergebnissen. GMS Z Med Ausbild. 2006;23(2):Doc37. Available from: http://www.egms.de/static/de/journals/zma/2006-23/zma000256.shtml