gms | German Medical Science

GMS Journal for Medical Education

Gesellschaft für Medizinische Ausbildung (GMA)

ISSN 2366-5017

Critical appraisal of RCTs by 3rd year undergraduates after short courses in EBM compared to expert appraisal

Article: Evidence-Based Medicine

  • corresponding author B. Buchberger - University of Duisburg-Essen, Faculty of Economics and Business Administration, Institute for Health Care Management and Research, Essen, Germany
  • J.T. Mattivi - University of Duisburg-Essen, Faculty of Economics and Business Administration, Institute for Health Care Management and Research, Essen, Germany
  • C. Schwenke - SCO:SSiS, Schwenke Consulting: Strategies and Solutions in Statistics, Berlin, Germany
  • C. Katzer - University of Duisburg-Essen, Faculty of Economics and Business Administration, Institute for Health Care Management and Research, Essen, Germany
  • H. Huppertz - University of Duisburg-Essen, Faculty of Economics and Business Administration, Institute for Health Care Management and Research, Essen, Germany
  • J. Wasem - University of Duisburg-Essen, Faculty of Economics and Business Administration, Institute for Health Care Management and Research, Essen, Germany

GMS J Med Educ 2018;35(2):Doc24

doi: 10.3205/zma001171, urn:nbn:de:0183-zma0011712

This is the English version of the article.
The German version can be found at: http://www.egms.de/de/journals/zma/2018-35/zma001171.shtml

Received: March 21, 2017
Revised: November 25, 2017
Accepted: January 31, 2018
Published: May 15, 2018

© 2018 Buchberger et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Abstract

Introduction: An essential aim of courses in evidence-based medicine (EBM) is to improve skills in reading and interpreting medical literature adequately. Regarding the conceptual framework, it is important to consider different educational levels.

Aim: Our primary aim was to investigate the applicability of different instruments for the assessment of methodological study quality by 3rd year students after short courses in EBM. Our secondary outcomes were agreement with expert assessments and students' knowledge and competences.

Methods: We conducted four short courses in EBM of 90 minutes each for health care management and medical students focused on critical appraisal of the literature. At the end, the students assessed five publications about randomized controlled trials (RCTs) using five different instruments; the results were compared to expert assessments.

Results: In total, 167 students participated in our EBM courses. Students' assessments showed a non-systematic over- and underestimation of risk of bias compared to expert assessments, with no clear direction. Agreement with expert assessments ranged from 66% to over 80%. Across RCTs, evidence was found that the choice of instrument had an impact on agreement rates between expert and student assessments (p=0.0158). Three RCTs showed an influence of the instrument on the agreement rate (p<0.05 each).

Discussion: Our results contrast sharply with those of many other comparable evaluations. Reasons may include a lack of student motivation due to the compulsory nature of the courses, and the comparison with a reference standard in addition to self-ratings, which provided objectivity.

Conclusion: Undergraduates should become familiar with the principles of EBM, including research methods, and the reading of scientific papers as soon as possible. For a deeper understanding, clinical experience seems to be an indispensable precondition. Based on our results, we would recommend an integration of lectures about EBM and critical appraisal at least twice during studies and with greater intensity shortly before graduation.

Keywords: Critical appraisal, evidence-based medicine, randomized controlled trial, training, undergraduates


1. Introduction

The level of awareness of evidence-based medicine (EBM) is growing worldwide and the acceptance of its concept is increasing. In January 2007 the British Medical Journal conducted an online poll on the 15 most important medical milestones; EBM ranked seventh, right behind germ theory and the oral contraceptive pill but ahead of the computer and medical imaging [1], [2]. However, fostering an EBM culture and implementing it into practice requires the skills to identify and critically appraise the literature [3], [4], [5]. A certain knowledge of probability and statistics is also mandatory when accessing guidelines and evidence summaries, assessing marketing and advertising material from industry, interpreting the results of a screening test, or reading research publications to stay up to date with newly developed treatments. Furthermore, knowledge of biostatistics is necessary for analyzing numerical data, for informing patients about treatment risks, and, last but not least, for being prepared for the Internet literature of varying quality presented by patients [6], [7]. Actually, the question is no longer whether to teach EBM but how to teach it [8] and when. Apart from various educational methods, e.g. on-the-job training, problem-based or self-directed learning [9], the EBM concept may be taught as a whole or as some of its five steps separately, which are

1. asking a clinical question,
2. searching for the best evidence,
3. critical appraisal of the evidence,
4. applying evidence to patients,
5. self-assessment [8].

There are quite a few courses introducing database searching with support from librarians [9], and journal clubs as a format for training critical appraisal of the literature [9], [10], [11]. Measuring the increase in learners' competency from attending lectures in EBM requires objective measurements rather than self-ratings, which lead to considerable overestimation [12], [13].

Previous studies evaluated the impact of EBM lectures mostly by self-reports of participants [4], [11] or by question papers with multiple-choice questions [14]. In addition to self-assessments, the current study aims to achieve a certain objectivity by comparing students' assessments with a reference standard created by expert assessments.

Our primary aim was to investigate the applicability of different instruments for the assessment of methodological study quality by 3rd year students after short courses in EBM. Our secondary outcomes were agreement with expert assessments and students' knowledge and competences.


2. Methods

We included medical students directly after the preclinical phase at the faculty of medicine and students of the master’s program “health care management” at the faculty of economic sciences of the University of Duisburg-Essen, Germany. They were trained in the principles of EBM in four sessions of 90 minutes each which were held in the context of lectures on health economics (compulsory courses for medical students) by the Institute of Health Care Management and Research in the winter semester of 2013/2014. The number of participants was limited to 20 students per session. A glossary of terms for quick searches was handed out (see Attachment 1 [Attach. 1]).

2.1. Description of instruments and experts

Based on a Health Technology Assessment report, we focused on generic component instruments published after the year 2000 to assess the quality of evidence [15]. A component instrument is a tool to assess all aspects which may introduce bias, such as randomization or type of blinding. Five instruments were used [16], [17], [18], [19], [20], which differed in the number of domains, the questions within domains, and the answer options within a question (see Table 1 [Tab. 1]).

In a second step, the students were asked to rate the overall risk of bias on a five-point Likert scale (very low, low, moderate, high, very high), based on the outcomes of the risk assessment. Each student was asked to evaluate publications of five randomized controlled trials (RCTs). Assessing five RCTs with five instruments would lead to 25 combinations of RCTs and instruments. To reduce the effort for the students, each RCT was assessed with one randomly selected instrument only, resulting in five assessments per student. Permutation was used to ensure that each RCT was assessed with each instrument by the same number of students.
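The permutation scheme described above can be sketched as a simple Latin-square rotation. This is an illustrative sketch, not the study's actual allocation procedure; the instrument labels are placeholders for the five cited tools.

```python
# Illustrative Latin-square rotation: each student assesses all five RCTs,
# each with a different instrument, and within any block of five consecutive
# students every RCT-instrument combination occurs exactly once.
INSTRUMENTS = ["I1", "I2", "I3", "I4", "I5"]  # placeholder labels, not the real names
N_RCTS = 5

def assignment_for_student(student_index):
    """Return a {rct_index: instrument} mapping for one student by rotating
    the instrument list according to the student's position in the block."""
    shift = student_index % len(INSTRUMENTS)
    return {rct: INSTRUMENTS[(rct + shift) % len(INSTRUMENTS)]
            for rct in range(N_RCTS)}
```

Iterating `student_index` over a course group of 20 yields four complete blocks, so every instrument is applied to every RCT by the same number of students.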

Then, five experts assessed the five RCTs, each with a single, randomly selected instrument (see Table 2 [Tab. 2] for the assignment of instrument and RCT to expert). These assessments were used as the reference standard ('gold standard'). Experts had to fulfil the following criteria: independence from the University of Duisburg-Essen, more than 10 years of experience in critically appraising clinical studies, and a professional status including responsibility for assessments.

2.2. Training sessions

At the beginning, we presented the concept of EBM and its five steps in detail, by means of a specific clinical case. Essential terms were explained theoretically in a traditional teaching approach: internal and external validity, quality parameters such as randomization, concealment, blinding, drop-out/loss to follow-up, intention-to-treat (ITT) analysis, evidence levels of study designs, the PICO scheme, and different kinds of bias. In addition, the structure of scientific publications was explained, indicating the text passages where the description or discussion of validity aspects is most likely to be found. Based on this information, the students then did a practical exercise. As an aid, a slide was shown containing the glossary of terms for quick searches (see Attachment 1 [Attach. 1]).

The second session started with a repetition and consolidation of the knowledge gained. For this purpose, the participants were asked to split into groups and to allocate key words to quality parameters. After internal discussion, the groups used a flip-chart for poster presentation, during which the assignment of the key words to a quality aspect had to be explained to the others. For a further understanding, a simulation of randomization, blinding, concealment, stratification, drop-out/lost to follow-up and different types of analyses (e.g. ITT) was carried out thereafter.

In the third session, the component systems for the quality assessment were introduced and the single questions of the instruments were discussed. The participants then applied the systems to quasi-randomly selected RCTs on the most frequent chronic diseases as defined by the WHO, and the results were discussed.

In the final session, the students' skills were tested by applying the component systems to another set of recently published RCTs. Again, permutation was used to assign instruments to RCTs. The analyses of the present methodological study are based on these assessments.

2.3. Analysis

Frequencies of assessments given by medical students and health care management students were computed by RCT and instrument. In addition, agreement between a student rating and the expert's assessment was defined as "agreement +/-1": agreement was considered attained when the student rating was within +/-1 point of the expert's assessment. In a first analysis, generalized estimating equations were used to investigate the influence of the instrument, the student group and the RCT on the agreement rate. In a second analysis, the effect of instrument and student group was assessed by RCT. In addition, influences of "experience in critically appraising" and "command of English" were investigated. Analyses were performed with SAS 9.2 (SAS Institute Inc., Cary, NC, USA).
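The "agreement +/-1" endpoint can be illustrated in a few lines of Python. This is a minimal sketch of the definition only; the study's actual analyses were generalized estimating equation models run in SAS.

```python
# "Agreement +/-1": a student's overall risk-of-bias rating on the 5-point
# Likert scale (1 = very low ... 5 = very high) counts as agreeing with the
# expert if it lies within one point of the expert's rating.
def agrees_within_one(student_rating, expert_rating):
    return abs(student_rating - expert_rating) <= 1

def agreement_rate(student_ratings, expert_rating):
    """Proportion of student ratings within +/-1 point of the expert rating."""
    hits = sum(agrees_within_one(r, expert_rating) for r in student_ratings)
    return hits / len(student_ratings)

# Hypothetical example: the expert rated the risk of bias as 4 ("high"),
# so student ratings of 3, 4 or 5 count as agreement.
rate = agreement_rate([3, 4, 5, 2, 4, 4, 1, 5, 3, 4], expert_rating=4)  # 0.8
```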


3. Results

In total, 167 students took part in our EBM courses, of whom 142 were third-year undergraduate medical students and 25 were students of the master's program "health care management" (see Table 3 [Tab. 3]).

The investigation of our primary outcome, the applicability of different instruments for the assessment of methodological study quality, did not provide evidence regarding the comprehensibility of the instruments, the instructions for their use (if available), or the duration of the assessment procedure.

With regard to our secondary outcomes, we only report the significant findings on agreement with expert assessments. Figure 1 [Fig. 1] shows the percentages of medical students who gave each of the five ratings, by RCT and instrument. The black bar indicates the expert rating; e.g. for RCT 1 [21] and the IQWiG instrument, the expert assessed the study as having a high potential for bias, whereas for RCT 2 [22] the expert found a low potential for bias when using the IQWiG instrument. The height of each bar represents the proportion of students giving that rating, in percent; the size of the black bar thus indicates how many students gave the same rating as the expert. However, the assessments of the same study by different experts with different instruments showed some variability. To reflect this, the agreement rate was assessed by comparing the students' assessments with the expert assessments with a tolerance of +/-1. The majority of assessments showed agreement in a range of 66% to over 80%, an adequate rate.

Table 4 [Tab. 4] shows the agreement rates for "agreement +/-1" attained by student group in the various RCTs for the five instruments under investigation. Across RCTs, evidence was found that the choice of instrument had an impact (p=0.0158), while no evidence for an influence of student group or RCT was found (p=0.3856 and p=0.2425, respectively). By RCT, evidence was found for an influence of the instrument on the agreement rate in RCTs 1, 2 and 3 (p=0.0146, p=0.0263 and p<0.0001, respectively). For the endpoint "agreement +/-1", no evidence for an influence of "experience in critically appraising" or "command of English" was found. Of note, the level of detail in the description of quality aspects differed widely between publications, e.g. "the allocation code was concealed in sequentially numbered, opaque, sealed envelopes" [21] versus a simple mention of the term "randomization" without any description of methodological details [23].


4. Discussion

In the students' assessments we found a non-systematic over- and underestimation of risk of bias compared with the experts' assessments, with no clear direction. This corresponds to the answers given after the four short courses in EBM: 73% of the medical students rated their knowledge gain as weak or low. Our results contrast sharply with those of many other evaluations of EBM teaching for undergraduates. In a before-after comparison, Weberschock et al. [14] observed a significant increase in performance in 124 year-3 medical undergraduates in Germany, from a score of 2.37 points before the seminar to 9.85 points thereafter (99% CI [8.94; 10.77], p<0.001). In a controlled educational study, Ghali et al. [11] showed a significant difference in literature searching (p<0.002) and critical appraisal skills (p<0.0002) between third-year medical students in Boston attending either four sessions in EBM or traditional didactic teaching on various clinical topics. A systematic review [4] of the impact of teaching critical appraisal skills, including 10 clinical studies, as well as a recently published review [9] including 14 RCTs on methods of teaching medical trainees EBM, concluded that learner competencies increased post-intervention across all studies. One reason for these differences from our results may be the self-selection of highly motivated participants, in contrast to our students, who attended compulsory courses. In addition, the objectivity of the evaluations was not always strong, ranging from pure self-perceptions through question papers with multiple-choice questions [14] to validated tests. As stated by Fritsche et al. [24], who compared the effects of EBM lectures between experts, postgraduate physicians and medical students, an objective evaluation of courses in EBM may be difficult but is essential, because there is a poor correlation between the subjective perception of knowledge and its objective assessment [3].

In our study, students' assessments were compared with a reference standard created by experts, thereby guaranteeing a certain objectivity, although expert judgment remains subject to individual experience and perception; supplementary self-ratings of the increase in knowledge were also recorded.

As each expert rated each study with one instrument only, no assessment of the reliability of the experts was performed. It has to be mentioned here that most of the existing assessment tools have not been tested for validity and reliability [25]. With the exception of two [16], [20], this also applies to the assessment instruments included here. Evaluating the Cochrane Collaboration's risk of bias tool [16], Hartling et al. [26] found a wide range of inter-rater reliability between experts on individual domains, from slight to substantial (weighted κ=0.13-0.74). As the raters worked in the same institution and review team, the authors assume a much higher variability across different research units. As an explanation for the wide range of inter-rater agreement, the authors discuss the need for clear and detailed instructions to improve reliability [26].
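For context, a linearly weighted kappa of the kind reported by Hartling et al. can be sketched as follows. This is a generic illustration of the statistic for two raters on an ordinal scale, not the computation used in the cited study.

```python
from collections import Counter

def weighted_kappa(ratings_a, ratings_b, n_categories):
    """Linearly weighted kappa for two raters on an ordinal scale 0..k-1:
    1 - (observed weighted disagreement / chance-expected weighted disagreement)."""
    n = len(ratings_a)
    w = lambda i, j: abs(i - j) / (n_categories - 1)  # linear disagreement weight
    observed = sum(w(a, b) for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(w(i, j) * freq_a[i] * freq_b[j]
                   for i in range(n_categories)
                   for j in range(n_categories)) / (n * n)
    return 1 - observed / expected
```

Perfect agreement yields 1.0, agreement no better than chance yields 0, and values of 0.13-0.74, as cited above, span slight to substantial agreement beyond chance.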

As seen in Figure 1 [Fig. 1], the different instruments led the different experts to different assessments of the same study. Reasons may be the different domains, the additional information requested by some instruments, and the different depth of the questions. This may lead to discordant conclusions if certain aspects are asked for in some instruments but not in others. To cover this uncertainty, we used a less strict definition of agreement, such that an assessment within +/-1 point was still regarded as agreement. Selection bias in the form of publication bias and reporting bias can be assumed for all of the publications mentioned above which reported clearly positive effects of EBM lectures.

The extent and comprehensibility with which single quality aspects were described within the publications varied widely, sometimes making the task impossible for our inexperienced undergraduate students and also distorting the analyses. It seems striking that the descriptions in publications in higher-ranked journals (RCT 3 and 4 [27], [23]), which require compliance with special statements concerning reporting quality, were less understandable for our students than those in publications with a smaller impact. In this context it is worth pointing out that critical appraisal always includes subjectivity through interpretation, and that scoring systems only appear to be objective because of an explicit or implicit weighting without any empirical basis [28].

The use of five different assessment instruments could have been too complex. On the other hand, the repetition of terms led to greater familiarity, and we addressed only step 3 of the EBM concept in detail, thereby focusing the knowledge transfer very strongly. In particular, training step 1, formulating a research question which can be operationalized, may take a substantial amount of time for undergraduate students without experience in scientific work. This also applies to step 2, literature searches in electronic databases.

Teaching critical appraisal separately, as we did, is very common and also known as a journal club, meaning that participants have to read and critically appraise articles under the guidance of an expert [4]. To keep up to date with new evidence, clinicians have to go through many articles in everyday practice, and to do this effectively, training is necessary [29]. Therefore, and as suggested as a format for teaching EBM under certain conditions [10], we focused on step 3 of the EBM concept. However, this evokes other difficulties, as critical appraisal integrates knowledge of epidemiology, information science and biostatistics [9].

Although lectures in epidemiology and psychology, including statistics, are compulsory in the first two years in Germany, 69% of our medical students reported weak or low knowledge of statistics and/or epidemiology, showing that attitude and knowledge do not spread in the same manner and that teaching EBM must address the needs of different learners [8]. Perhaps using steps 1 and 2 of the concept as an introduction is more appropriate to foster a scientific mindset. Alternatively, and in order to escape the charge of isolation from clinical practice [30], teaching step 4, applying the evidence to individual cases, may be considered. However, this cannot succeed with undergraduates when practical experience is lacking and the concrete objective is only a vague idea. This is underlined by long-term experience from Duke and Stanford, which resulted in a curriculum requiring clinical training prior to research experience, because students were then better prepared to understand the clinical and translational potential of their research projects [31]. For an open mind and a better assessment, it would also be beneficial if students were familiar with the whole development process of a clinical trial and the impact of its single aspects [32].

In order to obtain and apply the best available evidence in clinical decision making, skills in finding and critically appraising medical literature are an essential prerequisite [13]. Without background knowledge in methodology and statistics, physicians are at a high risk of misinterpreting evidence, leading to medical errors and adverse effects [28].

Swift et al. [6] investigated the views of 130 physicians on training in statistics and its need in daily practice. As students, more than half of the participants (60%) had underestimated the relevance of these subjects to medical practice, whilst the majority (73%) came to realize their impact on their career over time. Despite the increasing conviction of the relevance of EBM, there is evidence of continuing knowledge gaps in basic statistical concepts among practicing physicians and medical researchers [32]. Likewise, a sound knowledge of key methodological EBM terms and sources seems to be lacking among the majority of health personnel, including physicians, translational researchers, nurses and other health professionals [12], [13]. To remedy this situation, students must already be helped to perceive these subjects as important to clinical practice [32]. It is hoped that future physicians will better appraise research findings and contribute to furthering the clinical field by conducting research [33].


Acknowledgement

We thank Tobias Goeke, Angelika Gohlke, Anja Hagen, Beate Lux, and Monika Nothacker for their expert assessments.


Ethical approval

The seminar content and structure were approved by the office of the Dean. There was no contact with patients.


Competing interests

The authors declare that they have no competing interests.


References

1. Godlee F. Milestones on the long road to knowledge. BMJ. 2007;334:s2. DOI: 10.1136/bmj.39062.570856.94
2. Meskó B. Medical milestones on the long road to knowledge [Internet]. 2007 [cited 2015 Sep 9]. Available from: http://scienceroll.com/2007/01/18/medical-milestones-on-the-long-road-to-knowledge/
3. Khan KS, Awonuga A, Dwarakanath LS, Taylor R. Assessments in evidence-based medicine workshops: loose connection between perception of knowledge and its objective assessment. Med Teach. 2001;23(1):92-94. DOI: 10.1080/01421590150214654
4. Norman GR, Shannon SI. Effectiveness of instruction in critical appraisal (evidence-based medicine) skills: a critical appraisal. Can Med Assoc J. 1998;158(2):177-181.
5. Horsley T, Hyde C, Santesso N, Parkes J, Milne R, Stewart T. Teaching critical appraisal skills in healthcare settings. Cochrane Database Syst Rev. 2011;(11):CD001270. DOI: 10.1002/14651858.CD001270.pub2
6. Swift L, Miles S, Price GM, Shepstone L, Leinster SJ. Do doctors need statistics? Doctors' use of and attitudes to probability and statistics. Stat Med. 2009;28(15):1969-1981. DOI: 10.1002/sim.3608
7. Altman DG, Bland JM. Improving doctors' understanding of statistics. J Royal Stat Soc. 1991;154(2):223-267. DOI: 10.2307/2983040
8. Straus SE, Green ML, Bell DS, Badgett R, Davis D, Gerrity M, Ortiz E, Shaneyfeldt TM, Whelan C, Mangrulkar R; Society of General Internal Medicine Evidence-Based Medicine Task Force. Evaluating the teaching of evidence based medicine: conceptual framework. BMJ. 2004;329(7473):1029-1032. DOI: 10.1136/bmj.329.7473.1029
9. Ilic D, Maloney S. Methods of teaching medical trainees evidence-based medicine: a systematic review. Med Educ. 2014;48(2):124-135. DOI: 10.1111/medu.12288
10. Coomarasamy A, Khan KS. What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review. BMJ. 2004;329(7473):1017. DOI: 10.1136/bmj.329.7473.1017
11. Ghali WA, Saitz R, Eskew AH, Gupta M, Quan H, Hershman WY. Successful teaching in evidence-based medicine. Med Educ. 2000;34(1):18-22. DOI: 10.1046/j.1365-2923.2000.00402.x
12. Young JM, Glasziou P, Ward JE. General practitioners' self rating of skills in evidence based medicine: validation study. BMJ. 2002;324(7343):950-951. DOI: 10.1136/bmj.324.7343.950
13. Ugolini D, Casanova G, Ceppi M, Mattei F, Neri M. Familiarity of physicians, translational researchers, nurses, and other health professionals with evidence-based medicine terms and resources. J Cancer Educ. 2014;29(3):514-521. DOI: 10.1007/s13187-014-0631-0
14. Weberschock TB, Ginn TC, Reinhold J, Strametz R, Krug D, Bergold M, Schulze J. Change in knowledge and skills of Year 3 undergraduates in evidence-based medicine seminars. Med Educ. 2005;39(7):665-671. DOI: 10.1111/j.1365-2929.2005.02191.x
15. Dreier M, Borutta B, Stahmeyer J, Krauth C, Walter U. Vergleich von Bewertungsinstrumenten für die Studienqualität von Primär- und Sekundärstudien zur Verwendung für HTA-Berichte im deutschsprachigen Raum [Comparison of instruments assessing the study quality of primary and secondary studies for use in HTA reports in German-speaking countries]. Schriftenreihe Health Technology Assessment, Bd. 102. Köln: Deutsches Institut für Medizinische Dokumentation und Information (DIMDI); 2010.
16. Higgins JP, Green S. Cochrane Handbook for Systematic Reviews of Interventions, Version 5.1.0. Oxford, UK: The Cochrane Collaboration; 2011.
17. Hill CL, La Valley MP, Felson DT. Secular changes in the quality of published randomized clinical trials in rheumatology. Arthritis Rheum. 2002;46(3):779-784. DOI: 10.1002/art.512
18. Huwiler-Müntener K, Jüni P, Junker C, Egger M. Quality of reporting of randomized trials as a measure of methodologic quality. JAMA. 2002;287(21):2801-2804. DOI: 10.1001/jama.287.21.2801
19. IQWiG. Früherkennungsuntersuchung von Sehstörungen bei Kindern bis zur Vollendung des 6. Lebensjahres. Abschlussbericht [Screening for visual impairment in children up to the age of 6 years. Final report]. IQWiG-Berichte Nr. 32. Köln: IQWiG; 2008.
20. Thomas BH, Ciliska D, Dobbins M, Micucci S. A process for systematically reviewing the literature: providing the research evidence for public health nursing interventions. Worldviews Evid Based Nurs. 2004;1(3):176-184. DOI: 10.1111/j.1524-475X.2004.04006.x
21. Balegar VK, Kluckow M. Furosemide for packed red cell transfusion in preterm infants: a randomized controlled trial. J Pediatr. 2011;159(6):913-918. DOI: 10.1016/j.jpeds.2011.05.022
22. Vlug MS, Wind J, Hollmann MW, Ubbink DT, Cense HA, Engel A, Gerhards MF, van Wagensveld BA, van der Zaag ES, van Geloven AA, Sprangers MA, Cuesta MA, Bemelman WA; LAFA study group. Laparoscopy in combination with fast track multimodal management is the best perioperative strategy in patients undergoing colonic surgery. Ann Surg. 2011;254(6):868-875. DOI: 10.1097/SLA.0b013e31821fd1ce
23. Mega JL, Braunwald E, Wiviott SD, Bassand JP, Bhatt DL, Bode C, Burton P, Cohen M, Cook-Bruns N, Fox KAA, Goto S, Murphy SA, Plotnikov AN, Schneider D, Sun X, Verheugt FW, Gibson CM; ATLAS ACS 2–TIMI 51 Investigators. Rivaroxaban in patients with a recent acute coronary syndrome. N Engl J Med. 2012;366(1):9-19. DOI: 10.1056/NEJMoa1112277
24. Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer HH, Kunz R. Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ. 2002;325(7376):1338-1341. DOI: 10.1136/bmj.325.7376.1338
25. Hartling L, Hamm M, Milne A, Vandermeer B, Santaguida PL, Ansari M, Tsertsvadze A, Hempel S, Shekelle P, Dryden DM. Validity and inter-rater reliability testing of quality assessment instruments. (Prepared by the University of Alberta Evidence-based Practice Center under Contract No. 290-2007-10021-I.) AHRQ Publication No. 12-EHC039-EF. Rockville, MD: Agency for Healthcare Research and Quality; 2012. Available from: www.effectivehealthcare.ahrq.gov/reports/final.cfm
26. Hartling L, Ospina M, Liang Y, Dryden DM, Hooton N, Krebs Seida J, Klassen TP. Risk of bias versus quality assessment of randomised controlled trials: cross sectional study. BMJ. 2009;339:b4012. DOI: 10.1136/bmj.b4012
27. Martins N, Morris, Kelly PM. Food incentives to improve completion of tuberculosis treatment: randomised trial in Dili, Timor-Leste. BMJ. 2009;339:b4248. DOI: 10.1136/bmj.b4248
28. Buchberger B, von Elm E, Gartlehner G, Huppertz H, Antes G, Wasem J, Meerpohl JJ. Assessment of risk of bias in controlled studies. Bundesgesundheitsbl Gesundheitsforsch Gesundheitsschutz. 2014;57(12):1432-1438. DOI: 10.1007/s00103-014-2065-6
29. Kulier R, Gee H, Khan K. Five steps from evidence to effect: exercising clinical freedom to implement research findings. BJOG. 2008;115:1197-1202. DOI: 10.1111/j.1471-0528.2008.01821.x
30. Kulier R, Gülmezoglu AM, Zamora J, Plana MN, Carroli G, Cecatti JG, Germar MJ, Pisake L, Mittal S, Pattinson R, Wolomby-Molondo JJ, Bergh AM, May W, Souza JP, Koppenhoefer S, Khan KS. Effectiveness of a clinically integrated e-learning course in evidence-based medicine for reproductive health training. JAMA. 2012;308(21):2218-2225. DOI: 10.1001/jama.2012.33640
31. Laskowitz DT, Drucker RP, Parsonnet J, Cross PC, Gesundheit N. Engaging students in dedicated research and scholarship during medical school: the long term experiences at Duke and Stanford. Acad Med. 2010;85(3):419-428. DOI: 10.1097/ACM.0b013e3181ccc77a
32. Miles S, Price GM, Swift L, Shepstone L, Leinster SJ. Statistics teaching in medical school: opinions of practising doctors. BMC Med Educ. 2010;10:75. DOI: 10.1186/1472-6920-10-75
33. Vereijken MW, Kruidering-Hall M, de Jong PG, de Beaufort AJ, Dekker FW. Scientific education early in the curriculum using a constructivist approach on learning. Perspect Med Educ. 2013;2(4):209-215. DOI: 10.1007/s40037-013-0072-1
34. Merenstein D, Murphy M, Fokar A, Hernandez RK, Park H, Nsouli H, Sanders ME, Davis BA, Niborski V, Tondu F, Shara NM. Use of a fermented dairy probiotic drink containing Lactobacillus casei (DN-114 001) to decrease the rate of illness in kids: the DRINK study. A patient-oriented, double-blind, cluster-randomized, placebo-controlled, clinical trial. Eur J Clin Nutr. 2010;64(7):669-677. DOI: 10.1038/ejcn.2010.65