Learning In Practice

Transferability of principles of evidence based medicine to improve educational quality: systematic review and case study of an online course in primary health care

BMJ 2003;326:142 (Published 18 January 2003) doi: https://doi.org/10.1136/bmj.326.7381.142
  1. Trisha Greenhalgh (p.greenhalgh@pcps.ucl.ac.uk), professor of primary health careᵃ
  2. Peter Toon, senior clinical lecturerᵃ
  3. Jill Russell, non-clinical lecturerᵃ
  4. Geoff Wong, clinical lecturerᵃ
  5. Liz Plumb, educational researcherᵇ
  6. Fraser Macfarlane, lecturer in health care managementᶜ

  ᵃ Department of Primary Care and Population Sciences, University College London Medical School, London N19 5LW
  ᵇ Marketing Research and Evaluation Unit, University College London, London WC1
  ᶜ School of Management, University of Surrey, Guildford, Surrey GU2 7XH

  Correspondence to: T Greenhalgh

    The success of evidence based medicine has led to pressure to make medical education more evidence based. Greenhalgh and colleagues tested the transferability of these principles when developing a postgraduate course

    Evidence based medicine advocates a structured and systematic approach to clinical decision making using a five point sequence (box 1). The same principles, linked to audit and performance review, have been used extensively in policy making1 2 and quality improvement initiatives3 4 in health care. They have also been advocated as an approach to improving the quality of education in general,5 and medical education in particular,6 7 though others have strongly rejected such approaches.8 We explored the extent to which the five stage evidence based medicine sequence can be applied to developing and implementing quality standards in online education.

    Summary points

    It is widely believed that the education of health professionals should be more evidence based

    Good randomised controlled trials in education (especially postgraduate education) are hard to find

    A systematic review of evidence on online education found only one relevant randomised controlled trial

    Independent qualitative analysis of the experiences of students and staff on our online course was invaluable when testing the validity and transferability of published research evidence and quality standards

    Evidence in education should include not only formal, research derived knowledge but also tacit knowledge (informal knowledge, practical wisdom, and shared representations of practice)

    Box 1: Sequence of evidence based medicine

    Frame a focused question

    Search thoroughly for research derived evidence

    Appraise the evidence for its validity and relevance

    Seek and incorporate the user's values and preferences

    Evaluate effectiveness through planned review against agreed success criteria

    Aims

    As the developers of an online degree course for health professionals, we aimed to:

    • Evaluate the use of an evidence based medicine framework in an educational development setting

    • Develop robust quality standards for the delivery of an online postgraduate course in primary health care

    • Draw general lessons about the transferability of the principles of evidence based medicine to educational practice.

    Research team

    We are a multidisciplinary, research oriented academic team comprising four general practitioners (one with a strong interest in information technology), a social scientist, a psychologist, and an educationalist; we work closely with an academic nursing unit. We had previously taught in undergraduate medicine, postgraduate short courses, and work based training, but we were new to the online environment.

    Educational context

    We established a part time MSc degree in primary health care at University College London in 1999. The course is entirely online except for an initial one week summer school. It caters for a diverse student group of general practitioners, public health physicians, community nurses, pharmacists, and managers drawn from the United Kingdom and mainland Europe, most of whom sign up to the course to achieve goals such as developing new services, establishing local research programmes, or developing and evaluating teaching and training initiatives.

    When we embarked on this project in 1997, University College London had a policy of discouraging distance learning because of concerns about quality. Hence, we found ourselves a test case for wider issues concerning the credibility and feasibility of online learning at our institution.

    Methods

    The study comprised three overlapping phases: secondary research, primary research, and synthesis, as shown in figures 1 and 2.

    Fig 1 Methods used in preparation of the quality framework

    Secondary research phase

    We did a systematic review of the literature on online education. Following the sequence in box 1, we framed focused questions and tried to select research designs, search strategies and data sources appropriate to each. Although we initially constructed these questions in terms of how the course affected student performance, our final list of questions was as follows:

    • What is a high quality online learning experience for postgraduate students of primary health care?

    • How can we provide that experience consistently and efficiently?

    • How can we reliably demonstrate the quality of our course to internal critics and external evaluators?

    • How can we best support, train, and supervise our staff?

    • How will we know when we are failing?

    • How can we improve our performance year on year?

    We applied a formal search strategy to online databases (notably ERIC (Educational Resources Information Centre) and PsycInfo). We also searched books, grey literature (especially dissertations and internal reports), and key journals by hand. We gained additional insights from attending conferences and courses (including online Open University courses), joining academic email lists, and making direct contact with experts in the field. Through these, we encountered many examples of existing online programmes, which we considered as case studies.

    We examined literature on the development of audit and quality assurance programmes from industry and the service sector (for example, ISO 9000, Investors in People, Royal College of General Practitioners Quality Practice Award) and identified official guidelines on distance education produced by the Quality Assurance Agency in the United Kingdom and comparable publications from other countries.

    We applied standard critical appraisal checklists to guidelines.9 For qualitative research papers (where difficulties in appraisal are well described10), we used a range of checklists7 11-13 to guide in-depth discussion of published studies and prompt contact with authors where necessary. These strategies and sources are described fully on bmj.com (appendix 1).

    Fig 2 Methods used to synthesise the quality framework

    Primary research phase

    An independent researcher (LP) from a separate department did an extensive primary research study of the experiences of students and staff on our course using a range of qualitative methods (see bmj.com for full details). Interviews and focus groups were audiotaped, transcribed in full, and analysed for themes—that is, LP developed a preliminary taxonomy of areas of concern, flagged critical incidents, and suggested explanations for particular behaviours or phenomena. LP periodically presented these themes, together with her impressions from shadowing and observing participants, to staff and students and modified the themes in response to feedback and discussion.

    Synthesis phase

    We held regular review meetings to consider the emerging results of the secondary and primary research, reframe questions where necessary, and formally reflect on our role as both researchers of, and participants in, the project. Over several such meetings, we developed and refined a first draft of a detailed quality framework for our course. We circulated this draft to our students, external examiners, and around 30 colleagues within and outside the college, some of whom were selected for their critical views of online learning. We presented the second draft at academic meetings and conferences and again invited feedback, but in practice made little subsequent modification.

    We tested the transferability of our quality framework to other courses, institutions, and contexts. One of us (FM) modified it at the University of Surrey to provide draft quality standards for placing course materials on line and running optional email discussions for students in conventionally taught MSc programmes. We also used a modified version of the framework in the development of a series of CD Rom based continuing professional development modules for general practitioners (see http://www.apollobmj.com/).

    Results of systematic review

    We found only one randomised controlled trial examining what works in online education in our subject. This was a small trial on the effect of online postgraduate programmes in primary health care.1 The full results of our search are available on bmj.com. Of around 300 primary research articles and 700 reviews and editorials, we rejected around 95% as irrelevant or methodologically poor. Many original research papers had not been peer reviewed (some had been published exclusively on the internet), and most were limited to technical details or superficial case description. The studies of undergraduate medical education were the only ones whose sampling frames, interventions, and outcomes could be meaningfully compared in a summary table, and we have published a systematic review of these studies.14

    Of the 15 guidelines for online education, around half were relevant and potentially transferable, but validity was hard to assess. The recommendations from the UK Quality Assurance Agency generally seemed sensible, but the evidence base was not clear and there was little advice on dissemination, implementation, or local adaptation (see appendix 2 on bmj.com for details). Several US guidelines seemed more robust and flexible, but most of these still took an institutional focus, and the practical lessons for people developing courses were unclear.

    Formulating a quality framework

    Combining our diverse secondary and primary sources to produce a clear vision for quality, a succinct set of standards, and a set of measurable success criteria for our own course was difficult and complex. It required repeated discussion and revisiting of concepts. Our primary research often provided rich case examples that enabled us to make sense of (or challenge) the published recommendations. Critical incidents proved particularly useful as triggers for action. Examples of all these and the final version of the quality framework are given on bmj.com (appendices 3 and 4).

    Despite the plethora of papers and guidelines on online education, we found no simple recipes for developing evidence based quality standards in our educational project. We repeatedly found that reflection on practical experience (rather than, say, the application of critical appraisal checklists) enabled us to test the validity and transferability of published evidence to our course.

    Applicability of evidence based medicine

    We believe there are four key differences between evidence based education and conventional evidence based medicine. Firstly, many questions relating to clinical practice fall into a simple and logical taxonomy (such as prevalence, prognosis, or therapy). Each type of question has a corresponding preferred research design (survey, cohort study, randomised controlled trial, etc) with accepted criteria for assessing validity (the critical appraisal checklist13). Educational questions have a more complex taxonomy, a less direct link with particular preferred study designs, and no universally accepted criteria for assessing validity.15 16

    Secondly, most of the definitive research questions generated for this project were qualitative—that is, they began with exploratory, open ended stems such as how or what. In the early stages of the project, we constructed questions in the format used in evidence based medicine (population, intervention, comparison, and outcome)—for example, “What proportion of students will pass an exam if all the teaching is online compared with the proportion that would pass if taught conventionally?” Implicit in this question is a behaviourist model of learning, in which the students are viewed as a population sample; the online course as an intervention; conventional education as the comparison; and student performance as the outcome. The validity of such assumptions is highly questionable, especially when (as in many postgraduate and continuing professional development courses) the goals of the course are humanistic rather than behaviourist (professional development, motivation, support, confidence) and (as with most part time adult learners) there is wide variation between students in terms of background, personal goals, life commitments, learning styles, ongoing circumstances, and so on.17 18

    Thirdly, we found the online educational literature difficult to access and navigate. This is unlikely to be wholly due to our lack of technical familiarity with the databases, since the user interface for the ERIC and PsycInfo databases is identical to that for Medline. Some key search terms (e-learning, computer mediated communication) have multiple synonyms, and others (quality, performance) have multiple meanings. Given the diverse nature of qualitative research, search filters intended to retrieve such studies are in reality neither sensitive nor specific.10
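    In the information retrieval sense used here, the sensitivity and specificity of a search filter have standard definitions, which make this limitation concrete: a filter for qualitative studies performs well only when both proportions are high.

    \[
    \text{sensitivity} = \frac{\text{relevant records retrieved}}{\text{all relevant records in the database}},
    \qquad
    \text{specificity} = \frac{\text{irrelevant records correctly excluded}}{\text{all irrelevant records in the database}}
    \]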

    Box 2: Suggested sequence for evidence based educational development

    Frame a detailed question that fully reflects the context and complexity of the course being considered

    Search thoroughly for research derived evidence

    Appraise the evidence for its validity and relevance

    Seek practical know-how through personal contacts and networking

    Undertake rigorous, in-depth primary research on the experience of staff and students

    Integrate these diverse sources iteratively into a draft development plan

    Evaluate effectiveness through planned review against agreed success criteria

    Fourthly, and perhaps most importantly, we found that educational development requires practical wisdom and not merely research evidence. Although the theoretical knowledge we gained about online learning from published guidelines often scored well on objective measures of quality, it served to confuse as much as inform us. But the practical knowledge that we gleaned from conferences, academic mailing lists, expert contacts, Open University courses, and our portfolio of case examples from education, health care, and industry was invaluable in converting evidence into action.

    Conclusions

    Hammersley has accused the evidence based medicine movement of “making false and dangerous promises” for the transferability of its methods to education. In particular, he claims, it does not address how research evidence should be combined with other kinds of evidence in making practical judgments in educational development.8 We agree that the educational community must take care not to climb uncritically on the evidence based medicine bandwagon in the politically fashionable drive towards a focused and scientific approach. We propose an alternative decision making sequence (box 2) that better reflects the reality of evidence based education.

    In conclusion, the linear and formulaic link between evidence and practice implicit in evidence based medicine proved inadequate for the complexities of educational research. Conceptual models designed for multifaceted problems, which may be more appropriate, include cognitive restructuring theory,19 complexity (non-linearity) theory,20 activity theory (the relation between course developers, contexts, and tools),21 and the sharing of tacit knowledge in informal communities of practice.22

    Acknowledgments

    Further details of the course described in this paper can be viewed at www.ucl.ac.uk/openlearning/msc/index.html. We thank the following people for help with developing our quality framework: Gilly Salmon, Robin Mason, Ann Rossiter, Lewis Elton, Pat Cryer, David Perry, Gene Feder, Marcia Rigby, Angela Chesser, Ann Leyland, Will Coppola, and all students on our course. We also thank the referee, Janet Grant, for helpful and constructive comments.

    Contributors: TG, JR, and PT conceptualised the study. LP conducted the independent qualitative fieldwork including transcribing and analysing the data. TG, GW, and JR did the literature search. FM designed the format of the quality framework, which JR adapted and to which all authors subsequently contributed. TG wrote the paper and will act as guarantor.

    Footnotes

    • Funding: This project was funded partly via an educational development grant from the University of London External System New Technologies Fund.

    • Competing interests: None declared.

    • Further details of the systematic review and the quality framework are given on bmj.com.
