Introduction
Health professions education (HPE) research can serve many purposes, including, but not limited to, influencing education practice [1]. Researchers who seek to influence education practice are frequently challenged in doing so [2]. These challenges include research evidence never reaching the right people and research evidence being seen as lacking relevance or utility [3, 4]. In HPE there have been many calls to improve the translation of evidence into practice [5, 6], which have been linked to concepts from implementation science [7]. Implementation science employs theoretical frameworks and research methods to 1) identify the nature and magnitude of research-practice gaps; 2) identify the causes of those gaps, both individual and organizational; and 3) design and test the effectiveness of theory-driven, tailored interventions to reduce those gaps [8]. However, the limited exploration of implementation science in HPE has tended to treat translation primarily as a matter of how best to expose end-users to evidence [8].
In this paper, we take a different perspective on implementation science by exploring how decisions are made in HPE. We outline how this approach can help researchers present their evidence in ways that can influence decision-making relevant to their knowledge claims (the WHAT of implementation). We end with a call to the HPE community to explore decision-informed knowledge translation [7, 8] as an implementation science approach that can better connect education scholarship to education practice (the HOW of implementation). With these objectives in mind, we present this manifesto to advance thinking on how evidence and decision-making intersect in HPE, and to call for a deeper consideration of implementation science in our field.
Discussion
We drew on our direct experience of HPE in Canada and the UK, our many intersections with programs and schools around the world, and our knowledge of the field as a whole in preparing this paper. Given that the specifics vary from context to context, we might consider much of the evidence that we generate as ‘middle-range evidence’ (with a nod to Merton’s concept of ‘middle-range theory’), in that it is relevant to a particular set of contexts but not necessarily to others.
We have argued that evidence seeking to influence educational practice should be targeted at the appropriate decision-making levels, stakeholders, and contexts. A systematic review of the medical literature for connections between evidence and decision-making in implementation science is beyond the scope of this paper; suffice it to say that these connections have been made, albeit in many different ways and at different levels. While individual clinical decision-making seems to dominate much of the literature, it has been observed in medicine that different kinds of decisions are made at different levels [23], that different stakeholders are involved in different kinds of decisions [24], and that evidence needs to be meaningful to them [23, 25]. In this regard, the clinical and HPE contexts are arguably similar. However, we see two major differences. Firstly, it has been argued that the evidence base for many HPE practices is less well developed than in healthcare [2]. Mapping new evidence to its relevant HPE contexts and decision-making levels early on could help clarify which evidence applies in a given context. Secondly, given that the stakes are often higher in healthcare practice than in HPE, health professions educators may see less of an imperative to change in the face of the evidential claims they encounter [2].
Our recommendations have focused on how researchers might present their evidence to better influence their target audiences. We acknowledge, however, that implementation is a broader concern and that other stakeholders can play an important role. For instance, HPE leaders and those involved at different levels of governance could engage more critically with the role that evidence plays in their decisions and be more vocal in supporting researchers’ implementation efforts. Entities that shape the HPE research environment, namely graduate training programs, scholarly journals and conferences, and research funding agencies, could also play a more active role in aligning research with the levels of decision-making at which evidence can be translated into practice.
We should be clear that there is no universal method or algorithm for doing this; the process is complex and probabilistic at best. Nevertheless, we can consider strategies congruent with an integrated implementation approach [26, 27], which requires 1) that the right stakeholders be engaged in the research process; and 2) that stakeholders be involved from the outset and throughout that process. Guidelines on how such an integrated approach might work can be found in Tab. 1 of the Electronic Supplementary Material. We do not mean these guidelines to be prescriptive; rather, they should serve as food for thought when engaging stakeholders in decision-making, not least because participatory and collaborative approaches must, by definition, be grounded in adaptation and tailoring. These suggestions can and should be the subject of empirical examination to test their effectiveness in enhancing decision-making. More specifically, we have proposed that researchers should seek to identify which stakeholders are involved, at which levels, and for what types of decisions relevant to the evidence they are generating. These will differ for, say, the implementation of a new teaching strategy compared with a new admissions procedure or a curriculum overhaul. While there are different ways in which this might be approached, techniques from activity theory [28], cognitive task analysis [29], and logic modeling [30] could help in this regard. Realist inquiry, with its focus on ‘what works for whom in what contexts’, could also be useful [31].
Connecting evidence to decision-making should allow for better translation and replication, as well as for understanding how the alignment between evidence and decision-making may differ across levels and contexts. Outcome evaluation is undoubtedly the most defensible way to justify the usefulness of the approach and the resources used to effect change. This requires that key level-specific outcomes be identified early and that methods be selected that can evaluate those outcomes. Implementation science researchers have developed numerous evaluation models and frameworks [32, 33]. Tab. 2 provides recommendations for practice.
Table 2 Application of integrated implementation approaches to three aspects of decision-making (DM) in HPE

| Aspect of DM in HPE | Engage the right stakeholders | Engage stakeholders from the outset and throughout |
|---|---|---|
| General principles | Identify and engage the right stakeholders for the evidence that is being implemented and its optimal point(s) of influence; make sure stakeholder engagement is meaningful, not tokenistic and/or only meeting researcher needs; ensure transparency and accountability in stakeholder selection | Engage stakeholders as early in the research process as possible; ensure that iterative and bidirectional feedback between stakeholders and researchers is encouraged; ensure transparency and accountability in how stakeholders are engaged; engage stakeholders in identifying target implementation audiences, what messages should be transferred, in what ways, by whom, and with what intended impacts |
| Levels of decision-making | Identify stakeholders based on the level of DM and the kinds of evidence they use in their DM; decide who else should be involved and in what ways; ensure stakeholder engagement is meaningful and valuable | Invite stakeholders to decide which stages of the research process they will participate in and how their participation will help them and the research; seek stakeholder feedback at every stage on how the research relates to DM and how it might be adjusted to be more relevant to decision-makers; enable stakeholder participation through supports, incentives, and/or recognition meaningful to them; collaborate in designing and executing a knowledge translation strategy that aligns with their DM processes |
| Context of decision-making | Engage stakeholders from the contexts from which the evidence was generated and where the evidence will be implemented; explore how contextual variation is (or might be) seen by stakeholders as a factor in who is involved in DM and how | Encourage stakeholder feedback from a range of similar appropriate DM contexts at each stage of the research to account for contextual variation; explore with stakeholders how contexts can change the DM implications of the research; explore research limitations with stakeholders; design and adjust knowledge translation activities to be meaningful and accessible in different contexts and to reflect the needs and dynamics of different and evolving DM contexts |
| Factors that compete with evidence | Select stakeholders who understand how priorities are set and conflicts are resolved in DM; engage stakeholders with varying conceptions of evidence and its legitimacy in DM processes; explore the nature of the evidence that may be contested and how competing priorities can be resolved; identify and manage conflicts of interest between researchers and stakeholders | Engage stakeholders in exploring how competing priorities might constrain knowledge translation activities and how the research design and execution might be adapted to be more useful and compelling in informing DM; engage stakeholders in ensuring that knowledge translation activities are meaningful, accessible, tractable, and practical for decision-makers when faced with competing priorities |
We have outlined key aspects of decision-making in HPE and the ways in which connections between evidence and its impact can be developed. We have also been clear that these are not causal conditions; rather, they make it more likely that evidence will translate into practice and influence decision-making.
We also note that there are practical and conceptual limits to the extent to which researchers can engage stakeholders, both in variety and in scale. What a proportionate level of engagement and alignment will look like will depend on the nature of the evidence generated by the research, the kinds of impacts being sought, the kinds of decision-making contexts involved, and the resources (e.g., time) available to all concerned. We also acknowledge that the additional effort and expertise required to map evidence to its decision-making contexts and dynamics means that further research into this topic is needed, alongside training for researchers in implementation science techniques.
Contributing to organizational change may be a relatively new concept for researchers, but it is, arguably, what leaders in HPE must frequently do. We are therefore, in part, advocating for more substantial and deliberate scholar-leadership; the leadership and organizational literature may help in this regard, whether it is the work of change scholars such as John Kotter [34] or those who directly explore decision-making [35].
We also acknowledge that not all researchers want to effect change, or at least not to effect specific programmatic changes. There is, after all, a tension in our field between communications aimed at other researchers and those aimed at influencing practice; even in an applied field, scholarly communications can reasonably vary and target other researchers [36]. Either way, the argument for understanding and targeting an audience still applies, especially with regard to decision-making. While the need for researchers to influence decision-making is not new [14], we have used an implementation science lens to argue for ways in which the research-practice gap can be closed, and we have made explicit the differences in the levels and kinds of decisions that are made in HPE.
Finally, we have presented a thesis that, while drawn from direct and indirect experiences and knowledge of the field, has not been rigorously tested in practice. We fully acknowledge that more research is needed to explore how our manifesto itself translates into practice.
By providing evidence on how to make decisions at different levels and with different actors, and by considering the consequences of different decisions and decision-making processes, we may find ourselves faced with a whole new science: a science of HPE decision-making. Our hope, then, is to generate a discourse on implementation science, one that considers actors, levels, culture, and compromise. In the absence of such a discourse and a well-thought-out research agenda, our attempts at moving the science of decision-making forward will be fragmented at best. Scholars [5, 20] have planted the seeds for future empirical work and discussion of implementation science and evidence-informed HPE. We invite others to join us in making this manifesto a reality.