Context
This study focuses on a longitudinal learning activity aimed at supporting the development of clinical reasoning among medical students. The activity was designed and implemented in a new four-year competency-based undergraduate curriculum at the Université de Sherbrooke, Québec, Canada.
When the project reported in this paper was designed and launched, the new curriculum had been approved and the planning committee was in the process of constructing the teaching and learning activities. The first author (MC), a faculty member with expertise in clinical reasoning, joined the curriculum planning committee to consider how a newly developed EEI could be integrated into the new curriculum to support students’ development of clinical reasoning skills.
Conceptual framework for the implementation
The Knowledge-to-Action (KtA) framework is a process framework designed to support the uptake of research-based knowledge into practice [11]. It consists of several steps that can guide educators in the implementation of an EEI. KtA comprises two components: knowledge creation and knowledge application (the action cycle). The seven steps of the action cycle are: 1) identify the know-do gap (the gap between research and practice) and review/select relevant research-based knowledge; 2) adapt this knowledge to the local context; 3) assess barriers and facilitators to knowledge use; 4) select, tailor, and implement the intervention; 5) monitor knowledge use; 6) evaluate outcomes; and 7) sustain knowledge use. Within this framework and its steps, the term "knowledge" refers to research knowledge adapted to the context, which, in our case, was the EEI.
The action cycle is dynamic and iterative. For instance, steps 3 and 4 may be repeated until the intervention is sufficiently customized to contextual specificities and users' needs. Furthermore, the boundaries between the creation and application of knowledge are fluid: as new knowledge is created, it can inform the action cycle, and as new knowledge is implemented, teams can collect data on the implementation process, which can contribute to further refining existing knowledge or creating new knowledge. In this way, knowledge creation and knowledge application interact with and inform each other.
EEI development and KtA process
We now present how the EEI was developed following the first five steps of the KtA framework. Because we chose to assess barriers and facilitators iteratively throughout the implementation process, we describe Step 4 before Step 3 below. The ultimate goal of the implementation process was to design, implement, and assess the effectiveness of a longitudinal educational activity that supports the development of students' clinical reasoning skills and that builds on and aligns with other planned teaching/learning activities.
This study was approved by our institution's Education and Social Sciences Research Ethics Board (Comité d'éthique de la recherche—Éducation et sciences sociales; protocol number: 2017–1488). All participants consented to participate.
The EEI that we implemented combined SE and SR in a longitudinal activity. A full description of the SE-SR activity has been published elsewhere [21].
We also designed the activity to align with the other characteristics of the new curriculum, which is structured around professional clinical situations of increasing complexity; successive blocks of small-group learning sessions through which students acquire biomedical and clinical knowledge, history-taking and physical examination skills, and problem-management knowledge relevant to those clinical situations; and recurrent integration weeks that give students the opportunity to deepen and apply their knowledge [21].
Qualitative data: student and stakeholder discussions
All first-year students (n = 206) were invited to participate in focus group discussions at two points in each year: midyear and year-end. The protocol for student focus groups sought overall impressions of the activity; the barriers and facilitators to its implementation; whether or not (if yes, how) students changed the way they did the activity from one time to the next; whether or not (if yes, how) the strategies of the learning activity had transferred to other contexts; whether or not (if yes, how) the activity could be improved; and whether or not (if yes, how) the activity fostered the development of clinical reasoning. Five students participated in the first focus group (i.e., mid Year 1). Twenty-four students consented for the second (i.e., end of Year 1), so we convened three focus groups of eight participants each at this time point. In Year 2, we recruited 15 participants, divided into two focus groups, for mid Year 2, and 11 participants, in two focus groups, at the end of Year 2.
Stakeholders (n = 15), defined as individuals who played a role in the conception and implementation of the SE-SR activity, also participated in focus groups. For the stakeholders, we sought impressions of the activity; barriers and facilitators to its implementation; whether or not (if yes, how) the activity could be improved; and whether or not (if yes, how) the activity fostered the development of clinical reasoning in learners. Because of scheduling difficulties, the first (i.e., mid Year 1) stakeholder focus group was transformed into three individual interviews. We recruited five stakeholders for the second focus group (i.e., end of Year 1), and four for the third focus group (i.e., mid Year 2). For the last discussion (i.e., end of Year 2), again because of scheduling difficulties, we conducted four individual interviews as well as a joint interview with another two stakeholders.
All focus groups and individual interviews were facilitated by an experienced research assistant who was not involved in the program, and were audio-recorded, transcribed, and anonymized. We engaged in thematic analysis [23] of the data to identify and describe barriers and facilitators. One team member (LB) conducted the initial coding, which involved minimal interpretation or abstraction of the data; this analysis aimed only to bring participant comments with similar content together into codes. These codes were then reviewed by a second team member (MC), and discussion between LB and MC led to consensus on the coding structure. A third member of the research team (AT) reviewed the codes, suggested elaborations and refinements, and extended several code descriptions. A subsequent meeting with LB, MC, and AT produced a final coding structure, which was applied to the entire dataset. The final coding structure was presented to the team for discussion and refinement of the themes (see Codebook in the Electronic Supplementary Material).
To examine the success of the implementation of this intervention, we focused on six of Proctor et al.'s [24] implementation outcomes that, while designed for clinical settings, are equally relevant to our educational context:
- Fidelity: the alignment between the intervention's actual implementation and its original intention;
- Feasibility: the extent to which the intervention can be successfully used in the program;
- Appropriateness: the perceived fit of the intervention for the program;
- Acceptability: the perception of stakeholders that the intervention is satisfactory;
- Adoption: the intention by the organization and the providers to employ the intervention;
- Penetration: the integration of the intervention into the program.
Quantitative and qualitative data were integrated and aligned with these outcome measures. Table 1 lists the data used as evidence for each of the six outcomes.
Table 1
Outcomes and data alignment

Outcome | Supporting data |
Fidelity | Students' mean total time spent doing the activity |
Feasibility | Students' completion rate for each case; students' verbatim transcripts; stakeholders' verbatim transcripts |
Appropriateness | Students' verbatim transcripts; stakeholders' verbatim transcripts |
Acceptability | Students' verbatim transcripts; stakeholders' verbatim transcripts |
Adoption | Stakeholders' verbatim transcripts |
Penetration | Students' verbatim transcripts; stakeholders' verbatim transcripts |