INTRODUCTION

Clinical performance assessment should occur in the context in which clinical care occurs.1 Workplace-based assessments typically address multiple competency domains and capture learners’ ability to synthesize their knowledge and skills to conduct work.2 Educators have embraced competency-based medical education (CBME) as a strategy to encourage individual developmental paths toward competence. However, meaningful implementation of CBME in the workplace has been challenging. Detailed competencies and milestones raise concerns that resulting checklists can fail to capture how learners actually perform in clinical settings. The workplace structure and pace can further impede performance assessment.3,4

Entrustable professional activities (EPAs) are a strategy to operationalize competencies and milestones in the workplace and to focus supervisors and learners on key activities to be assessed.5,6 Graduate medical educators have started to implement EPAs as a framework for assessment across specialties.7-10 Undergraduate medical educators are beginning to consider how trust can inform student assessment, particularly in settings that afford longitudinal contact between students and supervisors and meaningful student roles.11,12 Nationally, the Association of American Medical Colleges (AAMC) has proposed EPAs for graduating students to ensure readiness for residency.13 However, to our knowledge, a process for developing student-level EPAs that address institutional program objectives, student competencies, and larger mandates for CBME has not been articulated. The purpose of this manuscript is (1) to describe how we used the vision for a new curriculum to develop EPAs, and (2) to outline a process that other institutions can use to establish content evidence14 for EPAs that encompass their program objectives, competencies and milestones, and overall assessment plan.

SETTING AND PARTICIPANTS

The University of California, San Francisco (UCSF) is an urban public medical school. The curriculum includes two years of foundational science coursework and some clinical preceptorships, a third core clinical year, and a fourth year of clinical electives and options for concentration. The Institutional Review Board approved this project.

PROGRAM DESCRIPTION

We followed a stepwise process based on the Standards for Educational and Psychological Testing14 to establish the content validity of our EPAs. The Standards define sources of validity evidence that can confirm how well an assessment measures what it was designed to measure. Below, we outline the steps we employed to establish content evidence for our EPAs, describing the purpose of each step in the continuum of curricular and assessment change and the result of that step. These steps can be applied in other settings to establish content evidence for new and existing EPAs at all educational levels.

Step 1: Define a Curricular Vision

We sought to frame our assessment process within our school’s new curricular vision. At UCSF, this vision was defined in the context of the health care system as a whole and named the UCSF Bridges Curriculum.15 The School’s leaders proposed that 21st century physicians must actively improve the health of communities by ensuring healthcare quality, access, and innovation. To meet the needs of a diverse population in an era of increasingly complex acute and chronic disease, the physician role must be redesigned to embrace interprofessional and interdisciplinary teamwork in advancing science and care delivery; to recognize the central role of complex systems and inquiry in medicine and biomedical science; and to leverage biologic, clinical, and outcomes data to enhance patients’ health. A central curricular feature is authentic workplace learning experiences in patient health and systems improvement.

With this in mind, the School convened a committee to envision the ideal physician graduate. This Vision committee comprised 40 faculty and students known as visionary thinkers and experts in their fields from university hospitals and affiliated community-based practices. The group was charged with characterizing the extent and causes of persistent gaps in quality, safety, equity, evidence, and patient-centeredness in today’s healthcare systems. They used this information to define the roles and competencies of physicians ideally suited to contribute to new models of healthcare and biomedical discovery.

Step 2: Define EPAs Based on the Curricular Vision

Next, we defined the constructs we aimed to assess with our EPAs. A Steering committee composed of deans and teaching faculty with educational leadership roles, all familiar with EPAs, listed over 20 activities that characterize the previously defined roles and competencies of 21st century physicians. The Steering committee then described the essential functions necessary to perform each activity successfully. Because the curriculum vision focused on physicians in practice, so did the EPAs. We were mindful that students would first need to achieve foundational knowledge and skills, and would perform new EPAs at a lower level of independence than practicing physicians, according to Ten Cate’s ranking of entrustment.16

To authenticate this list of activities and translate it into candidate EPAs, we sought feedback from 3 groups. First, the Vision committee (see Step 1) commented on how well the list represented the envisioned physician activities and healthcare system needs. The initial list was too long and redundant, yet it excluded some essential physician behaviors. Feedback clarified that behaviors on the list fell into two categories: enduring physician skills (e.g., doctor-patient communication and clinical reasoning) and emerging physician skills (e.g., functioning effectively within complex systems and working collaboratively in teams). Second, a Curriculum Re-design committee focused on foundational science and early clinical skills provided comments, emphasizing the importance of inquiry in advancing knowledge. Finally, at multiple points during the design process, we sought feedback from local assessment and EPA experts about whether the EPA wording would capture actual physician work.

Based on this feedback, the Steering committee defined 6 essential EPAs (Table 1) that they felt captured the range of physician work that operationalizes these roles. Each EPA is characterized by both enduring and emerging physician skills. The educational community approved the final list at a Vision committee meeting and a larger educational retreat in Spring 2014.

Table 1 Six EPAs for Medical Students and Selected Example Essential Skills for Performing Each EPA

Step 3: Develop EPAs and Assessment Strategies

The School’s leadership charged a Student Assessment Committee to develop an assessment blueprint14 that defined each EPA, skills essential to successful completion, and student assessment strategies. This group also recommended assessment tools based on existing assessments as well as new strategies described in the assessment literature.16 They mapped the EPAs to the AAMC EPAs to ensure that key national recommendations were represented.

Step 4: Define Competencies and Milestones

Starting with the School’s previously defined competencies and milestones (http://meded.ucsf.edu/ume/md-competencies), a subcommittee of the Assessment Committee led a process to produce “graduation milestones” that reflect the aims of the Bridges curriculum. The School’s competency directors, the Assessment Committee, directors of the foundational science and early clinical skills committees, and other content experts critically reviewed drafts.

To ensure that no critical content was missing, we directly compared the draft graduation milestones to the AAMC general physician competencies.17 This comparison led us to add a new competency domain, Interprofessional Collaboration, to the School’s existing 6, and prompted the creation of several new milestones related to personal and professional development. Content experts also compared the draft graduation milestones to intern milestones in 4 large specialties: family medicine, internal medicine, pediatrics, and psychiatry. This crosscheck identified an area not represented in the draft milestones: requesting and providing consultative care. Defining the milestones required that we envision an expected level of competence for a graduate. We made this determination with input from clinical education leaders in the School, such as clerkship directors, and through review by residency program directors to ensure alignment between expectations at graduation and early in internship. Further edits resulted in 51 draft graduation milestones.

Step 5: Map Milestones to EPAs

We conducted two activities to map the graduation milestones to EPAs. First, 6 educators representing the foundational, clinical, and educational sciences participated in a Q-sort activity; Q-sorting is a method for building consensus on the prioritization of milestones for each EPA.18 The educators worked in 2 teams to rate each milestone’s importance to the assessment of each EPA (least [1] to most [5]). The 2 teams then compared results, noting overlapping or disparate milestones for each EPA. The group conducted this process in person for 4 EPAs and, owing to time constraints, individually by email for 2 EPAs. Three of the 51 graduation milestones did not map to any EPA, and 4 mapped to all EPAs. An additional round of editing led to an updated set of 40 graduation milestones.
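
As a rough illustration of this step, the sketch below aggregates two teams’ Q-sort importance ratings for a few candidate milestones against a single EPA and flags milestones on which the teams diverge. The milestone identifiers, the ratings, and the 1.5-point disparity threshold are hypothetical and are not drawn from our actual process.

```python
# Illustrative only: aggregate two teams' Q-sort importance ratings
# (1 = least important, 5 = most important) for one EPA and flag
# milestones on which the teams' mean ratings diverge.
from statistics import mean

# Hypothetical ratings keyed by milestone identifier.
team_a = {"PC-1": [5, 4, 4], "ICS-3": [2, 3, 2], "SBP-2": [5, 5, 4]}
team_b = {"PC-1": [4, 5, 5], "ICS-3": [4, 4, 5], "SBP-2": [5, 4, 5]}

for milestone in sorted(team_a):
    a, b = mean(team_a[milestone]), mean(team_b[milestone])
    status = "disparate" if abs(a - b) >= 1.5 else "concordant"
    print(f"{milestone}: team A {a:.1f}, team B {b:.1f} -> {status}")
```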

To provide additional evidence for content validity, we used these results in an online Delphi survey of local faculty experts in clinical systems, medical school clinical skills, and assessment. Twenty-four faculty were asked to rate the milestones that scored 2 or higher in the Q-sort for each EPA; based on the Q-sort results, there were 11 to 15 candidate milestones per EPA. The survey queried the importance of each candidate milestone for the corresponding EPA (least [1] to most [5] important). Of the 24 invited experts, 19 responded to the Round 1 survey, and 14 of the 19 responded to both rounds (74 % return rate). We calculated content validity indices (CVIs)19 (the percentage of respondents rating an item as important, defined as 4 or 5 on our scale) and mean importance ratings for the candidate milestones for each EPA. A CVI of 78 % or greater signifies evidence of consensus and validity.19
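
To make the CVI calculation concrete, the minimal sketch below computes the index and the mean importance rating for a single candidate milestone; the ratings shown are invented for illustration and do not reproduce our survey data.

```python
# Minimal sketch of the content validity index (CVI) described above:
# the percentage of respondents rating a candidate milestone as
# important (4 or 5 on the 1-5 scale). Ratings below are hypothetical.
from statistics import mean

ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 4, 5, 3, 4, 5]  # 14 respondents, one milestone

cvi = 100 * sum(r >= 4 for r in ratings) / len(ratings)
print(f"CVI = {cvi:.1f}% (consensus threshold: 78%)")
print(f"Mean importance = {mean(ratings):.1f}")
# A milestone with a CVI of 78% or greater would be retained as
# content-valid for the corresponding EPA.
```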

PROGRAM EVALUATION

Table 2 (available online) provides CVIs and mean importance ratings for representative candidate milestones for each EPA. For EPA 1, 11 of 13 milestones had CVIs higher than 78 %, the standard criterion for consensus and content validity. For EPA 2, one milestone did not meet the content validity standard, with a mean rating of 3.6 and a low CVI (42.8 %). For EPA 3 (15 milestones) and EPA 4 (11 milestones), all candidate milestones met the content validity criteria for consensus. EPAs 5 and 6 had the most milestones that lacked consensus among faculty experts. For EPA 5, 2 milestones fell below the 78 % CVI standard. EPA 6 had the most milestones (n = 4) that did not reach the 78 % CVI score; only 66 % of its candidate milestones reached consensus.

DISCUSSION

Our experience demonstrates a stepwise, replicable procedure to provide evidence of EPA content validity in undergraduate medical education (UME). We used a curricular vision to guide the development and refinement of candidate EPAs and mapped graduation milestones to those EPAs.

The appropriate size and scope of individual EPAs is actively debated in the literature. Our focus on the physician that our institution aims to produce led to EPAs similarly focused on practicing physicians. We mapped our school’s EPAs to the AAMC’s core EPAs for entering residency and found substantial overlap in content.13 Some of our school’s EPAs encompass multiple AAMC EPAs, which themselves have prompted questions about how broad or focused an individual EPA should be.20 To be useful for assessment, an EPA must be observable, replicable, and understandable to learners and supervisors.6,21 Therefore, a necessary next step in our EPA development is to specify further details for each EPA, including information about the patient, context, and learner behaviors.22 Another approach is to define smaller units of activity that nest within larger EPAs, as has been proposed with Observable Professional Activities.23

Our approach to EPA development and milestone mapping has limitations. This work occurred at a single institution. Educators who completed the consensus-building surveys may not represent the perspectives of educators elsewhere and might have responded differently after greater experience with the EPAs. Strengths of our approach include a consistent emphasis on operationalizing a curricular vision, broad-based faculty involvement to promote buy-in, alignment with national resources, an acceptable response rate, and a stepwise approach that we believe is feasible and replicable. Our application of a stringent CVI cut-point further supports our findings. Additional work is needed to provide other sources of validity evidence.14

We plan to pilot individual EPAs with two student groups before larger implementation. Students between their first and second year of medical school conducting curriculum development and quality improvement projects will provide insight into EPA 5. Third-year students in a year-long integrated clerkship and their longitudinal preceptors will participate in piloting EPAs 1 and 2.

In summary, guided by a curricular vision, we defined EPAs in UME and used a structured process for mapping milestones to EPAs to develop evidence of content validity. This work will serve as the basis of student assessment at our institution and inform future work on the validity of our EPAs.