
Open Access 20-08-2021 | Eye-Opener

Excellence in medical training: developing talent—not sorting it

Authors: Gurpreet Dhaliwal, Karen E. Hauer

Published in: Perspectives on Medical Education | Issue 6/2021

Abstract

Many medical schools have reconsidered or eliminated clerkship grades and honor society memberships. National testing organizations announced plans to eliminate numerical scoring for the United States Medical Licensing Examination Step 1 in favor of pass/fail results. These changes have led some faculty to wonder: “How will we recognize and reward excellence?” Excellence in undergraduate medical education has long been defined by high grades, top test scores, honor society memberships, and publication records. However, this model of learner excellence is misaligned with how students learn or what society values. This accolade-driven view of excellence is perpetuated by assessments that are based on gestalt impressions influenced by similarity between evaluators and students, and assessments that are often restricted to a limited number of traditional skill domains. To achieve a new model of learner excellence that values the trainee’s achievement, growth, and responsiveness to feedback across multiple domains, we must envision a new model of teacher excellence. Such teachers would have a growth mindset toward assessing competencies and learning new competencies. Actualizing true learner excellence will require teachers to change from evaluators who conduct assessments of learning to coaches who do assessment for learning. Schools will also need to establish policies and structures that foster a culture that supports this change. In this new paradigm, a teacher’s core duty is to develop talent rather than sort it.
Supplementary Information

The online version of this article (https://doi.org/10.1007/s40037-021-00678-5) contains supplementary material, which is available to authorized users.
As medical schools have changed approaches to grading and awards (most notably, reconsidering or eliminating honors grades in clerkships [1, 2] and election to honor societies [3]), faculty have raised concerns: How will we recognize and reward excellence? And don’t we care about excellence? The decision to change to pass/fail reporting of United States Medical Licensing Examination Step 1 results has further accentuated this concern [4].
Excellence in undergraduate medical education has long been defined by high grades, top test scores, honor society memberships, and publication records. Accumulating more accolades than one's peers has provided a signaling and sorting mechanism for schools and residency programs. This view of excellence is familiar to generations of physicians but is out of sync with the educational experience students deserve and the care that patients need. We propose a revised conceptualization of learner excellence that requires a new model of teacher excellence driven by instructors whose skill is developing talent, not sorting it.

“I know it when I see it”

Faculty convey students' performance through conversations, evaluations, and letters of recommendation. The most useful narratives describe directly observed skills, with examples that allow readers to recognize dimensions of competence. However, many communications are short on detail and instead feature vague statements of praise and summations such as "top 10% in my career" or "best ever," reflecting a gestalt approach to classifying excellence, or an "I know it when I see it" standard.
This pattern recognition approach to characterizing student work parallels pattern recognition in diagnosing illness. Preconditions to trustworthy pattern recognition include frequent exposure to the clinical situation, regular feedback on diagnostic decisions, and continual updates to knowledge about the disease (“illness script” in clinical reasoning parlance) [5]. When learner assessments are made without frequent direct observations of students, without feedback about students’ future performance, and with an outdated “script” of competencies, pattern recognition loses validity.
The traditional script frequently frames excellence along a single dimension (typically, cognitive or technical ability) instead of the multidimensional skills captured in modern competency frameworks. “I know it when I see it” also invites faculty to see what they want to see and invites bias along the way.

Biased by the familiar

Just as cognitive bias jeopardizes clinical decision-making [6], implicit biases can influence our judgements about learners and predispose teachers to favor some students over others (see Table S1 of the Electronic Supplementary Material). Decades of social psychology research have demonstrated a strong human tendency toward in-group bias, where we positively evaluate or favor our own group (people who resemble us) at the expense of the out-group [7]. Teachers are susceptible to being influenced by concordance (demographic or intellectual) with their learners [8]. We are more likely to see excellence in people who look like us, share our academic pedigree, or excel in areas that we valued during our formative years, which may have been technical proficiency over collaboration or knowledge recitation over skills in learning new content. Grading structures typically reflect these traditional priorities and values [9].
Students with a familiar profile can benefit from teachers' affinity through subtly upgraded evaluations [10]. Slightly more generous narrative evaluations or scores yield higher grade designations which then open doors to residency programs and medical specialties [11–15]. Students can feel pressure to indicate interest in the same field as their assessors in order to earn favorable evaluations or better learning opportunities [16]. When we are misaligned with students (different backgrounds, beliefs, or prioritized skills) or do not share the same race, ethnicity, or gender [17–19], this cascade works against them. A system focused on categorization that uses gestalt coupled with outdated and biased benchmarking preordains a designation of "excellent" to a few instead of developing excellence for all.

The educational excellence students and society need

Time spent assessing a student relative to other students (e.g., trying to identify the “best” students) is a poor use of teachers’ abilities. Modern teachers serve students more meaningfully by devoting their energy to fostering each learner’s broad skillset. To do this, teachers need to cultivate their knowledge and skills on topics they may not have formally learned in their training. They must also examine their own ability to interact with increasingly diverse student and patient populations.
For example, students are expected—and are expecting—to become skilled in health advocacy by contributing their expertise and influence to improve the health of different patient populations [20]. This competency includes recognizing health inequities, understanding the needs of communities, speaking on behalf of others when required, and supporting the mobilization of resources to effect change [21]. Teachers cannot rely on their intuition regarding appropriate levels of advocacy. Instead, they must fulfill their commitment to their students by learning what is meant by advocacy, understanding specific milestones that students must meet as they progress in this competency, and seeking opportunities for direct observation [22].
Though advocacy may be new to teachers, assessing a student’s advocacy skills has parallels to assessing a student’s other skills such as doing a lumbar puncture or leading a family meeting. A teacher cannot assess the latter example by saying “I know good communication when I see it.” Instead of using this pattern recognition approach, faculty members must commit to understanding the construct being measured and the specific milestones and subskills that students must achieve as they progress through training [23].
Fostering excellence in advocacy requires faculty to broaden their perspective to incorporate a skillset they may never have considered fundamental to being a physician [24]. This growth process may include practicing perspective taking and openness to patient (and student) life experiences that they have never contended with, such as taking multiple buses to an appointment, being denied access or resources based on personal identity, or having to decide between filling a prescription and feeding their family.
When coaching students in advocacy or any other competency, educators must commit to making assessments based on direct observation. Entrustable professional activities are pre-specified workplace tasks (e.g., performing an appendectomy) which allow teachers to observe students integrate multiple competencies in a relevant workplace activity [25]. Teachers who wish to advance their skills in promoting and assessing advocacy would need to prioritize observing a workplace activity such as their student collaborating with a social worker to arrange travel vouchers for a patient. These observations allow the supervisor to identify areas for targeted teaching and growth (e.g., “next time, check with the patient first regarding her preferred time of day for her appointments”). With each data point, the educator must become skilled at making an assessment for learning (to drive growth), not an assessment of learning (to classify students for an external scheme such as a grade, award, or residency) [26].
Most of us do not "know it when we see it" because we were not trained in an environment in which "it" matched the needs of society. Medical curricula now emphasize not only patient advocacy but also shared decision-making, interprofessional collaboration, social determinants of health, and high-value care. The COVID-19 pandemic highlights the need for teachers with adaptive expertise to train future providers who will be prepared to adapt to and learn about emerging health threats and respond using knowledge and skills that may not have existed during their training [27, 28]. The goal of medical education is to develop students who are excellent across these domains, and it will take a new faculty mindset to do that.

Shifting to a growth mindset

Fostering excellence instead of classifying it entails teachers adopting the same attitude we encourage in learners: shifting from a fixed mindset (“I know excellence in a student when I see it”) to a growth mindset (“I can learn new ways to assess and promote student skill development in unfamiliar domains”) [29]. Schools must undertake several steps to guide faculty into the coaching business and out of the classifying business [30, 31].
Policy changes such as removing honors grade designations and student rankings allow faculty to conduct assessments that are low stakes and formative rather than high stakes and summative [32]. Instead of focusing repeatedly on ill-fated attempts at rater training (getting everyone to evaluate consistently), faculty development should emphasize feedback training (getting everyone to consistently observe, record, and coach) [33]. Training can also engage faculty in examining their own longstanding assumptions and biases [34]. Introducing a new value (e.g., social justice) along with a new role (e.g., coaching) cannot be accomplished through a single training session. It requires frequent communication from leaders, multiple channels of dissemination (e.g., videos, emails, podcasts), and champions within the student body and faculty to effect change gradually and steadily while unequivocally and relentlessly signaling its direction and importance.
Selection of new clinical teachers should emphasize their commitment to directly observing learners' work and building their own skills to engage learners in feedback discussions [35]. Programs should seek and foster a teacher mindset that welcomes rather than dreads identification of students with weaknesses. Great teachers are not distinguished by their ability to make "top" learners reach even greater heights, but rather by their ability to bring the "not yet" learner onto a developmental trajectory toward competence. Organizational goals must also shift from upholding a reputation for recruiting and producing the "best" graduates toward a culture where improvement and a growth mindset are expected of all individuals and of the institution itself [36].

Competency-based assessment: promising but not a panacea

The framework of competency-based assessment—including specified milestones, developmental trajectories, and direct observation—can guide teachers in their professional evolution. However, the shift to competency-based assessment does not eliminate or solve many long-standing challenges in assessment programs.
The same rater biases outlined earlier that affect summative judgements of performance, including cognitive shortcuts and pattern recognition, can influence what evaluators see and infer in direct observations of learners, particularly those who differ from them. Therefore, teachers who shift from graders to coaches must still educate themselves about these cognitive tendencies and, whenever possible, seek countermeasures [37]. While these observations by individual faculty are still judgements [38], emerging literature suggests that the synthesis of multiple subjective assessments, grounded in direct observation of the learner and their work, paints an increasingly accurate picture of a trainee's competency in the workplace [39]. Schools can mitigate the risks of bias by establishing systems where many evaluators provide input based on detailed observations (not impressions) and by instituting group decision-making—such as a grading or competency committee—where members with diverse backgrounds develop and use a shared mental model of excellence to synthesize data to make a competency assessment [40–42].
Residency programs continue to report challenges with underprepared learners who graduate from medical school [43]. Competency-based assessment will not solve this problem unless the foundation of direct observation is tightly coupled with a plan for improvement and re-assessment. Teachers must commit to making high-quality observations of skills and to an additional step: coaching the student, ensuring that the next supervisor does so, or referring the student to the appropriate resources in the medical school. Teachers must be mindful of the potential to propagate bias based on limited time with a learner (e.g., only one day in clinic or the hospital) and must become skilled at formulating a learner handover for the next supervisor to help the student make progress along their longitudinal trajectory [44]. Schools must establish a centralized reporting system that ensures progress is being made. And for students whose growth is hampered by learner-supervisor discontinuity [45–47], schools must support faculty time and skill development for longitudinal clinical experiences that enable them to coach and mentor.
Teachers must also modify their approach to the traditionally “high achieving” or “high performing” student in a competency-based assessment system. Without a firm commitment to examine all competencies in a milestone-directed way, teachers may fall prey to the halo effect [48]. Once the learner is identified as excellent in one domain (e.g., knowledge as determined by a test score), a teacher may underappreciate or exaggerate the learner’s performance in other domains. These problematic generalizations can lead to other areas (e.g., advocacy or communication) being overlooked or overrated.
As teachers commit to growing their skills in observation and assessing multiple domains, schools must signal to students and faculty that competence across all domains is the foundation of excellence and that improved patient population health and well-being is the objective of these efforts. Teachers and schools must also start preparing themselves to make competency assessments and coaching plans based on data that are connected to patient outcomes. Utilizing measures of performance linked to quality of patient care—e.g., resident-sensitive quality measures [49]—can strengthen educators’ ability to define excellence in service to patients.

“Improvement” as part of the excellence code

Faculty have a societal obligation to ensure students achieve competence in relevant domains. However, once the threshold of competence is crossed, faculty attention should shift from the degree of accomplishment to the rate of improvement. This means not worrying about whether a student’s knowledge is “excellent” versus “outstanding,” and instead devoting energy to examining the method of improvement each student employs. Learners working to improve must be rigorous in their practice, reflection, and incorporation of feedback [50]. Students who exhibit limited interest in new challenges should warrant greater concern than students who seek clinical cases at the edge of their comfort zone. Excellence can be defined by the learner’s rate of growth, not just their current level of proficiency.
Integrating lifelong learning as a marker of excellence is at odds with current rhetoric where narratives describing “improvement” are code for bad performance [51]. In the new paradigm, assessment of the student’s improvement and commitment to personal growth is a must-have—and the absence of a mention of improvement would be alarming.

All patients need excellent physicians

In systems with abundant uncategorized data, the brain will always seek simplified abstractions to deal with complexity. Traditional assessment systems fulfill this role for advisors, award committees, and residency programs, and do so in a reductionistic manner based on what academic physicians—not society—value.
The job of medical school is not to sort students for residency, but to develop doctors to meet patients’ and society’s needs [52]. Residencies have the same goal and need not be in the business of sorting for fellowships and clinical practices. We will fall short of this goal as long as we condone the current system that defines excellence using metrics that value trainees who follow narrowly in their predecessors’ footsteps and triage students among residencies and specialties accordingly.
Without categorization by tests, grades, and adjectives, educators anticipate immense difficulty in selecting students for residency programs. This worry reflects the difficulty in selecting residents as we have always done, in which traditionally "excellent" students gain entry into "excellent" programs. There is no reason to believe that this sorting system has optimized our workforce to meet societal demands or that it could not be improved upon. Holistic review processes reflect the capacity of schools and residencies to assess excellence across multiple domains and select candidates whose areas of focus, capabilities, approaches to learning, and values match those of the program and society [53–55].
When we employ “I know it when I see it”, we endorse a static version of excellence that is outdated, inaccurate, and exclusionary. The excellence in learners that society needs is a product of teachers who continually grow in their ability to coach and assess across multiple domains. All patients need excellent physicians. It’s our job to develop them.

Conflict of interest

G. Dhaliwal and K.E. Hauer declare that they have no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
1. Hauer KE, Lucey CR. Core clerkship grading: the illusion of objectivity. Acad Med. 2019;94:469–72.
2. Konopasek L, Norcini J, Krupat E. Focusing on the formative: building an assessment system aimed at student growth and development. Acad Med. 2016;91:1492–7.
3. Lynch G, Holloway T, Muller D, Palermo AG. Suspending student selections to Alpha Omega Alpha honor medical society: how one school is navigating the intersection of equity and wellness. Acad Med. 2020;95:700–3.
5. Kahneman D, Klein G. Conditions for intuitive expertise: a failure to disagree. Am Psychol. 2009;64:515–26.
6. Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak. 2016;16:138.
7. Everett JAC, Faber NS, Crockett M. Preferences and beliefs in ingroup favoritism. Front Behav Neurosci. 2015;9:1–15.
9. Hernandez CA, Daroowalla F, LaRochelle JS, et al. Determining grades in the internal medicine clerkship: results of a national survey of clerkship directors. Acad Med. 2021;96:249–55.
10. Bates R. Liking and similarity as predictors of multi-source ratings. Pers Rev. 2002;31:540–52.
11. Teherani A, Hauer KE, Fernandez A, King TE Jr, Lucey C. How small differences in assessed clinical performance amplify to large differences in grades and awards: a cascade with serious consequences for students underrepresented in medicine. Acad Med. 2018;93:1286–92.
12. Low D, Pollack SW, Liao ZC, et al. Racial/ethnic disparities in clinical grading in medical school. Teach Learn Med. 2019;31:487–96.
13. Colson ER, Pérez M, Blaylock L, et al. Washington University School of Medicine in St. Louis case study: a process for understanding and addressing bias in clerkship grading. Acad Med. 2020;95(12S):S131–S5.
14. Teherani A, Hauer KE, Lucey C. Can change to clerkship assessment practices create a more equitable clerkship grading process? Acad Med. 2019;94:1262–3.
15. Boatright D, Ross D, O'Connor P, Moore E, Nunez-Smith M. Racial disparities in medical student membership in the Alpha Omega Alpha honor society. JAMA Intern Med. 2017;177:659–65.
16. Woolley DC, Moser SE, Davis NL, Bonaminio GA, Paolo AM. Treatment of medical students during clerkships based on their stated career interests. Teach Learn Med. 2003;15:156–62.
17. Low D, Pollack SW, Liao ZC, et al. Racial/ethnic disparities in clinical grading in medical school. Teach Learn Med. 2019;31:487–96.
18. Dayal A, O'Connor DM, Qadri U, Arora VM. Comparison of male vs female resident milestone evaluations by faculty during emergency medicine residency training. JAMA Intern Med. 2017;177:651–7.
19. Mueller AS, Jenkins TM, Osborne M, Dayal A, O'Connor DM, Arora VM. Gender differences in attending physicians' feedback to residents: a qualitative analysis. J Grad Med Educ. 2017;9:577–85.
20. Lai CJ, Jackson AV, Wheeler M, et al. A framework to promote equity in clinical clerkships. Clin Teach. 2020;17:298–304.
22. Hubinette M, Dobson S, Scott I, Sherbino J. Health advocacy. Med Teach. 2017;39:128–35.
23. Norcini J, Anderson B, Bollela V, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 conference. Med Teach. 2011;33:206–14.
24. Lupton KL, O'Sullivan PS. How medical educators can foster equity and inclusion in their teaching: a faculty development workshop series. Acad Med. 2020;95(12S):S71–S6.
26. Wiliam D. What is assessment for learning? Stud Educ Eval. 2011;37:3–14.
28. Lucey CR, Johnston SC. The transformational effects of COVID-19 on medical education. JAMA. 2020;324:1033–4.
29. Shapiro N, Dembitzer A. Faculty development and the growth mindset. Med Educ. 2019;53:958–60.
30. Sargeant J. Future research in feedback: how to use feedback and coaching conversations in a way that supports development of the individual as a self-directed learner and resilient professional. Acad Med. 2019;94(11S):S9–S10.
31. McDonald JA, Lai CJ, Lin MYC, O'Sullivan PS, Hauer KE. "There is a lot of change afoot": a qualitative study of faculty adaptation to elimination of tiered grades with increased emphasis on feedback in core clerkships. Acad Med. 2021;96:263–70.
32. van der Vleuten CP, Schuwirth LW, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34:205–14.
33. Gingerich A, Kogan J, Yeates P, Govaerts M, Holmboe E. Seeing the 'black box' differently: assessor cognition from three research perspectives. Med Educ. 2014;48:1055–68.
34. Gonzalez CM, Garba RJ, Liguori A, Marantz PR, McKee MD, Lypson ML. How to make or break implicit bias instruction: implications for curriculum development. Acad Med. 2018;93(11S):S74–S81.
35. Ramani S, Könings KD, Ginsburg S, van der Vleuten CP. Feedback redefined: principles and practice. J Gen Intern Med. 2019;34:744–9.
36. Osman NY, Hirsh DA. The organizational growth mindset: animating improvement and innovation in medical education. Med Educ. 2021;55:416–8.
37. Capers Q 4th. How clinicians and educators can mitigate implicit bias in patient care and candidate selection in medical education. ATS Sch. 2020;1:211–7.
38. Ten Cate O, Regehr G. The power of subjectivity in the assessment of medical trainees. Acad Med. 2019;94:333–7.
39. Hodges B. Assessment in the post-psychometric era: learning to love the subjective and collective. Med Teach. 2013;35:564–8.
40. Frank AK, O'Sullivan P, Mills LM, Muller-Juge V, Hauer KE. Clerkship grading committees: the impact of group decision-making for clerkship grading. J Gen Intern Med. 2019;34:669–76.
41. Colbert CY, Dannefer EF, French JC. Clinical competency committees and assessment: changing the conversation in graduate medical education. J Grad Med Educ. 2015;7:162–5.
42. Edgar L, Jones MD, Harsy B, Passiment M, Hauer KE. Better decision-making: shared mental models and the clinical competency committee. J Grad Med Educ. 2021;13(2s):51–8.
43. Lyss-Lerman P, Teherani A, Aagaard E, Loeser H, Cooke M, Harper GM. What training is needed in the fourth year of medical school? Views of residency program directors. Acad Med. 2009;84:823–9.
44. Humphrey-Murto S, LeBlanc A, Touchie C, et al. The influence of prior performance information on ratings of current performance and implications for learner handover: a scoping review. Acad Med. 2019;94:1050–7.
45. Bernabeo EC, Holtman MC, Ginsburg S, Rosenbaum JR, Holmboe ES. Lost in transition: the experience and impact of frequent changes in the inpatient learning environment. Acad Med. 2011;86:591–8.
46. Williams DA, Kogan JR, Hauer KE, Yamashita T, Aagaard EM. The impact of exposure to shift-based schedules on medical students. Med Educ Online. 2015;20:27434.
47. Kerr J, Walsh AE, Konkin J, Tannenbaum D, et al. Continuity: middle C—a very good place to start. Can Fam Physician. 2011;57:1355–6.
48. Sherbino J, Norman G. On rating angels: the halo effect and straight line scoring. J Grad Med Educ. 2017;9:721–3.
49. Schumacher DJ, Martini A, Sobolewski B, et al. Use of resident-sensitive quality measure data in entrustment decision making: a qualitative study of clinical competency committee members at one pediatric residency. Acad Med. 2020;95:1726–35.
50. Cutrer WB, Miller B, Pusic MV, et al. Fostering the development of master adaptive learners: a conceptual model to guide skill acquisition in medical education. Acad Med. 2017;92:70–5.
51. Saudek K, Treat R, Goldblatt M, Saudek D, Toth H, Weisgerber M. Pediatric, surgery, and internal medicine program director interpretations of letters of recommendation. Acad Med. 2019;94(11S):S64–S8.
52. Lucey CR. Medical education: part of the problem and part of the solution. JAMA Intern Med. 2013;173:1639–43.
53. Simone K, Ahmed RA, Konkin J, Campbell S, Hartling L, Oswald AE. What are the features of targeted or system-wide initiatives that affect diversity in health professions trainees? A BEME systematic review: BEME guide no. 50. Med Teach. 2018;40:762–80.
54. Aibana O, Swails JL, Flores RJ, Love L. Bridging the gap: holistic review to increase diversity in graduate medical education. Acad Med. 2019;94:1137–41.
Metadata
Title: Excellence in medical training: developing talent—not sorting it
Authors: Gurpreet Dhaliwal, Karen E. Hauer
Publication date: 20-08-2021
Publisher: Bohn Stafleu van Loghum
Published in: Perspectives on Medical Education / Issue 6/2021
Print ISSN: 2212-2761
Electronic ISSN: 2212-277X
DOI: https://doi.org/10.1007/s40037-021-00678-5
