Published in: Perspectives on Medical Education 5/2020

Open Access 01-10-2020 | Commentary

Is the proof in the PUDding? Reflections on previously undocumented data (PUD) in clinical competency committees

Authors: Daniel J. Schumacher, Benjamin Kinnear


In the current issue of the journal, Tam and colleagues explore the use of previously undocumented data, or PUD, about resident performance in making assessment decisions at the level of the clinical competency committee (CCC) at a relatively small postgraduate subspecialty program [1]. Tam and colleagues define previously undocumented data as any information contributing to CCC discussions that was not in documentation brought to the meeting. They provide four categories of this data: summary impressions, contextualizing factors, personal anecdotes and hearsay. While others have described use of such data in CCCs [2, 3], Tam et al. elaborate on reasons for using this data during CCC meetings and methods for managing it during discussions. The authors suggest that given current limitations of most programs of assessment, there are likely benefits of using previously undocumented data in CCCs to make decisions, and they advocate for this as an acceptable practice. They argue that this practice can help fill gaps in assessment data that are often lacking in quantity, quality or clarity [1]. We agree wholeheartedly that suboptimal assessment data is a major barrier to CCCs making optimal and defensible decisions [4, 5] and that a better understanding of previously undocumented data can help CCCs manage it during meetings. However, Tam and colleagues also acknowledge the potential limitations of their findings. Building on this, we believe four issues warrant further exploration: 1) use of previously undocumented data as a symptom of suboptimal programmatic assessment that perhaps should not be used to justify its routine use, 2) the role of program size in the study’s findings, 3) the potential introduction of bias created when using previously undocumented data, and 4) the likely range of trainee acceptance regarding previously undocumented data use.
Tam et al. note that multiple barriers exist to capturing previously undocumented data in formal documentation, such as documentation of “idiosyncratic experiences,” limited time, and challenges in capturing complex constructs (e.g. professionalism) or contextual factors [1]. Rather than working around these barriers by using more of this type of data, we believe these shortcomings should provide a call to improve programmatic assessment. Filling these gaps should include emphasizing the power of snapshot assessments that document rich, subjective, idiosyncratic experiences [6, 7]. Systems should also be designed to provide time for meaningful, timely documentation of observations rather than making valuable assessments ferment in fallible memories before being poured out in CCC discussions. In short, we believe that while previously undocumented data will always exist, and likely provides value in CCC decision-making, it should not be used as a work-around for suboptimal programmatic assessment systems. Rather, it should drive improvement efforts.
The study presented by Tam et al. focuses on a program of 6–8 total learners, with a CCC comprising seven faculty members who have “typically … had the benefit of individual supervisory experience with each trainee” [1]. In this setting, previously undocumented data may comprise observational data that simply was not documented, and hence is conceptually similar to usual programmatic assessment data. While this supports the use of such data for this program, this may not be transferable to larger programs or programs in which CCC members have not worked directly with the learners being reviewed. Without direct observation of learner performance, previously undocumented data may be skewed toward hearsay rather than discussions of contextualizing factors, personal anecdotes (based on direct observation), and summary impressions (the four types of previously undocumented data Tam et al. describe). Thus, the conclusions that Tam et al. draw may be more appropriate for smaller programs, where trainees and CCC members know each other well and work together regularly, than for larger programs.
While all observational data carry some risk of bias [8], use of previously undocumented data in a small group setting may increase this risk. Several cognitive biases can influence CCC decisions [9]. Use of previously undocumented data in these conversations may introduce or worsen availability bias, reliance on gist, selection bias, recency bias, recall bias, and visceral bias. Dickey et al. provide a caution in this regard, illustrating a faculty member “disregarding six months’ worth of data in favor of one recent patient interaction” [9]. Allowing a small group such as a CCC to interject previously undocumented data also opens the door to other biases such as those based on race, gender or age. This brings the need for diverse CCC membership and robust faculty development on implicit bias into focus. With these considerations in mind, integrating this type of data requires careful consideration and attention toward whether this practice worsens biases potentially at play. Here again, program size may have an influence: in larger programs, biased previously undocumented data from an individual CCC member could be either amplified or left unchallenged by other CCC members not familiar with the trainee.
Finally, the authors posit that trainees are likely to be accepting of previously undocumented data when receiving this information [1]. Here again, we wonder if program and faculty size may play an important role. If the trainees know all the members of the CCC well, have trusting relationships with most or all of them, and believe that the CCC members can characterize their performance accurately, we agree that presenting this type of data to trainees may not be problematic. However, does this hold true in larger programs, where trainees may not know who sits on the CCC and may not have worked with multiple members previously? In this situation, trusting relationships built through personal experiences may be the exception rather than the rule, even for senior-level trainees. Would previously undocumented data be trusted by trainees, or is there a risk that this data could be seen as hearsay, anecdotal one-offs, and unfair [10]? Further studies on learner perceptions of this type of data are needed.
Previous studies have considered the role previously undocumented data serves in CCC discussions, emphasizing this as a traditional practice that warrants continued study and even noting, similar to Tam et al., that such data may be helpful in filling gaps in documented assessment data [2, 3]. Tam and colleagues advance our understanding in this area, making an important contribution to the literature. However, we believe their study may also emphasize the importance of considering the transferability of research findings to other settings, and we wonder if their findings may represent smaller training programs better than larger ones. If true, this has important implications for how previously undocumented data is used by CCCs, and how it is presented to trainees following CCC deliberations, depending on training program size. Perhaps most importantly, viewing Tam et al.’s findings through a lens of continual quality improvement provides an opportunity to use previously undocumented data as a driver to advance programmatic assessment.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
2. Schumacher DJ, Michelson C, Poynter S, et al. Thresholds and interpretations: how clinical competency committees identify pediatric residents with performance concerns. Med Teach. 2018;40:70–9.
3. Hauer KE, Chesluk B, Iobst W, et al. Reviewing residents’ competence: a qualitative study of the role of clinical competency committees in performance assessment. Acad Med. 2015;90:1084–92.
4. Pack R, Lingard L, Watling CJ, Chahine S, Cristancho SM. Some assembly required: tracing the interpretative work of Clinical Competency Committees. Med Educ. 2019;53:723–34.
5. Ekpenyong A, Baker E, Harris I, et al. How do clinical competency committees use different sources of data to assess residents’ performance on the internal medicine milestones? A mixed methods pilot study. Med Teach. 2017;39:1074–83.
6. Hodges B. Assessment in the post-psychometric era: learning to love the subjective and collective. Med Teach. 2013;35:564–8.
7. Holmboe E, Durning SJ, Hawkins RE. Practical guide to the evaluation of clinical competence. Amsterdam: Elsevier; 2018.
8. Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. Opening the black box of clinical skills assessment via observation: a conceptual model. Med Educ. 2011;45:1048–60.
9. Dickey CC, Thomas C, Feroze U, Nakshabandi F, Cannon B. Cognitive demands and bias: challenges facing clinical competency committees. J Grad Med Educ. 2017;9:162–4.
Metadata
Title
Is the proof in the PUDding? Reflections on previously undocumented data (PUD) in clinical competency committees
Authors
Daniel J. Schumacher
Benjamin Kinnear
Publication date
01-10-2020
Publisher
Bohn Stafleu van Loghum
Published in
Perspectives on Medical Education / Issue 5/2020
Print ISSN: 2212-2761
Electronic ISSN: 2212-277X
DOI
https://doi.org/10.1007/s40037-020-00621-0
