
01-04-2013 | Original Article | Issue 2/2013 Open Access

Perspectives on Medical Education 2/2013

Repeated evaluations of the quality of clinical teaching by residents

Cornelia R. M. G. Fluit, Remco Feskens, Sanneke Bolhuis, Richard Grol, Michel Wensing, Roland Laan


Many studies report on the validation of instruments for facilitating feedback to clinical supervisors, but the evidence is mixed on whether such evaluations lead to more effective teaching and higher ratings. We assessed changes in resident ratings after an evaluation and feedback session with their supervisors. Supervisors of three medical specialities were evaluated using a validated instrument (EFFECT). Mean overall scores (MOS) and mean scale scores were calculated and compared using paired t-tests. Twenty-four supervisors from three departments were evaluated in two consecutive years. The MOS increased from 4.36 to 4.49. Two scales showed an increase >0.2: ‘teaching methodology’ (4.34–4.55) and ‘assessment’ (4.11–4.39). Supervisors with an MOS <4.0 in year 1 (n = 5) all demonstrated a strong increase in the MOS (mean overall increase 0.50, range 0.34–0.64). Four of the supervisors with an MOS between 4.0 and 4.5 (n = 6) demonstrated an increase >0.2 in their MOS (mean overall increase 0.21, range −0.15 to 0.53). Of the supervisors with an MOS >4.5 (n = 13), one demonstrated an increase >0.2 in the MOS and two demonstrated a decrease >0.2 (mean overall increase −0.06, range −0.42 to 0.42). EFFECT-S was associated with a positive change in residents’ ratings of their supervisors, predominantly in supervisors with relatively low initial scores.
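The paired t-test used to compare the year 1 and year 2 mean overall scores can be sketched as follows. This is a minimal illustration of the statistic itself, not the authors' analysis code; the score values below are hypothetical and chosen only to resemble the reported scale (1–5 ratings).

```python
import math

def paired_t(before, after):
    """Paired t-statistic for two matched samples (e.g. per-supervisor
    mean overall scores in year 1 and year 2).

    Returns the t-statistic and degrees of freedom (n - 1)."""
    assert len(before) == len(after), "samples must be matched pairs"
    n = len(before)
    diffs = [a - b for a, b in zip(after, before)]
    mean_d = sum(diffs) / n
    # Sample variance of the paired differences (Bessel's correction)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var_d / n)
    return mean_d / se, n - 1

# Hypothetical mean overall scores for five supervisors, year 1 vs year 2
year1 = [3.80, 3.90, 3.70, 3.95, 3.85]
year2 = [4.30, 4.40, 4.20, 4.50, 4.35]
t, df = paired_t(year1, year2)
```

In practice the same statistic is available as `scipy.stats.ttest_rel`, which also returns a p-value; the hand-rolled version above just makes the arithmetic behind the comparison explicit.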