Original Articles and Reviews

Academic Course Evaluation

Theoretical and Empirical Distinctions Between Self-Rated Gain in Competences and Satisfaction with Teaching Behavior

Published Online:https://doi.org/10.1027/1016-9040.14.4.297

This article contributes to the conceptual and empirical distinction between (the assessment of) appraisals of teaching behavior and (the assessment of) self-reported competence acquisition within academic course evaluation. The Bologna Process, the current higher-education reform in Europe, emphasizes education aimed at vocationally oriented competences and demands the certification of acquired competences. Currently available evaluation questionnaires measure students’ satisfaction with a lecturer’s behavior, whereas the “Evaluation in Higher Education: Self-Assessed Competences” (HEsaCom) instrument measures students’ personal benefit in terms of competences. In a sample of 1,403 German students, we administered a scale of satisfaction with teaching behavior together with the German version of the HEsaCom. Using confirmatory factor analysis, the estimated correlations between the various scales of self-rated competences and teaching behavior appraisals were moderate to strong, yet the constructs were shown to be empirically distinct. We conclude that self-rated gains in competences are distinct from satisfaction with course and instructor. In line with the higher-education reform, self-reported gains in competences are an important aspect of academic course evaluation, which should be taken into account in the future and may help to restructure the view of “quality of higher education.” The English version of the HEsaCom is presented in the Appendix.
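The distinctness claim rests on the idea that two scales can correlate strongly yet still measure different constructs once measurement error is taken into account. A minimal sketch of that logic, outside the full CFA framework the article uses, is Spearman's classic correction for attenuation: if the error-corrected correlation stays clearly below 1.0, the constructs are not redundant. All numbers below are hypothetical illustrations, not values from the study.

```python
import math

def disattenuated_r(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for measurement error
    (Spearman's correction for attenuation)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical values: observed correlation between a teaching-satisfaction
# scale and a self-rated competence scale, with the scales' reliabilities.
r_true = disattenuated_r(0.55, 0.85, 0.80)

# A corrected correlation well below 1.0 suggests the two constructs
# are empirically distinct rather than interchangeable.
print(round(r_true, 3))  # → 0.667
```

In a full SEM treatment (cf. the van der Sluis, Dolan, & Stoel reference on testing perfect correlations), the analogous step is comparing a two-factor model against one that fixes the latent correlation at 1.0.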
