Original Article

Positive First or Negative First?

Effects of the Order of Answering Categories on Response Behavior

Published Online: https://doi.org/10.1027/1614-2241/a000013

To examine whether starting a response scale with the positive or the negative categories affects response behavior, a split-ballot design was used with reversed forms of an 8-point scale assessing the subjective importance of job characteristics. Response behavior varied with the scale format employed: responses were more positive on the scale starting with the category “very important” (split 2), whereas the scale starting with the category “not at all important” (split 1) did not elicit more negative responses but rather less positive ones. These differences in response behavior, however, did not systematically reflect the direction of the respective scales. Starting from the differences between the two split versions, the factorial structure of indicators assessing two dimensions of job motivation was tested for each scale type separately and then for both scale types simultaneously. Finally, models placing increasingly severe equality constraints on both scale types were tested. The paper concludes with a discussion of the results and desiderata for further research.
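The final step described above, testing models with increasingly severe equality constraints across the two scale versions, corresponds to a multi-group confirmatory factor model. The following is a minimal sketch of that standard formulation, assuming two correlated job-motivation factors measured by continuous 8-point items; the notation is illustrative and not taken from the article. For item $j$ answered by respondent $i$ in split $g \in \{1, 2\}$,

$$ x_{ij}^{(g)} = \tau_j^{(g)} + \lambda_j^{(g)}\, \xi_{i,k(j)}^{(g)} + \varepsilon_{ij}^{(g)}, \qquad \varepsilon_{ij}^{(g)} \sim N\bigl(0, \theta_j^{(g)}\bigr), $$

where $k(j)$ assigns item $j$ to one of the two motivation factors. The increasingly severe constraints then form the usual hierarchy: a configural model (same loading pattern in both splits, all parameters free), a metric model ($\lambda_j^{(1)} = \lambda_j^{(2)}$ for all $j$), a scalar model (additionally $\tau_j^{(1)} = \tau_j^{(2)}$), and a strict model (additionally $\theta_j^{(1)} = \theta_j^{(2)}$). Equivalence of response behavior across the two category orders would require the more constrained models to fit no worse than the configural model.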
