Multiple indicators: Internal consistency or no necessary relationship?

Conclusions

The preceding discussion demonstrates the importance of having an explicit measurement model before analyzing measures. No blanket statement about whether indicators should correlate is valid until we know what type of indicators they are. If they are effect-indicators with “well-behaved” errors that are positive measures of a single latent variable, then the internal-consistency view is appropriate and the indicators should correlate positively. If cause-indicators are used, then the NNR (no necessary relationship) view is correct; indicator intercorrelations may be positive, negative, or zero. Finally, in general MIMIC models, cause-indicators have NNR while effect-indicators should be positively related under the assumptions of the model; the relation between a cause-indicator and an effect-indicator, however, may be of any type.
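
To make the contrast concrete, consider the following simulation sketch (not part of the original analysis; the variable names, loadings, and error variances are illustrative assumptions). Effect-indicators of a single latent variable must correlate positively given positive loadings and independent errors, whereas the mutual correlations of cause-indicators are left unconstrained by the measurement model.

```python
# Illustrative sketch only: eta, x1-x3, z1-z3, and all parameter values are
# hypothetical, chosen to mimic the two measurement models discussed above.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Effect-indicator model: the latent variable eta causes each x_i.
# With positive loadings and independent errors, the x_i share common
# variance and their pairwise correlations must be positive.
eta = rng.normal(size=n)
x1 = 0.8 * eta + rng.normal(scale=0.6, size=n)
x2 = 0.7 * eta + rng.normal(scale=0.7, size=n)
x3 = 0.9 * eta + rng.normal(scale=0.4, size=n)
print(np.corrcoef([x1, x2, x3]).round(2))   # all off-diagonal entries positive

# Cause-indicator model: z1, z2, z3 jointly determine the latent variable,
# but the model places no constraint on how they relate to one another.
# Here z1 and z2 are negatively correlated and z3 is independent of both.
cov_z = [[1.0, -0.4, 0.0],
         [-0.4, 1.0, 0.0],
         [0.0,  0.0, 1.0]]
z = rng.multivariate_normal(mean=[0.0, 0.0, 0.0], cov=cov_z, size=n)
latent = z @ np.array([0.5, 0.5, 0.5]) + rng.normal(scale=0.3, size=n)
print(np.corrcoef(z.T).round(2))            # negative and near-zero entries appear
```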

Given the dominance of the internal-consistency perspective, these simple results have serious implications. The empirical practice of factor-analyzing items to determine which measures “hang together” makes little sense if some of the indicators are cause-indicators. Similarly, computing “item-total” correlations (cf. Nunnally, 1978, pp. 279–287) as a means of selecting items for an index is not valid if cause-indicators are present. It seems quite possible that a number of items (or indicators) have not been used in research because of their low or negative correlations with other indicators designed to measure the same concept. If some of these are cause-indicators, researchers may have unknowingly removed valid measures.
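
The following sketch illustrates this point (again hypothetical; x1, x2, z3 and all parameter values are assumed for the example). It computes corrected item-total correlations for two effect-indicators and one cause-indicator of the same concept; the cause-indicator's item-total correlation comes out markedly lower, so a conventional selection screen would tend to discard a valid measure.

```python
# Hypothetical example: x1 and x2 are effect-indicators of a concept eta,
# while z3 is a cause-indicator that (weakly) helps determine eta.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

z3 = rng.normal(size=n)                            # cause-indicator
eta = 0.3 * z3 + rng.normal(scale=0.95, size=n)    # latent variable, partly caused by z3
x1 = 0.8 * eta + rng.normal(scale=0.6, size=n)     # effect-indicators of eta
x2 = 0.8 * eta + rng.normal(scale=0.6, size=n)

items = np.column_stack([x1, x2, z3])
total = items.sum(axis=1)
for name, item in zip(["x1", "x2", "z3"], items.T):
    rest = total - item                            # corrected item-total: item vs. sum of the others
    print(name, round(float(np.corrcoef(item, rest)[0, 1]), 2))
# z3's item-total correlation is far lower than x1's or x2's (roughly 0.3 vs. 0.55
# here), even though it is a valid cause-indicator of the concept.
```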

On the other hand, these findings are not an excuse to include any indicators of interest in a measure. Ideally, the researcher should decide in advance which are effect- and which are cause-indicators. On the basis of the assumed measurement model, the expected associations may be predicted and tested.

In sum, the advice of Blalock seems particularly appropriate: “One should be especially on guard against procedures that supposedly permit one to appraise the ‘validity’ of an indicator on the basis of magnitudes of correlation coefficients, without the benefit of a specific theoretical model” (Namboodiri et al., 1975, p. 600).

References

  • Babbie, E. (1983). The Practice of Social Research. Belmont, CA: Wadsworth.

  • Blalock, H.M. (1964). Causal Inferences in Nonexperimental Research. Chapel Hill, NC: University of North Carolina Press.

  • Blalock, H.M. (1971). “Causal models involving unmeasured variables in stimulus-response situations”, pp. 335–347 in H.M. Blalock, Jr., ed., Causal Models in the Social Sciences. Chicago, IL: Aldine Press.

  • Curtis, R.F. and Jackson, E.F. (1962). “Multiple indicators in survey research”, American Journal of Sociology 68: 195–204.

  • Hauser, R.M. (1973). “Disaggregating a social-psychological model of educational attainment”, pp. 255–284 in A.S. Goldberger and O.D. Duncan, eds., Structural Equation Models in the Social Sciences. New York: Seminar Press.

  • Hauser, R.M. and Goldberger, A.S. (1971). “The treatment of unobservable variables in path analysis”, pp. 81–117 in H.L. Costner, ed., Sociological Methodology 1971. San Francisco, CA: Jossey-Bass.

  • Jöreskog, K.G. and Goldberger, A.S. (1975). “Estimation of a model with multiple indicators and multiple causes of a single latent variable”, Journal of the American Statistical Association 70: 631–639.

  • Miller, A.D. (1971). “Logic of causal analysis: from experimental to nonexperimental designs”, pp. 273–294 in H.M. Blalock, Jr., ed., Causal Models in the Social Sciences. Chicago, IL: Aldine Press.

  • Namboodiri, N.K., Carter, L.F. and Blalock, H.M., Jr. (1975). Applied Multivariate Analysis and Experimental Design. New York: McGraw-Hill.

  • Nunnally, J.C. (1978). Psychometric Theory. New York: McGraw-Hill.

  • Robins, P.K. and West, R.W. (1977). “Measurement errors in the estimation of home value”, Journal of the American Statistical Association 72: 290–294.

  • Zeller, R.A. and Carmines, E.G. (1980). Measurement in the Social Sciences. Cambridge: Cambridge University Press.

Cite this article

Bollen, K.A. Multiple indicators: Internal consistency or no necessary relationship?. Qual Quant 18, 377–385 (1984). https://doi.org/10.1007/BF00227593
