Abstract
Failure to account for common method effects in empirical research can have several negative consequences for the interpretation of research outcomes: it can bias estimates of the validity and reliability of the measures employed, as well as estimates of the relationships between the constructs of interest, which in turn can distort hypothesis testing. When results are affected by substantial common method effects, they become very difficult to interpret. The literature offers several preventive, detective, and corrective techniques for assuaging concerns that common method effects underlie observed results. Among these, the most popular has been Harman's Single-Factor Test. Although researchers have argued against its effectiveness in the past, the technique remains very popular in the discipline, and there is a dearth of empirical evidence on its actual effectiveness, a gap this research seeks to remedy. Our results, based on extensive Monte Carlo simulations, indicate that the approach shows limited effectiveness in detecting the presence of common method effects and may thus provide researchers with a false sense of security. We therefore argue against the continued use of the technique and provide evidence to support our position.
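For readers unfamiliar with the procedure under scrutiny, the following is a minimal sketch of how Harman's Single-Factor Test is conventionally applied (this is an illustrative Python reconstruction, not the authors' simulation code, and the 50% threshold is the commonly cited heuristic rather than a value taken from the paper). The test inspects an unrotated single-factor solution; here that is approximated by the share of total variance captured by the first principal component of the item correlation matrix, flagging common method variance only when one factor dominates.

```python
import numpy as np

def harmans_single_factor_test(data, threshold=0.5):
    """Approximate Harman's Single-Factor Test.

    Returns the proportion of total variance explained by the first
    (unrotated) factor of the item correlation matrix, and whether it
    exceeds the conventional majority-variance threshold.
    """
    corr = np.corrcoef(data, rowvar=False)    # item correlation matrix
    eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, descending
    first_factor_share = eigvals[0] / eigvals.sum()
    return first_factor_share, bool(first_factor_share > threshold)

# Illustration: six items loading on two uncorrelated traits, so no
# single factor should dominate and the test should not flag CMV.
rng = np.random.default_rng(0)
n = 500
trait1 = rng.normal(size=(n, 1))
trait2 = rng.normal(size=(n, 1))
items = np.hstack([trait1 + 0.5 * rng.normal(size=(n, 3)),
                   trait2 + 0.5 * rng.normal(size=(n, 3))])
share, flagged = harmans_single_factor_test(items)
```

The paper's critique targets exactly this kind of heuristic: passing the test (as in the example above) provides little assurance that method effects are absent.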