
Intelligence

Volume 43, March–April 2014, Pages 52-64

The international cognitive ability resource: Development and initial validation of a public-domain measure

https://doi.org/10.1016/j.intell.2014.01.004

Highlights

  • Structural analyses of the ICAR items demonstrated high general factor saturation.

  • Primary factor loadings were consistent across items of each type.

  • Corrected correlations with the Shipley-2 were above 0.8.

  • Corrected correlations with self-reported achievement test scores were about 0.45.

  • Group discriminative validity by college major was high (~ 0.8) for the SAT and GRE.

Abstract

For all of its versatility and sophistication, the extant toolkit of cognitive ability measures lacks a public-domain method for large-scale, remote data collection. While the lack of copyright protection for such a measure poses a theoretical threat to test validity, the effective magnitude of this threat is unknown and can be offset by the use of modern test-development techniques. To the extent that validity can be maintained, the benefits of a public-domain resource are considerable for researchers, including: cost savings; greater control over test content; and the potential for more nuanced understanding of the correlational structure between constructs. The International Cognitive Ability Resource was developed to evaluate the prospects for such a public-domain measure and the psychometric properties of the first four item types were evaluated based on administrations to both an offline university sample and a large online sample. Concurrent and discriminative validity analyses suggest that the public-domain status of these item types did not compromise their validity despite administration to 97,000 participants. Further development and validation of extant and additional item types are recommended.

Introduction

The domain of cognitive ability assessment is now populated with dozens, possibly hundreds, of proprietary measures (Camara et al., 2000; Carroll, 1993; Cattell, 1943; Eliot and Smith, 1983; Goldstein and Beers, 2004; Murphy et al., 2011). While many of these are no longer maintained or administered, the variety of tests in active use remains quite broad, providing those who want to assess cognitive abilities with a large menu of options. Despite this diversity, however, assessment challenges persist for researchers attempting to evaluate the structure and correlates of cognitive ability. We argue that these challenges can be addressed through the use of well-established test-development techniques, and we report on the development and validation of an item pool that demonstrates the utility of a public-domain measure of cognitive ability for basic intelligence research. We conclude by urging other researchers to contribute to the ongoing development, aggregation and maintenance of many more item types as part of a broader public-domain tool, the International Cognitive Ability Resource ("ICAR").

Section snippets

The case for a public domain measure

To be clear, the science of intelligence has historically been well-served by commercial measures. Royalty income streams (or their prospect) have encouraged the development of testing "products" and have funded their ongoing production, distribution and maintenance for decades. These assessments are broadly marketed for use in educational, counseling and industrial contexts, and their administration and interpretation are a core service for many applied psychologists. Their proprietary nature …

Study 1

We investigated the structural properties of the initial version of the International Cognitive Ability Resource based on internet administration to a large international sample. This investigation was based on 60 items representing four item types developed in various stages since 2006 (and does not include deprecated items or item types currently under development). We hypothesized that the factor structure would demonstrate four distinct but highly correlated factors, with each type of item …
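The "high general factor saturation" reported in the highlights is typically quantified with omega-hierarchical, the proportion of total test variance attributable to the general factor. The sketch below assumes a bifactor structure with four group factors (one per item type); all loadings are hypothetical illustrations, not the paper's estimates:

```python
# Omega-hierarchical for a hypothetical bifactor model:
# 16 items, each loading on a general factor and on one of
# four group factors (standing in for the four ICAR item types).
# NOTE: all loadings are invented for illustration only.

g_loadings = [0.6] * 16                          # general-factor loadings
group_loadings = [[0.4] * 4 for _ in range(4)]   # four group factors, four items each

# Variance due to the general factor: (sum of general loadings)^2
var_general = sum(g_loadings) ** 2

# Variance due to the group factors: sum over factors of (sum of loadings)^2
var_groups = sum(sum(f) ** 2 for f in group_loadings)

# Unique (error) variance per item: 1 - g^2 - group^2, summed over items
uniqueness = sum(1 - 0.6 ** 2 - 0.4 ** 2 for _ in range(16))

var_total = var_general + var_groups + uniqueness
omega_h = var_general / var_total
print(round(omega_h, 2))  # high saturation under these hypothetical loadings
```

Under these made-up loadings the general factor accounts for roughly 84% of total variance, which is the kind of pattern the highlight describes.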

Study 2

Following the evidence for reliable variability in ICAR scores in Study 1, the goal of Study 2 was to evaluate the validity of these scores when using the same administration procedures. While online administration protocols precluded validation against copyrighted commercial measures, it was possible to evaluate the extent to which ICAR scores correlated with (1) self-reported achievement test scores and (2) published rank orderings of mean scores by university major. In the latter case, …
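Comparing ICAR scores against published rank orderings of mean scores by university major amounts to a rank-order (Spearman) correlation between the two sets of major-level means. The sketch below uses invented major-level means, not the study's data, and a simple no-ties ranking:

```python
# Spearman rank correlation between hypothetical mean ICAR scores
# by college major and hypothetical published mean SAT scores for
# the same majors. All numbers are invented for illustration;
# no tie handling is attempted.

def ranks(values):
    """Assign rank 1 to the smallest value, rank n to the largest."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = float(rank)
    return r

def spearman(xs, ys):
    """Pearson correlation computed on the ranks (valid when there are no ties)."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

icar_means = [16.2, 15.1, 14.0, 13.2, 12.5, 11.8]  # hypothetical, one mean per major
sat_means = [1280, 1210, 1230, 1150, 1090, 1120]   # hypothetical published means

rho = spearman(icar_means, sat_means)
print(round(rho, 2))  # the two rank orderings largely agree
```

A high rho indicates that majors ranked highly on one measure tend to be ranked highly on the other, which is the sense in which group discriminative validity is assessed here.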

Study 3

The goal of the third study was to evaluate the construct validity of the ICAR items against a commercial measure of cognitive ability. Due to the copyrights associated with commercial measures, these analyses were based on administration to an offline sample of university students rather than an online administration.
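The "corrected" correlations reported in the highlights refer to the standard Spearman correction for attenuation, which divides the observed correlation by the square root of the product of the two scales' reliabilities. The observed correlation and reliabilities below are hypothetical stand-ins, not the paper's estimates:

```python
# Spearman's correction for attenuation:
#   r_corrected = r_observed / sqrt(rel_x * rel_y)
# The observed correlation and both reliabilities are invented
# for illustration only.

def disattenuate(r_observed, rel_x, rel_y):
    """Estimate the correlation between true scores, given the
    observed correlation and each scale's reliability."""
    return r_observed / (rel_x * rel_y) ** 0.5

# e.g., a hypothetical observed ICAR-Shipley correlation of .66
# with hypothetical reliabilities of .81 and .84:
r_corrected = disattenuate(0.66, 0.81, 0.84)
print(round(r_corrected, 2))  # ≈ 0.80
```

Because reliabilities are below 1, the corrected value is always at least as large as the observed correlation; it estimates how strongly the two constructs would correlate if both were measured without error.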

General discussion

Reliability and validity data from these studies suggest that a public-domain measure of cognitive ability is a viable option. More specifically, they demonstrate that brief, un-proctored, and untimed administrations of items from the International Cognitive Ability Resource are moderately-to-strongly correlated with measures of cognitive ability and achievement. While this method of administration is inherently less precise and exhaustive than many traditional assessment methods, it offers …

Conclusion

Public-domain measures of cognitive ability have considerable potential. We propose that the International Cognitive Ability Resource provides a viable foundation for collaborators who are interested in contributing extant or newly-developed public-domain tools. To the extent that these tools are well-suited for online administration, they will be particularly useful for large-scale cognitive ability assessment and/or use in research contexts beyond the confines of traditional testing …

References (55)

  • J.C. Cassady, Self-reported GPA and SAT: A methodological note, Practical Assessment, Research & Evaluation (2001)
  • R.B. Cattell, The measurement of adult intelligence, Psychological Bulletin (1943)
  • S.J. Clark et al., Evaluation of Heckman selection model method for correcting estimates of HIV prevalence from sample surveys
  • J.S. Cole et al., Accuracy of self-reported SAT and ACT test scores: Implications for research, Research in Higher Education (2009)
  • College Board, 2012 college-bound seniors total group profile report (2012)
  • G. Cuddeback et al., Detecting and statistically correcting sample selection bias, Journal of Social Service Research (2004)
  • I. Dennis et al., Approaches to modeling item-generative tests
  • Educational Testing Service, Table of GRE scores by intended graduate major field (2010)
  • R.B. Ekstrom et al., Manual for kit of factor-referenced cognitive tests (1976)
  • J. Eliot et al., An international directory of spatial tests (1983)
  • S.E. Embretson, The new rules of measurement, Psychological Assessment (1996)
  • T.G. Field, Standardized tests: Recouping development costs and preserving integrity
  • M.C. Frey et al., Scholastic assessment or g? The relationship between the scholastic assessment test and general cognitive ability, Psychological Science (2004)
  • V. Frucot et al., Further research on the accuracy of students' self-reported grade point averages, SAT scores, and course grades, Perceptual and Motor Skills (1994)
  • L.R. Goldberg, A broad-bandwidth, public-domain, personality inventory measuring the lower-level facets of several five-factor models
  • L.R. Goldberg, International Personality Item Pool: A scientific collaboratory for the development of advanced measures of personality traits and other individual differences

¹ With thanks to Melissa Mitchell.
