Approaches to validating child care quality rating and improvement systems (QRIS): Results from two states with similar QRIS type designs

https://doi.org/10.1016/j.ecresq.2014.04.005

Highlights

  • Definition of a set of approaches to validate QRIS program standards.

  • Description of methods used in two states to implement a QRIS validation study.

  • Presentation of results of state QRIS validation studies and findings in support of the application of the validation approaches.

  • Validation approaches found to be useful and effective in designing and implementing QRIS validation studies.

  • Recommendations for policy-makers and program administrators seeking to design validation studies for their own state QRISs.

Abstract

In recent years, child care quality rating and improvement systems (QRISs) have become an increasingly popular policy tool to improve quality in early childhood education and care (ECEC) settings and have been adopted in many localities and states. The QRIS proposition is that children who attend higher-quality child care programs are more likely to benefit in terms of outcomes such as school readiness. However, in order to demonstrate this linkage, QRIS standards and ratings must function as intended, i.e., be valid. This paper presents a framework for validating child care quality improvement standards and processes, along with examples from recent QRIS validation studies in two states. The state examples provide useful data about the strengths and limitations of these validation approaches. We discuss the implications of applying these approaches and provide recommendations to researchers, policy-makers, and program leaders who implement QRIS validation studies.

Introduction

In recent years, child care quality rating and improvement systems (QRISs) have become an increasingly popular policy tool to improve quality in early childhood education and care (ECEC) settings and have been adopted in many localities and states. The QRIS National Learning Network reports that 40 statewide QRISs have launched or piloted, including the District of Columbia (QRIS National Learning Network, 2014). The immediate goal of a QRIS is to raise the quality of care in early learning settings. Existing research suggests that care in higher-quality settings will improve child functioning, including school readiness (Burchinal et al., 2009, Burger, 2010, Howes et al., 2008), especially for children from lower-income families. QRIS logic models that guide these large-scale interventions focus on improving various dimensions of ECEC quality, with the ultimate goal of improving system outcomes, namely: child care program quality, training and technical assistance for child care providers, information and support for families, and, in turn, children's cognitive, language, social, emotional, and physical development.

The perceived need for QRIS has grown out of documented gaps in quality in existing ECEC programs, especially those serving children from lower-income families (Fuller et al., 2004, NICHD ECCRN, 2000) and the inability of the current ECEC system to promote uniformly high quality (Cochran, 2007). QRISs produce program-level quality ratings based on multi-component assessments designed to make ECEC quality transparent and easily understood by parents and other stakeholders. Most also include feedback, technical assistance, and incentives to both motivate and support providers’ efforts toward quality improvement (Tout et al., 2010). To make program quality transparent, QRISs typically rely on a multi-tiered rating system with one to five levels of program quality. Therefore, it is important that these ratings show evidence of validity, so that higher-quality programs are rated higher, and lower-quality programs are rated lower.

Recent research has documented the importance of both specificity and thresholds when testing hypotheses about child care quality impacts on children's developmental outcomes (Burchinal et al., 2000, Burchinal et al., 2010, Howes et al., 1992, NICHD ECCRN, 2000, NICHD ECCRN, 2002). However, common global measures of classroom quality such as the Early Childhood Environment Rating Scale-Revised (ECERS-R; Harms, Clifford, & Cryer, 2005) are not always significantly associated with specific child outcomes (Burchinal, Kainz, & Cai, 2011). This may be because these global quality scales do not focus enough on the particular child care quality processes most likely to bring about improved child outcomes (specificity) or they do not provide guidance for the level of quality required to produce improved child outcomes (thresholds). As states implement QRISs, they are using observational measures such as the ECERS-R, and they may combine these with other quality measures such as the Classroom Assessment Scoring System (CLASS; Pianta, La Paro, & Hamre, 2008) or locally specified quality indicators. Because QRIS quality standards are often complex, including many components and measures at several quality levels, and because they vary from state to state, it is especially important for states to carefully validate their quality rating systems and match measures specifically to the stated outcome goals of the QRIS. For example, if a particular QRIS places more emphasis on the health aspects of children's development, then the ECERS-R and CLASS would not be appropriate tools, but a tool measuring child care health indicators, such as the National Health and Safety Tool being developed by the California Child Care Health Program (Alkon, 2013), would be more appropriate.

Validity data can also enable researchers to test conclusions about whether the quality indicators embedded in QRIS standards lead to adequate quality assessment and whether the methods used to assign quality ratings are working as intended (Cizek, 2007). This paper defines operationally the concept of QRIS validity, presents four general approaches to assessing validity in the context of large-scale QRISs, and critically examines the efforts of two states, Maine and Indiana, to assess the validity of recently implemented QRISs using these approaches.

Validation of a QRIS is a developmental and multi-step process that assesses the degree to which design decisions about program quality standards and measurement strategies are resulting in accurate and meaningful quality ratings. Validation of a QRIS provides designers, administrators, and stakeholders with crucial data about how well the system is functioning. A carefully designed plan for ongoing QRIS validation creates confidence in the system and a climate that supports continuous quality improvement at both the child care provider and system levels (Zellman & Fiene, 2012).

To date, QRIS validation research efforts have been limited, for a number of reasons. First, validation is complex and involves a range of activities, which should include validating standards, measures, and rating protocols. Second, there has been little information available in the field that clarifies the importance and purpose of QRIS validation or identifies recommended strategies. Third, child care quality advocates and policy makers have been extremely busy designing and implementing these statewide systems, often with limited resources. Given these constraints, validation may seem like an abstract luxury that can wait until later. Further, in states with more mature QRISs, there may be some reluctance among stakeholders to assess the validity of an established and accepted quality improvement system. In newer state systems, policymakers may question the need for validation, given arguments recently offered in support of establishing a QRIS (Zellman and Fiene, 2012, Zellman et al., 2011). Yet early and ongoing validation research is essential to the long-term success of any system.

One challenge is that QRIS validation cannot be determined by a single study. Instead, validation should be viewed as an iterative process with several equally important goals: refining the QRIS quality standards and ratings, improving system functioning, and increasing the credibility and value of rating outcomes and the QRIS as a whole. A carefully designed validation plan can promote the accumulation of evidence over time that will provide a sound theoretical and empirical basis for the QRIS (AERA/APA/NCME, 1999, Kane, 2001, Zellman and Fiene, 2012). Ongoing validation activities, carried out in tandem with QRIS monitoring activities (those that examine ongoing implementation processes) and evaluation activities (those that examine specific outcomes), can help a QRIS improve throughout its development, implementation, and maturation (Lugo-Gil et al., 2011, Zellman et al., 2011).

QRIS validation research may produce three important benefits. First, validation evidence can promote increased support for the system among parents, ECEC providers, and other key stakeholders. Ratings that mirror the experiences of parents and providers can build trust and increase the overall credibility of the system. Second, a system that is measuring quality accurately and specifically should be better able to target limited quality improvement resources to programs and program elements most in need of improvement. This should result in more targeted and effective supports for programs striving to offer higher-quality services. Third, validation evidence can be used to improve the efficiency of the rating process. If a QRIS is expending resources to measure a component of quality that is not making a unique contribution to a summary quality rating, is not measuring quality accurately, or is not contributing to desired program outcomes, that component can be removed or revised. For example, measures that vary little across providers whose quality varies substantially in other ways make little or no contribution to overall quality ratings (Zellman & Fiene, 2012).
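As a minimal sketch of this efficiency check, a validation analysis might flag rating components that show almost no variation across providers. The component names and scores below are entirely hypothetical, used only to illustrate the screening logic:

```python
from statistics import pvariance

# Hypothetical component scores for six rated programs (illustration only).
component_scores = {
    "learning_environment": [2, 4, 1, 5, 3, 4],
    "staff_qualifications": [3, 4, 2, 5, 1, 4],
    "posted_license":       [1, 1, 1, 1, 1, 1],  # identical across providers
}

# Flag components whose variance is near zero: they cannot differentiate
# providers and so contribute little to a summary quality rating.
low_variance = [name for name, scores in component_scores.items()
                if pvariance(scores) < 0.1]
print(low_variance)  # ['posted_license']
```

Components flagged this way would be candidates for revision or removal, subject to the other considerations (accuracy, contribution to outcomes) noted above.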

A comprehensive QRIS validation plan includes multiple studies that rely on different sources of information and ask different but related questions. We suggest QRIS validation research be organized around four complementary approaches: key quality concepts; quality measurement; ratings outputs; and links to child outcomes (Zellman & Fiene, 2012). Summaries of these approaches are provided in Table 1, which includes the purpose of each validation approach, the types of research that can be undertaken, the questions that are asked, and some limitations of each approach. The four approaches are also elaborated later in the paper, as we summarize results of validation research in Indiana and Maine.

In reviewing the table, and throughout this paper, we use three key QRIS terms: component, standard, and indicator. The term ‘quality component’ refers to broad quality categories used in QRIS (such as staff qualifications, family engagement, or learning environment). A ‘quality standard’ is defined as a specific feature of quality, such as specialized training in the use of developmentally appropriate curriculum or developmental assessment training within the staff qualifications component. Each quality component comprises a set of quality standards. ‘Quality indicators’ are the specific metrics used for each quality standard. A given quality standard may have one or more quality indicators. An indicator related to the curriculum/assessment staff training standard may be, for example, “At least 50% of teaching staff have completed the two-course statewide training session on developmentally-appropriate curriculum.”
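To make the nesting of these three terms concrete, the hierarchy can be sketched as a simple data structure. The component and standard names follow the examples above; the indicator function and the program record it inspects are hypothetical:

```python
def staff_training_indicator(program):
    """Example quality indicator from the text: at least 50% of teaching
    staff have completed the statewide curriculum training."""
    trained = program["teachers_with_curriculum_training"]
    total = program["total_teaching_staff"]
    return trained / total >= 0.50

# A component contains standards; a standard contains one or more indicators.
quality_components = {
    "staff_qualifications": {                 # quality component
        "curriculum_training": [              # quality standard
            staff_training_indicator,         # quality indicator(s)
        ],
    },
}

def standard_met(program, indicators):
    """A standard is met when every one of its indicators is met."""
    return all(indicator(program) for indicator in indicators)

program = {"teachers_with_curriculum_training": 3, "total_teaching_staff": 5}
print(standard_met(
    program,
    quality_components["staff_qualifications"]["curriculum_training"],
))  # True: 3 of 5 teachers (60%) meet the 50% threshold
```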

This section will describe efforts at QRIS validation in two states in order to explore current validation efforts using these four approaches and to identify the successes and challenges experienced in these early QRIS validation studies. In Indiana and Maine, the QRIS designs are similar, but the states’ child care contexts and the specific QRIS quality components, standards, and rating processes employed differ somewhat. Both states launched their QRIS statewide in 2008, and both systems have four quality tiers, referred to as “levels” in Indiana and “steps” in Maine, organized into a “building block” framework, meaning that child care providers must enter at the lowest level and meet all quality standards and indicators at each level in order to advance to the next higher level. Our focus on these two states is intended to illustrate the application of these four approaches to operationalizing validation in a QRIS. While the QRIS evaluations in Maine and Indiana have produced other kinds of information disseminated to policy makers in these states and publications for other audiences, this paper focuses solely on these four validation approaches.
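The “building block” rating logic described above can be sketched as follows. The indicators here are entirely hypothetical, not either state's actual standards; the point is only that an unmet level blocks advancement, regardless of higher-level indicators:

```python
def building_block_rating(program, levels):
    """Return the highest consecutive level the program reaches, starting
    from level 1. `levels` maps level number -> list of indicator functions;
    a program must meet every indicator at each level to advance."""
    rating = 0
    for level in sorted(levels):
        if all(indicator(program) for indicator in levels[level]):
            rating = level
        else:
            break  # building block: an unmet level stops advancement
    return rating

# Hypothetical indicators for illustration only.
levels = {
    1: [lambda p: p["licensed"]],
    2: [lambda p: p["staff_trained_pct"] >= 0.50],
    3: [lambda p: p["uses_curriculum"]],
    4: [lambda p: p["accredited"]],
}

program = {"licensed": True, "staff_trained_pct": 0.60,
           "uses_curriculum": False, "accredited": True}
print(building_block_rating(program, levels))  # 2: unmet level 3 blocks level 4
```

Note that the program above is accredited (a level 4 indicator) but still rates at level 2, which is the defining feature of a building-block design as opposed to a points-based one.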

Both states partnered with university-based researchers to conduct validation research, after piloting aspects of their QRIS design. However, there are also key differences between these two states. For example, the Indiana QRIS standards were developed based on a local community-based model that was then modified by a state stakeholder committee for statewide expansion. The Maine quality standards were developed to align with program-type-specific national accreditation standards. The Maine standards were also vetted through review and comment by many stakeholders, and technical assistance was provided by university researchers based on reviews of the scientific literature. Maine QRIS ratings are generated by provider self-report, then verified by state agency staff, while Indiana employs independent raters who directly assess the standards by visiting child care settings. Voluntary provider participation rates are higher among state-licensed providers in Indiana. However, Indiana also has significant numbers of license-exempt child care providers, whereas license exemption is not a prominent feature of the Maine child care system. The key features of each state QRIS are summarized in Table 2. These two states provide useful examples, because while the state child care contexts are different, they each used strategies contained in the four validation approaches discussed above and outlined in Table 1. The successes and limitations of these states’ approaches will inform future validation research on QRIS.


Indiana

The Indiana QRIS is called “Paths to QUALITY™.” The validation research reported here includes a preliminary literature review and an empirical field study including a stratified random sample of 276 child care providers who had voluntarily entered the QRIS during 2008–2009, including 135 classrooms in 95 licensed child care centers, 169 licensed family child care homes, and 14 classrooms in 12 unlicensed registered child care ministry centers. Independent, on-site assessments were completed by

Results

Results of the QRIS validation research in Indiana and Maine are presented in relation to the four approaches to validation recommended by Zellman and Fiene (2012; refer to Table 1).

Limitations of validation study designs

Both of these state studies provide results that describe linear associations among variables. The study designs are limited because the investigators had no control over how the QRIS systems were implemented, which affected enrollment and therefore sample sizes; the selection of measurement strategies was also not solely in the investigators' control. It will be interesting, as additional studies are done and non-linear associations are found, to determine the impact this has

Acknowledgements

We gratefully acknowledge the intellectual contributions and collegiality of members of the INQUIRE research network, supported by the Office of Planning, Research, and Evaluation (OPRE) in the Administration for Children & Families. Many stimulating research meetings, plus the research briefs cited in this article, were supported by INQUIRE and OPRE. Special thanks to Kathryn Tout, who leads INQUIRE, Ivelisse Martinez-Beck, the OPRE project officer for INQUIRE, and the state policy makers and

References (53)

  • D.M. Bryant et al.

    Empirical approaches to strengthening the measurement of quality: Issues in the development and use of quality measures in research and applied settings

  • M.R. Burchinal et al.

    Children's social and cognitive development and child-care quality: Testing for differential associations related to poverty, gender, or ethnicity

    Applied Developmental Science

    (2000)
  • M.R. Burchinal et al.

    Early care and education quality and child outcomes (Research-to-Policy Research-to-Practice Brief: OPRE Research-to-Policy Brief #1)

    (2009)
  • M.R. Burchinal et al.

    How well do our measures of quality predict child outcomes? A meta-analysis and coordinated analysis of data from large scale studies of early childhood settings

  • G.J. Cizek

    Introduction to validity

  • R. Clifford

    Structure and stability in the early childhood environment rating scale

  • M. Cochran

    Finding our way: The future of American early care and education

    (2007)
  • L.M. Dunn et al.

    Peabody picture vocabulary test

    (1997)
  • J. Elicker et al.

    Paths to QUALITY™: A child care quality rating system for Indiana. What is its scientific basis?

    (2007)
  • J. Elicker et al.

    Evaluation of quality rating and improvement systems in early childhood programs and school age care: Measuring children's development (Research to Policy, Research to Practice Brief, OPRE 2011-11c)

    (2011)
  • J. Elicker et al.

    Evaluation of “Paths to QUALITY” Indiana's child care quality rating and improvement system

    (2011)
  • J. Elicker et al.

    Paths to QUALITY: Collaborative evaluation of a new child care quality rating and improvement system

    Early Education and Development

    (2013)
  • A. Emlen et al.

    A packet of scales for measuring quality from a parent's view

    (2000)
  • R. Fiene

    Differential monitoring logic model and algorithm (DMLMA): A new early childhood program quality indicator model (ECPQIM) for early care and education regulatory agencies

    (2013)
  • R. Fiene et al.

    Instrument based program monitoring and key indicators in child care

    Child Care Quarterly

    (1985)
  • B. Fuller et al.

    Child care in poor communities: Early learning effects of type, quality, and stability

    Child Development

    (2004)
1 Professor Emeritus at Pennsylvania State University.