Getting Ideas into Action: Building Networked Improvement Communities in Education

Part of the book series: Frontiers in Sociology and Social Research (FSSR, volume 1)

Abstract

Schools today confront ambitious new societal goals aimed at greater learning for more students. Simultaneously, we are demanding that our educational institutions operate more efficiently. A growing cadre of scholars and policy organizations argues that responding to these challenges requires a fundamental reorganization of the connections between research and practice. This chapter details new ways for scholars and practitioners to engage together in disciplined inquiry organized around specified problems of practice improvement. It describes the social organization of networked communities aimed at learning systematically from practice in order to improve it. Embedded within the day-to-day work of such improvement communities are multiple cycles of design, engineering, and development (DEED) that generate numerous small tests of what works, for whom, under different circumstances. We call this improvement research. The chapter identifies a core set of structuring agents necessary to form such networked improvement communities (NICs). We illustrate these ideas by drawing on early design experiences from an emerging NIC seeking to address the extraordinarily high failure rates in developmental mathematics in community colleges. These “developmental” courses currently operate as a barrier to opportunity, blocking access to both occupational training certification and transfer to 4-year institutions. We posit that research and practice, properly arranged, can reframe the opportunity equation.

Notes

  1.

    Of special note in this regard is the Foundation’s partnership with the Dana Center at the University of Texas. Dana has lead design responsibility for developing the initial instructional kernel for Statway™. This includes pathway outcomes, a modular structure for the curriculum, classroom lessons, and assessments. In addition, the executive director of the Dana Center, Uri Treisman, serves as a Carnegie senior partner. In this latter role, Treisman co-leads the network’s policy outreach and planning-for-scale team.

  2.

    American Association of Community Colleges. Retrieved from http://www2.aacc.nche.edu/research/index.htm on September 11, 2009.

  3.

    The magnitude of community colleges’ collective responsibility nearly doubled in July 2009 when President Obama called for an additional 5 million community college degrees and certificates by 2020. Achieving this scale on a constrained timeframe requires bold innovation. Entitled the American Graduation Initiative, the plan as proposed will invest 12 billion dollars to invigorate community colleges across the USA by funding improvements in physical infrastructure, developing challenge grant mechanisms, and creating a virtual course clearinghouse. Specifically, the President highlights open, online education as a strategy for reaching more nontraditional students, accelerating students’ progress, helping students persist, and improving instructional quality.

  4.

    Bailey et al. (2008) (revised April 2009). These data were obtained from Achieving the Dream campuses and compared to NELS 88 data.

  5.

    It is argued more generally that we need a small number of more structured pathways to success. Statway is Carnegie’s first effort in this regard. The Foundation also will support efforts on a second pathway, called Quantway™, seeking to achieve similar goals for students with somewhat different career aspirations.

  6.

    For a seminal text on this topic, see Wenger (1999).

  7.

    See for example Ruopp et al. (1992) and Schlager et al. (2002).

  8.

    See for example the overview of IHI’s 5 Million Lives campaign: http://www.ihi.org/IHI/Programs/Campaign/Campaign.htm?TabId=1.

  9.

    For example, Robert L. Linn of the National Center for Research on Evaluation is widely quoted as saying of NCLB: “There is a zero percent chance that we will ever reach a 100% target.” (Paley 2007. http://www.washingtonpost.com/wp-dyn/content/article/2007/03/13/AR2007031301781.html). Also see Bryant et al. 2008.

  10.

    This population definition process is now underway. It includes measures from student math and reading placement tests, and English language capabilities.

  11.

    Ideally, the Collaboratory would be able to draw on results from previous institutional improvement efforts in the general domain of developmental mathematics education to set a network-wide goal. Absent a shared empirical discipline, such common data structures do not currently exist in the field. In contrast, were a community to embrace PDSA cycles as a common form of inquiry (see following section), such data might exist in the future. For an example of such a database in K–12, see research by the Consortium on Chicago School Research. Bryk et al. (2010) document rates of learning improvement across more than 400 elementary schools during a six-year period. These results provide an empirical basis for setting improvement standards. Specifically, we know that improvements in annual learning gains of 10 percent or more in reading and 20 percent or more in mathematics are attainable.
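    As a minimal sketch of how such a database might anchor a network-wide goal, the fragment below applies a 10 percent improvement standard to a set of baseline annual learning gains. The figures and site names are invented for illustration; they are not Consortium or Statway data.

      # Hypothetical baseline annual learning gains per site (test-score points).
      baseline_gains = {"college_a": 4.0, "college_b": 5.2, "college_c": 3.1}

      def improvement_target(gain, pct):
          """Gain a site would need to post to improve its current gain by pct."""
          return gain * (1.0 + pct)

      # A 10 percent improvement standard, applied site by site.
      targets = {site: round(improvement_target(g, 0.10), 2)
                 for site, g in baseline_gains.items()}
      print(targets)  # {'college_a': 4.4, 'college_b': 5.72, 'college_c': 3.41}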

  12.

    For an example in healthcare improvement, see Gawande’s (2007) account of improvements in the treatment of cystic fibrosis across a health center network.

  13.

    Our intent here is not to argue for the adequacy of the specific details offered, but simply to illustrate the system character of a problem and how it might be “carved at the joints” to guide subsequent efforts.

  14.

    Given the space limits of the paper, we have constrained the example to a very rudimentary exposition. Our intent is simply to illustrate the tool rather than argue the merits of this particular instantiation.

  15.

    We note that, in use, the adequacy of a driver diagram is subject to empirical test. If all of the primary drivers have been identified, and an organization demonstrates change on each, then measurable improvements on the specified targets should occur. If the latter fails to materialize, some aspect of the driver diagram is underspecified. At base here is an organizing idea in science: measurement and theory development move hand in hand. Theory sets out what we should measure; measurement in turn forces clarification of theory.
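    The testable character of a driver diagram can be made concrete with a small sketch. The diagram, measures, and threshold below are hypothetical, not taken from the Statway work; the point is only the decision logic: if every primary driver moves but the aim does not, the working theory is flagged as underspecified.

      # Hypothetical driver diagram: an aim and the primary drivers thought to produce it.
      driver_diagram = {
          "aim": "course_completion_rate",
          "primary_drivers": ["student_engagement", "instructional_quality", "math_identity"],
      }

      def diagnose(before, after, min_change=0.05):
          """Compare before/after measures on each primary driver and on the aim."""
          drivers_moved = all(
              after[d] - before[d] >= min_change
              for d in driver_diagram["primary_drivers"]
          )
          aim = driver_diagram["aim"]
          aim_moved = after[aim] - before[aim] >= min_change
          if drivers_moved and not aim_moved:
              return "drivers improved but the aim did not: the driver diagram is underspecified"
          if drivers_moved and aim_moved:
              return "theory supported so far: drivers and aim both improved"
          return "not all drivers changed: the test of the theory is not yet informative"

      before = {"student_engagement": 0.50, "instructional_quality": 0.55,
                "math_identity": 0.40, "course_completion_rate": 0.45}
      after = {"student_engagement": 0.60, "instructional_quality": 0.63,
               "math_identity": 0.48, "course_completion_rate": 0.46}
      print(diagnose(before, after))  # drivers improved but the aim did not ...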

  16.

    Of note, the primary mode of inquiry in this applied domain does not typically follow the clinical trials methodology that characterizes the development and marketing of new drug treatments. The more common protocol involves establishing a baseline of results and comparing subsequent performance against this baseline. See, for example, Gawande (2007, 2009).
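    A minimal sketch of this baseline-comparison protocol, using fabricated monthly success rates rather than any real clinical or Statway data: establish the baseline from the pre-change period and then ask whether post-change performance departs from it. Improvement work in practice would typically use run or control charts rather than this deliberately crude rule.

      from statistics import mean, stdev

      # Hypothetical monthly course-success rates.
      baseline_period = [0.42, 0.45, 0.43, 0.44, 0.41, 0.46]  # before the change
      post_change = [0.50, 0.52, 0.49, 0.53]                  # after the change

      baseline_mean = mean(baseline_period)
      baseline_sd = stdev(baseline_period)

      # Crude decision rule: flag improvement when the post-change average sits
      # more than two baseline standard deviations above the baseline mean.
      improved = mean(post_change) > baseline_mean + 2 * baseline_sd
      print(f"baseline {baseline_mean:.3f}, post {mean(post_change):.3f}, improved: {improved}")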

  17.

    For a very readable historical narrative on this account, see Kenney (2008).

  18.

    Throughout this essay we use interchangeably the terms science of improvement and improvement research. One or the other may be more connotative depending upon context and audience.

  19.

    Shavelson and Towne (2002) identify six core principles: pose specific questions that can be investigated empirically; let theory guide the investigation, with generating cumulative knowledge as a goal; use methods that permit direct investigation of the question; provide a coherent and explicit chain of reasoning; replicate findings across a range of times and places, and synthesize and integrate results; and open the research to scrutiny and critique, where objectivity derives from the enforced norms of a professional community. All of these are operationalized across an improvement research network.

  20.

    For a classic exposition of these ideas, see Lewin (1942).

  21.

    The ideas developed in this section apply equally to all participants in a networked improvement community. Depending upon the particular improvement objective, the units of interest might be individual classrooms, study centers within community colleges, departments, or entire colleges. They also apply to commercial firms developing new tools, goods, and services for this marketplace. In the interest of simplicity, we use the term colleges as a placeholder for this larger and more varied domain of participants.

  22.

    http://www.ihi.org/IHI/Topics/Improvement/ImprovementMethods/ImprovementStories/TreatEveryDefectasaTreasure.htm

  23.

    A close parallel to this in healthcare is the idea of complex treatment regimes. For a good example, see the Patients Like Me web site (http://www.patientslikeme.com/) as a knowledge base for chronic care. Patients have individual treatment histories and may be involved in multiple therapies simultaneously. Data to inform “what is right for me” involves more complex information structures than the on-average results derived from randomized control trials of individual therapies.

  24.

    Formally, Cornfield and Tukey (1956) used the term inference, meaning how one might apply the results of an experiment to a larger and different set of cases. Modern causal inference places primacy on the word “cause” and not the idea of “generalization.” The latter in contrast is key to Cornfield and Tukey’s argument.

  25.

    See also the classic distinction between internal and external validity introduced in Campbell and Stanley (1963) and further elaborated in Cook and Campbell (1979).

  26.

    See Weisberg (2010) for an explication of this argument.

  27.

    Note, we focus here on the common core of data that regularly informs the work of NIC participants and provides one basis for cross-network learning. A subnetwork within a NIC can, of course, also engage in specialized individual studies and one-time field trials. In fact, we are organizing as part of Statway an “alpha lab” that would bring an expanding array of applied researchers into this problem-solving research. The initial agenda for the alpha lab will focus on opportunities to deepen students’ mathematics understandings, strengthen motivation for sustained work in the Pathway, and address literacy and language demands in statistics instruction.

  28.

    We note that these also create a basis for more microlevel process targets. In doing so, a network may catalyze the formation of subnetworks working on improving the same microprocesses and aspiring to the same common microtargets. The overall logic of the NIC still applies, but now at a more micro level.

  29.

    See the extensive work on this topic using the Survey of Entering Student Engagement (http://www.ccsse.org/sense/).

  30.

    This idea has been developed in some detail at IHI. See: http://www.ihi.org/IHI/Topics/Improvement/ImprovementMethods/Measures/

  31.

    By way of example, there is great interest today in teacher assessments. Considerable attention is now directed toward developing protocols for rating classroom instruction and toward judging the quality of these protocols by the extent to which they correlate with classroom-level value-added measures of student learning. Predictive validity is viewed as the main criterion for judging instrument quality. One can envision instruments that rate relatively high by this standard but afford little guidance as to what teachers need to learn or do differently to actually effect improvements in student learning. The latter is the informative quality of the assessment: does it signal what we value and want others to actually attend to?

  32.

    This is closely related to the idea of unobtrusive measures described by Webb et al. (1966).

  33.

    Almost four decades ago, Light and Smith (1971) detailed such an accumulating-evidence strategy. While these ideas proved formative for the emergence of meta-analysis (i.e., the quantitative synthesis of research findings), Light and Smith actually cast their arguments in terms of the prospective design of a program of applied research rather than a post hoc search for patterns in previously published results. It is this idea that we return to here.

  34.

    See Shavelson and Towne (2002) on the role of common methods as part of a practice of disciplined inquiry.

  35.

    Since the Statway network begins as an innovation zone, this is the alpha development phase discussed in Bryk and Gomez (2008). To function as an innovation network has implications for selection of the initial charter members of the network, placing a premium on individuals and contexts conducive to such work.

  36.

    To be sure, randomized trials remain the strongest design to implement in improvement research when practical. However, it is important to note that the results of randomized controlled trials are not always definitive. Weisberg (2010) documents that clinical trials can actually lead to biased conclusions when the causal effect of an intervention varies across cases (p. 23; see also Weisberg et al. 2009). Not only the magnitude but also the direction of effects may be erroneous. Since improvement research begins with an assumption of variable effects, this caution is noteworthy.
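    The caution can be made concrete with a small simulation. The subgroup effects, enrollment shares, and noise level below are invented; this is not Weisberg’s analysis, only an illustration of how effect heterogeneity plus unrepresentative enrollment can flip the apparent direction of an effect when trial results are carried to a different population.

      import random

      random.seed(0)

      # Hypothetical subgroup effects: the intervention helps group A, harms group B.
      EFFECT = {"A": +0.30, "B": -0.20}

      def average_effect(share_a, n=100_000):
          """Average observed effect in a sample in which share_a of cases are group A."""
          total = 0.0
          for _ in range(n):
              group = "A" if random.random() < share_a else "B"
              total += EFFECT[group] + random.gauss(0, 0.05)  # true effect plus noise
          return total / n

      trial = average_effect(share_a=0.8)       # trial enrolls mostly group A
      population = average_effect(share_a=0.2)  # target population is mostly group B
      print(f"trial estimate {trial:+.2f}, target population effect {population:+.2f}")
      # Typically prints roughly +0.20 for the trial and -0.10 for the population:
      # same intervention, opposite sign, because effects vary across cases.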

  37.

    For a partial example of this, see Bryk et al. (2010). Under a radical school decentralization in Chicago, significant new resources and authorities were devolved to individual schools, precipitating a natural experiment in school change and improvement. Through systematic longitudinal inquiry, the authors, working with local leaders, developed a working theory of school improvement and a practical measurement system to characterize changes school by school over time, and linked these in turn to value-added estimates of changes in student learning. Both the working theory of practice improvement that evolved here and the specific empirical evidence were taken up in continuous improvement efforts in local schools and systemwide.

  38.

    The conduct of improvement research documented in the Checklist Manifesto provides a concrete example of this. Once Gawande (2009) had established the efficacy of the checklist in his own surgical theater, the team undertook a field study that deliberately introduced the checklist into healthcare settings that varied widely in fiscal resources, in the cultural norms organizing relations among physicians and nurses, and in basic organizational capacities. A key design concern at this point was whether and how this routine could be integrated into practice in organizations quite different from the context of original development. This is a textbook case of the problem of integrative adaptivity.

  39.

    Of note, both of the polar positions laid out earlier (translation research and action research) deflect attention away from this question. Under the translation paradigm, the aim is to standardize the treatment, and evidence on treatment variability is considered as implementation failure. The responsibility for the latter is externalized to the local context. In action research, all of the complexity and dynamism of the context are embraced, but how an innovation might effectively travel to another locale is not generally a core subject of inquiry. In contrast, this is a core inquiry goal for a NIC.

  40.

    See Surowiecki (2004).

  41.

    For an illustrative example, see how, in the Checklist Manifesto, Gawande and colleagues systematically addressed the utility of their prototype checklist by deliberately moving it out to eight very different contexts. The key learning objective in this phase of the work (what we have termed beta phase inquiry) was whether the checklist could be made to work in very different institutional and cultural contexts and, if so, what it would take. This is explicit inquiry about integrative adaptivity.

  42.

    We note that this basic phenomenon continues in the beta phase, when innovations move into new contexts. Inevitably, some accommodations may be needed to integrate the initiative into these new settings. Accomplishing this well entails an analytic practice in which local conditions must intersect with the principled design of the intervention (Coburn and Stein 2010). The knowledge generated at the network level, by synthesizing learning efforts at multiple sites, is key to discerning how, and under what conditions, the intervention can be reliably engaged in other places.

  43.

    This evangelizing role is now being pursued by the Statway program senior partners as they reach out to community colleges, professional associations, policy and foundation leaders, and the academic research community. Institutionally, the Carnegie Foundation seeks to draw on its reputation as a neutral broker and convener, as a resource in forming the connective tissue necessary for the Statway network to take root and grow.

  44.

    It is interesting to note that these same arguments appear in the early history of the quality improvement movement in health services. Kenney’s (2008) account details exchanges in this regard. There are several places in his text where one could easily exchange the words “doctors and hospitals” with “teachers and schools.”

  45.

    There is a precedent in the elective community engagement classification that Carnegie established in 2005. Participation is voluntary, and over 311 colleges have chosen to take part. It involves a detailed, data-based process of application and membership that has proven quite meaningful across the larger community.

  46.

    See Weber (2004), pp. 111–116.

References

  • Bailey, T., D.W. Jeong, and S.W. Cho. 2008. Referral, enrollment, and completion in developmental education sequences in community colleges. CCRC Working Paper No. 15, Teachers College, Columbia University, New York.

  • Berwick, D.M. 2008. The science of improvement. The Journal of the American Medical Association 299(10): 1182–1184.

  • Boudett, K.P., E.A. City, and R.J. Murnane (eds.). 2005. Data wise: A step-by-step guide to using assessment results to improve teaching and learning. Cambridge: Harvard Education Press.

  • Bryant, M.J., K.A. Hammond, K.M. Bocian, M.F. Rettig, C.A. Miller, and R.A. Cardullo. 2008. School performance will fail to meet legislated benchmarks. Science 321(5897): 1781–1782.

  • Bryk, A.S. 2009. Support a science of performance improvement. Phi Delta Kappan 90(8): 597–600.

  • Bryk, A.S., and L.M. Gomez. 2008. Ruminations on reinventing an R&D capacity for educational improvement. In The future of educational entrepreneurship: Possibilities of school reform, ed. F.M. Hess, 181–206. Cambridge: Harvard Education Press.

  • Bryk, A.S., P.B. Sebring, E. Allensworth, S. Luppescu, and J.Q. Easton. 2010. Organizing schools for improvement: Lessons from Chicago. Chicago: University of Chicago Press.

  • Burkhardt, H., and A.H. Schoenfeld. 2003. Improving educational research: Toward a more useful, more influential, and better-funded enterprise. Educational Researcher 32(9): 3–14.

  • Campbell, D.T., and J.C. Stanley. 1963. Experimental and quasi-experimental designs for research on teaching. In Handbook of research on teaching, ed. N.L. Gage, 84. Chicago: Rand McNally.

  • Coburn, C.E., and M.K. Stein (eds.). 2010. Research and practice in education: Building alliances, bridging the divide. Lanham: Rowman and Littlefield Publishers.

  • Committee on a Strategic Education Research Partnership (SERP). 2003. Washington, DC: Strategic Education Research Partnership.

  • Cook, T.D., and D.T. Campbell. 1979. Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally College Publishing Company.

  • Cornfield, J., and J.W. Tukey. 1956. Average values of mean squares in factorials. Annals of Mathematical Statistics 27: 907–949.

  • Cronbach, L.J. 1980. Toward reform of program evaluation. San Francisco: Jossey-Bass.

  • Cullinane, J., and U. Treisman. 2010. Improving developmental mathematics education in community colleges: A prospectus and early progress report on the Statway initiative. NCPR Working Paper, National Center for Postsecondary Research, New York.

  • Deming, W.E. 2000. Out of the crisis. Cambridge: MIT Press.

  • Engelbart, D.C. 1992. Toward high-performance organizations: A strategic role for groupware. In Groupware ’92. San Jose: Morgan Kaufmann Publishers.

  • Engelbart, D.C. 2003. Improving our ability to improve: A call for investment in a new future. IBM Co-Evolution Symposium.

  • Gawande, A. 2007. Better: A surgeon’s notes on performance. New York: Henry Holt.

  • Gawande, A. 2009. The checklist manifesto: How to get things right. New York: Metropolitan Books.

  • Goldsmith, S., and W.D. Eggers. 2004. Governing by network: The new shape of the public sector. Washington, DC: Brookings Institution Press.

  • Gomez, L.M., K. Gomez, and B.R. Gifford. 2010. Educational innovation with technology: A new look at scale and opportunity to learn. In Educational reform: Transforming America’s education through innovation and technology. Whistler: Aspen Institute Congressional Conference Program Papers.

  • Hiebert, J., R. Gallimore, and J.W. Stigler. 2002. A knowledge base for the teaching profession: What would it look like and how can we get one? Educational Researcher 31(5): 3–15.

  • Juran, J.M. 1962. Quality control handbook. New York: McGraw-Hill.

  • Kelly, G.J. 2006. Epistemology and educational research. In The handbook of complementary methods in educational research, eds. J.L. Green, G. Camilli, and P.B. Elmore, 33–56. Mahwah: Erlbaum.

  • Kenney, C. 2008. The best practice: How the new quality movement is transforming medicine. New York: Public Affairs.

  • Langley, G.J., R.D. Moen, K.M. Nolan, T.W. Nolan, C.L. Norman, and L.P. Provost. 1996. The improvement guide: A practical approach to enhancing organizational performance. San Francisco: Jossey-Bass.

  • Lewin, K. 1942. Field theory of learning. Yearbook of the National Society for the Study of Education 41: 215–242.

  • Light, R.J., and P.V. Smith. 1971. Accumulating evidence: Procedures for resolving contradictions among different research studies. Harvard Educational Review 41: 429–471.

  • National Academy of Education. 1999. Recommendations regarding research priorities: An advisory report to the National Education Research Policy and Priorities Board. New York: NAE.

  • Norman, D. 1988. The design of everyday things. New York: Currency Doubleday.

  • Paley, A.R. 2007. ‘No Child’ target is called out of reach. The Washington Post, March 14. Retrieved 1 September 2010 (http://www.washingtonpost.com/wp-dyn/content/article/2007/03/13/AR2007031301781.html).

  • Podolny, J.M., and K.L. Page. 1998. Network forms of organization. Annual Review of Sociology 24: 54–76.

  • Powell, W.W. 1990. Neither market nor hierarchy: Network forms of organization. Research in Organizational Behavior 12: 295–336.

  • Ruopp, R.G., S. Gal, S. Drayton, and B. Pfister. 1992. Labnet: Toward a community of practice. Hillsdale: Lawrence Erlbaum Associates.

  • Schaller, R.R. 2004. Technological innovation in the semiconductor industry: A case study of the international technology roadmap for semiconductors (ITRS). School of Public Policy. Fairfax: George Mason University.

  • Schlager, M., J. Fusco, and P. Schank. 2002. Evolution of an on-line education community of practice. In Building virtual communities: Learning and change in cyberspace, eds. K.A. Renninger and W. Shumar. New York: Cambridge University Press.

  • Shavelson, R.J., and L. Towne (eds.). 2002. Scientific research in education. Washington, DC: National Academy Press.

  • Shirky, C. 2008. Here comes everybody: The power of organizing without organizations. New York: Penguin Press.

  • Surowiecki, J. 2004. The wisdom of crowds. New York: Anchor Books.

  • von Hippel, E. 2005. Democratizing innovation. Cambridge: The MIT Press.

  • Webb, E.J., D.T. Campbell, R.D. Schwartz, and L. Sechrest. 1966. Unobtrusive measures: Nonreactive research in the social sciences. Chicago: Rand McNally.

  • Weber, S. 2004. The success of open source. Cambridge: Harvard University Press.

  • Weisberg, H.I. 2010. Bias and causation: Models and judgment for valid comparisons. Hoboken: Wiley.

  • Weisberg, H.I., V.C. Hayden, and V.P. Pontes. 2009. Selection criteria and generalizability within the counterfactual framework: Explaining the paradox of antidepressant-induced suicide. Clinical Trials 6(2): 109–118.

  • Wenger, E. 1999. Communities of practice: Learning, meaning and identity. Cambridge: Cambridge University Press.

© 2011 Springer Science+Business Media B.V.

Cite this chapter

Bryk, A.S., Gomez, L.M., Grunow, A. (2011). Getting Ideas into Action: Building Networked Improvement Communities in Education. In: Hallinan, M. (ed.) Frontiers in Sociology of Education. Frontiers in Sociology and Social Research, vol 1. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-1576-9_7
