Perhaps the history of the errors of mankind, all things considered, is more valuable and interesting than that of their discoveries. Truth is uniform, and narrow […] But error is endlessly diversified. (Benjamin Franklin)
We all have our own closely held archetypes of a great scholar or scientist—someone who can drill down to the core of a problem, who can harness insight to solve problems in new and fantastic ways, or who can tackle a phenomenon with a ground-breaking approach. We follow the work of our favourite scholars, attend conferences to hear new and interesting ideas and innovations, and engage in good conversations about the meaning of new studies, theories, frameworks, and educational movements. We tend to see our collective successes—the papers published, the innovative new solutions, the implementation of new approaches. We have a carefully crafted, one-sided view of what good and successful scholarship looks like [1].
As a scholarly community we are also aware of how challenging it is to secure funding (success rates in my context range from 16–47%) [2], and of the low percentage of papers that see publication in a given journal (acceptance rates for top Health Professions Education (HPE) journals hover around 13–30%) [4]. These success rates provide the community with a marker for comparison—if we are successful, we are amongst the few; if we aren’t, then we are at the very least in good company. Those metrics are aggregate measures of success for a heterogeneous community of scholars, and a project may not receive funding or be accepted for publication for a variety of reasons [4]—reasons that range from how well the project ‘fits’ the mandate of the funding agency or journal, to issues of relevance, articulation, design, or planned interpretation. In ‘pitching’ or describing a project we may have failed to identify the gap in the literature [12], failed to clearly articulate our objectives, failed to describe a coherent theoretical or conceptual framework [13], failed to describe our approach to data collection or analysis effectively, or failed to articulate the impact of the piece of scholarship on the HPE community. These failures in the scholarly process are not the intended focus of this piece. Nor is the focus on the value of failure, or of clinical errors, as an educational strategy—that body of work is well described elsewhere [15]. Here, I would like to focus explicitly on the well-situated, well-designed, well-considered, and well-executed scholarly project that fails—the ‘good study’ that simply fails to generate the expected learning outcomes, fails to support an assumption or theory, or generates entirely surprising and often puzzling results. In other words, I wish to explore high-quality research endeavours that have failed to generate the findings, knowledge, or impact that were anticipated, and to argue that the failure of a sound scholarly project is actually a critical component of successful scholarship in HPE.
Failure can be defined as a lack of success, but also as a ‘nonoccurrence of something due, required or expected’ [16]. The failure of a study, project, or line of inquiry to generate expected findings has had a critical role in every domain of science or scholarship [1]. Without failure there can be no discovery, no new theories, no new revolutions in thought [17]. Falsification of hypotheses and theories has been a hallmark of current scientific practice [18], and these practices rest on discovering when an idea fails. The accumulation of anomalous or unexpected findings is a marker of the failure of a current way of thinking about a phenomenon, and Kuhn suggests that this often marks the beginning of a paradigm shift [17]. Failure is one important component of scholarship, and is necessary to help our thinking progress. But, as a scientific community writ large, we have done a poor job of communicating the frequency of failure, the benefits of failure, and the growth that is possible as a direct result of failure [1]. We have failed to admit to the general public how frequently failure is a necessary part of scholarship, instead choosing to promote our carefully curated programs of work as intentionally logical, linear, and failure-free [1]. By quietly filing away our failures we are collectively missing out on important null or unexpected findings, and hiding or minimizing important opportunities for learning from our trainees, peers, and often ourselves.
Despite the difficulty in discussing or divulging failures, I would like to argue for the value and utility of failure—specifically the purposeful and careful deconstruction of a failed scholarly project—in research and scholarship. In order to facilitate this conversation, I propose a simplified taxonomy of a few different forms of failure possible in research and scholarship.
As a cautionary note, this proposed taxonomy of failure is intended to frame a conversation around failure in research and scholarship. It is not intended to capture all forms of failure. Rather, it is intended to: 1) provide a language for describing the different ways a project can fail, 2) focus how we interpret and examine failed scholarly projects to encourage meaningful learning, and 3) support discussions of the value of failed scholarly projects. In this brief taxonomy I will focus on: Innovation-oriented failure, Discovery-oriented failure, and Serendipitous failure.
Generation of the taxonomy of forms of failure
In thinking about failure and scholarship, the initial inspiration for this taxonomy came from the opportunity to reflect on several failed studies within my own scholarly practice [21], from an interest in the philosophy of science literature, from my reflections on several classical philosophy of science pieces and their applicability to the multidisciplinary field of HPE [17], and from Stuart Firestein’s book ‘Failure: Why Science Is So Successful’ [1]. In this book, the author draws on two quotes—the ones I have used as examples within the taxonomy, from Edison and Einstein—and his work contrasting these two quotes laid the foundation for my thinking about the types of scholarly failure. I draw on some distinctions made by Firestein in his book, and certainly encourage anyone with an interest in failure in science to read it. I originally developed this particular taxonomy for a panel discussion regarding failure in scholarship [23], and have further expanded it here for publication.
I decided to explore whether relevant literature describing failed scholarly projects in HPE could be captured by the taxonomy, in order to explore its applicability to published works. I chose to draw on all papers published to date in the Perspectives on Medical Education ‘failures/surprises’ series [22], as these publications were identified by their authors as a direct result of a failed scholarly project or innovation [22]. I then categorized the publications (n = 13), and have included them throughout as examples of the proposed taxonomy, with a summary available in Tab. 1. This categorization is based on my reading of the published works, and is therefore likely imperfect. My intent is not to impose a personal interpretation on these works, nor to claim that this is a comprehensive summary of all literature describing failed research projects in the HPE literature. Rather, I relied on these self-identified publications on failure in order to explore the applicability of the taxonomy to research and scholarship in HPE.
Tab. 1 Types of scholarly failure, and distribution of articles published in the Perspectives on Medical Education (PME) special series on Failure/surprises (articles available from 2018–October 2019), presented by category of scholarly failure

Type of scholarly failure | Description | Example articles
Innovation-oriented failure | Attempts are made to innovate, and it didn’t work or didn’t work as expected | Worden & Ait-Daoud Tiouririne, 2018 [26]; Gagliardi & Rudd, 2018 [30]
Discovery-oriented failure | A theory or hypothesis is tested with the explicit attempt to establish its generalizability and reach | –
Serendipitous failure | A well-designed project generated unanticipated and unexpected findings | –
Forms of failure
I propose that there are (at minimum) three distinct types of useful failure.
HPE is a field that bridges the deeply theoretical and the entirely pragmatic. Albert describes HPE scholarship on a spectrum from work produced for producers (i.e. researchers and scholars) to work produced for users (i.e. educators and teachers) [37], and has discussed the tension between service-oriented research and discovery-oriented research [38]. While respecting the spectrum of work present in HPE scholarship, the field has been described as centring on the pragmatic goal of ensuring the best education to support the best possible patient care. In other words, ‘The primary goal of medical education is to improve patient care’ [39, p. 6]. With this as a focal lens, a large proportion of scholarship done in HPE is about delivering high-quality health professions education, often in new and innovative ways.
Our first kind of failure is strongly associated with an innovation mindset. Often innovations begin with an idea of how something ‘should be’ or ‘should be done’—a particular way a curriculum should be structured, how content should be delivered, or how inferences or judgments of competence should be made. Attempts are made to change from a status quo to what ‘should be’—attempts are made to innovate and implement a new way of doing something. Sometimes it works, and sometimes it doesn’t.
The quote ‘I have never failed, just found 10,000 ways it doesn’t work’, famously attributed to Thomas Edison, captures the essence of innovation-oriented failure. This kind of failure is systematic, progressive, goal-oriented failure: the slow march towards what you think should be, with careful consideration of why an innovation may have failed. Careful analysis of why an innovation failed to reach its intended goal leads to modification of the process or content of the innovation, often in order to try it again—to try to get closer to how things ‘should be’. This slow, purposeful, attentive failure is key to sound innovation, and was well represented in the literature reviewed, which included attempts to innovate how educational content is delivered [25], the way a learner should be assessed [28] or managed [27], how a program should be evaluated [29], how policies should be set [31], or how a particular phenomenon should be investigated [30]. Innovation-oriented failure occurs when you know where you want to go, but repeated attempts and failures help you figure out how to get there.
When discussing how scientific thought has changed, we often draw on the history of physics—a key example being how we abandoned the notion of the earth as the centre of the solar system. These are classic examples of paradigm shifts [17]—when our ‘typical’ way of understanding something is replaced by a new theory, a new set of assumptions, and new understandings of the world. New discoveries, new theories, new revolutions in thought are dependent on the failure of our current understanding in the face of phenomena.
Our second type of failure is discovery-oriented failure. This kind of failure occurs at the periphery of what we know (or think we know) [40]—it happens when we are developing or testing a theory [32], attempting to falsify a hypothesis, or critically examining the generalizability or transferability of an idea to a new context [13]. Our archetype of discovery-oriented failure is none other than Einstein, for whom ‘Failure is success in progress’. This kind of failure is purposeful failing on a grand scale—pushing a theory, an idea, a hypothesis until it breaks; pushing it until it fails in order to understand it better, to improve it, or to know when to abandon it. Discovery-oriented failure is what challenges conventional wisdom, opens up the possibility of changes in thought, explores new explanations for old problems or phenomena, and tests the limits of what we think we know.
In a way, discovery-oriented failure is analogous to the engineering and manufacturing principle of destructive testing. In destructive testing, an object, material, or final product is subjected to a variety of conditions (e.g. pressure, temperature, vibration) under close monitoring. The object is subjected to increasing stress until it fails. The failure is then analyzed closely—under what conditions did it fail? How did it fail? Could we have predicted that this is where and how it would fail? Can we make it better next time to reduce the likelihood of failure? And occasionally, destructive testing uncovers a fatal flaw, and that object, material, or product is re-engineered, re-designed, and re-tested. Discovery-oriented failure and destructive testing both push beyond anticipated limits in order to understand the circumstances under which an idea or object remains strong, and where it fails. Only one example of this kind of failure was included in the literature reviewed [32]—it may be that this kind of failure is less frequent, less frequently published, or more commonly reported in other publication venues. Driving an idea to failure allows for a sophisticated and nuanced understanding unlikely to be achieved any other way.
And sometimes things just don’t go as planned. We are taught (and hopefully role model) that crafting an innovation or designing a study depends on engaging sound educational and research practices, carefully crafting theoretical and conceptual frameworks [13], and drawing on available work prior to launching a study or intervention. We (generally) have a sense of how the study is likely to progress, what theories and discussions in the literature we intend to contribute to, and perhaps are even purposefully testing a specific hypothesis. But sometimes things don’t go as planned. Isaac Asimov said, ‘The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” (I found it!) but “That’s funny” …’. This serendipitous failure may be one of the more common types of failure in research and scholarship—the occurrence of the unexpected, and the careful investigation that follows. Drawing on publications of failed scholarly projects, serendipitous failure can result in a challenge to our understanding of a given research domain [22] or educational learning objective [33], of our research [34] or educational practice [35]. This opportunistic failure focuses on engaging with and chasing the unexpected finding to try to understand the ‘why’ behind the pattern of findings [42].
While not typically considered a key thinker in the philosophy of science, our archetype for serendipitous failure is American public television painter Bob Ross, who said ‘there are no mistakes, just happy little accidents’. Asimov and Ross both recognize the value of unexpected events—and suggest the insights and progress possible in shifting our perception of such events from traditional notions of failure to opportunities to embrace the unexpected and create, or uncover, new understanding. This kind of failure allows for the circumstantial discovery of counter-intuitive findings, the questioning of typical approaches or frames, and productive lines of inquiry counter to traditional pathways of thinking. To harness the benefits of this kind of failure, we must be open to exploring the mechanisms behind our serendipitous and unexpected findings.
Having had the opportunity to reflect on the value of failure in my own work [21], I have landed on four main conclusions. The first is that failure is an integral component of research and scholarship—whether that failure was purposeful and innovation-oriented, discovery-oriented, or unanticipated and serendipitous. ‘Science grows in the mulch of puzzled bewilderment, scepticism, and experiment’ [1, p. 59], yet we have typically hidden the often puzzling, unanticipated, and challenging work done following a meaningful failure, and so have perpetuated a rather tidied-up version of the scholarly process [1]. We have underrepresented the value of failure, and this has likely contributed to a skewed perspective of the linearity and success-focused nature of good scholarship [1]. While we tend to think of modern science as slowly accumulating evidence in a logical, linear way, Popper has suggested that small, safe hypotheses with a high likelihood of success contribute to building knowledge, but don’t help advance understanding. Rather, great steps forward come from big, risky ideas—ideas with a high likelihood of failure are the ones that help our thinking progress [18].
There are two possible outcomes: if the result confirms the hypothesis, you’ve made a measurement. If the result is contrary to the hypothesis, then you’ve made a discovery.
Acknowledging that failure has an important role to play in science is an important first step; we must also purposefully engage with the opportunities that failure provides (specific recommendations in Tab. 2). An implicit argument woven throughout this piece is that a failed project—an innovation that did not generate the impact it intended, or a research project that resulted in unanticipated findings—is not failed scholarship. However, hard work is needed to understand why a particular innovation failed, or why a particular pattern of results was generated, in order to turn a failed project into a successful piece of scholarship. The taxonomy described here offers a means to describe three different contexts in which a failed scholarly project can lead to a meaningful contribution to the HPE literature. Engaging with the opportunities that failure provides is most often not about correcting a failure, but rather about engaging with the failure in order to learn from it—to understand why an innovation or intervention failed to solve a problem, why a theory fails under certain conditions, or why a cleanly designed study went in an unanticipated direction. This purposeful engagement takes time, often requires resources for follow-up studies, occasionally involves learning and adopting a new theoretical lens, and sometimes involves critically reconsidering your own assumptions embedded throughout a given piece of scholarship. It may be easier to put a failed project aside, or tuck it into the bottom of a file drawer, but choosing not to engage with a failure is a missed opportunity for scholarly growth.
Strategies for better supporting and engaging with failed scholarly projects in health professions education (HPE) research and scholarship
Purposefully engage with failure
– Resist the temptation to file a failure away
– Identify a colleague who could help you talk through the study, and why it didn’t go as planned (i.e. a failure friend)
– Determine what kind of failure it was, and whether there are any lessons to be learned
– Consider presenting your project to a small research group or trusted group of colleagues to help determine whether a different theoretical lens could help explain why the project failed
– Brainstorm as to whether the (now) productive failure would benefit others
Publicly engage with failure
– Resist the temptation to only share scholarly successes
– Consider submitting a failed project that could be of value to the community to one of a growing number of venues
– Consider developing a workshop that includes the lessons learned from designing an innovation, attempting a methodology, or executing a project that failed
– If you are not new to HPE research, publicly discuss failed scholarly projects or consider keeping a public CV of failures [43]
Humanize and normalize failure
– If you are new to HPE research, know that failure in scholarship is typical
– If you are coming from a clinical background, know that failure is the backbone of scholarship [44]
– Find a ‘failure friend’—someone with whom you can discuss failed projects, and be sure to return the favour
– Encourage, support, and facilitate brainstorming around a failed project
– If needed, find some humour in the failed project, and consider contributing to something like Scienceconfessionals.com
Engaging purposefully with a failed study can lead to insights regarding implementation, methodology, approaches to analysis, underlying assumptions, and even a given theory or educational model. While purposefully engaging with failure can be informative to a research team, scholarship is a fundamentally social endeavour. Our community learns socially—we learn from the work of others. In order to capitalize as a community on the opportunities provided by productive failure, we must publicly engage with failure. Sharing failure publicly is not easy. When our archetypes and traditional metrics of scholarly success are dominated by successful interventions, innovations, and apparently effortless discovery, we minimize the recognition of the importance of less-than-traditionally-successful work. In the words of James D. Watson:
Science seldom proceeds in the straightforward logical manner imagined by outsiders. Instead its steps forward (and sometimes backward) are often very human events.
Research and scholarship are human endeavours, which means each objective discovery and each failure is lived by a researcher, scholar, or team of collaborators. Our judgments of the worth of scholarship are also social endeavours—peer review acts as a gatekeeper for conference acceptances, journal publication, and promotion decisions. Public disclosure of failure, and the important insights drawn from understanding a failed project, need to be seen as valuable by our Health Professions Education (HPE) scholarly community. While some avenues now exist [24], more work can be done to highlight the value of failure in HPE research and scholarship.
As a community, we need to become more comfortable discussing productive failures, and the insights that have resulted from them, in order to grow our collective knowledge base. If nothing else, deliberate public discussion of failed projects could lead to more effective use of limited research and educational resources—if we know that someone has tried and failed, we are unlikely to benefit from trying the same thing again. This could be achieved through formal scholarly publication, through dissemination in less formal venues, by participating in discussion forums or symposia, or by offering or participating in workshops. Purposefully engaging with failure and publicly discussing failed work could contribute to appropriate stewardship of scholarly resources and better translation of successful innovations (if we know why and how they have failed) [45]. Further, purposefully engaging with failure could embed a community-wide growth mindset [46]—where we would value engaging with failures as opportunities for growth and development as a scholarly community.
In addition to the possibility of a community-wide growth mindset, work in the broader educational literature has documented the benefits of productive failure [47]. In this work, school-aged children are asked to work in groups on ill-structured problems that are beyond their abilities to solve—for example, eleventh-grade students are asked to solve Newtonian kinematics problems [47]. In a control condition, students work in groups to solve well-structured problems that are within their abilities. All students then individually solve well- and ill-structured problems designed to be either near- or far-transfer problems. Students who work in groups on problems beyond their abilities engage creatively in trying to solve them, but rarely, if ever, succeed. The effects of this ‘productive failure’ appear, however, when the students go on to solve problems individually: those who worked in groups designed to fail perform better on the subsequent tasks—they are better able to solve both well- and ill-defined problems. The aftereffect of struggling with an unsolvable problem is that you are better able to solve problems later. The phenomenon of productive failure has been applied in higher education contexts [48], and continues to be refined [49] and debated; however, the underlying notion remains helpful for a discussion of the value of failure in research and scholarship. While engaging with our failed scholarly projects may not in itself lead to insight or solution, it may nevertheless help us engage with our next problem or project more effectively—engaging with the failure in this project may support our next scholarly project.
Finally, reflecting on failure in science necessitates an acknowledgement of the scientists, scholars, and researchers behind the failures. Acknowledging, engaging with, and learning from failures as a community may help to humanize and normalize the experience of a failed project—we all fail, have all failed, and will continue to fail. If we aren’t failing, we aren’t helping our field move forward [18]. In conclusion, I echo Neil Gaiman and encourage us to “Go, and make interesting mistakes, make glorious and fantastic mistakes”. And then be sure to share them.
I would like to extend my sincere thanks to Drs. Stuart Lubarsky and Lara Varpio for their support and continued help in refining this piece.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.