Tacit and encoded knowledge in the use of standardised outcome measures in multidisciplinary team decision making: A case study of in-patient neurorehabilitation

https://doi.org/10.1016/j.socscimed.2008.03.006

Abstract

This paper explores how multidisciplinary teams (MDTs) balance encoded knowledge, in the form of standardised outcome measurement, with tacit knowledge, in the form of intuitive judgement, clinical experience and expertise, in the process of clinical decision making. The paper is based on findings from a qualitative case study of a multidisciplinary in-patient neurorehabilitation team in one UK NHS trust which routinely collected standardised outcome measures. Data were collected using non-participant observation of 16 MDT meetings and semi-structured interviews with 11 practitioners representing different professional groups. Our analysis suggests that clinicians drew on tacit knowledge to supplement, adjust or dismiss ‘the scores’ in making judgements about a patient's likely progress in rehabilitation, their change (or lack thereof) during therapy and their need for support on discharge. In many cases, the scores accorded with clinicians' tacit knowledge of the patient, and were used to reinforce this opinion, rather than determine it. In other cases, the scores, in particular the Barthel Index, provided only a partial picture of the patient, and in these circumstances clinicians employed tacit knowledge to fill in the gaps. In some cases, the scores and tacit knowledge diverged; clinicians then preferred to rely on their clinical experience and intuition and adjusted or downplayed the accuracy of the scores. We conclude that there are limits to the advantages of quantifying and standardising assessments of health within routine clinical practice and that standardised outcome measures can support, rather than determine, clinical judgement. Tacit knowledge is essential to produce and interpret this form of encoded knowledge and to balance its significance against other information about the patient in making decisions about patient care.

Introduction

Outcome measurement is one element within the long-term trend towards ‘scientific-bureaucratic medicine’ (Harrison, 2002, Harrison, 2004) and the increased regulation of medical professionals in both the UK (Flynn, 2002) and the US (Timmermans & Berg, 2003). In practice, outcome measurement involves many different activities in meeting several different policy agendas. There are many different types of health outcome measures and numerous ways of categorising them, based on the content or scope of the concepts measured (e.g. impairment, disability, handicap, health related quality of life), the purpose of the measurement (e.g. to make predictions, to distinguish between groups or individuals, or to measure change over time) and how the measurement is performed (e.g. generic versus disease specific measures, a profile versus an index) (McDowell & Newell, 1996). Such measures are also designed to be completed either by the patients themselves or by clinicians. However, most health outcome measures have two features in common. Firstly, health outcome measures aim to standardise the way in which dimensions of health states are judged through preset questions or scoring guidelines. Thus, identical criteria are used across different individuals and groups to measure a common dimension of health, to allow comparison between and within individuals or groups. Secondly, they seek to quantify a particular construct or attribute by allocating a set of numbers to descriptions of different health states using an ordinal and sometimes a ratio scale. Thus, they aim to provide an indication as to whether a particular dimension of health has improved or worsened over time, or whether one group or individual has worse health than another.

Outcome measurement has had a dual role in the promulgation of evidence based practice. Health outcome measures are increasingly used within randomised controlled trials as criteria to judge the effectiveness of different treatments and care packages (Fitzpatrick, Davey, Buxton, & Jones, 1998). Such trials then form the basis of guidelines and protocols which seek to standardise clinical practice and provide a rational foundation for clinical decisions (Harrison, 2004). However, there has also been increasing interest in the collection of patient reported or clinician rated outcome measures by clinicians at an individual level to facilitate decision making in the management of individual patients (Long & Fairfield, 1996). The assumption here is that health outcome measures offer a more systematic way of assessing various dimensions of a patient's health than clinical judgement alone (Schor, Lerner, & Malpeis, 1995) and can be used to assess whether the desired outcomes of care have been achieved (Long, 2002).

Systematic reviews have suggested that the use of outcome measures in routine clinical practice has had only a limited effect on the ways in which clinicians manage their patients (Gilbody et al., 2002, Greenhalgh and Meadows, 1999, Marshall et al., 2006). However, this literature has treated clinical decision making as a ‘black box’ by focusing on whether the collection of outcomes information makes a difference to the treatment and health outcomes of patients and ignoring how clinicians make use of the information from outcome measures within decision making (Greenhalgh, Long, & Flynn, 2005). To understand how clinicians might use outcome measures in clinical practice requires us to place outcome measurement in the context of both the nature of clinical decision making and medical knowledge.

Randomised controlled trials exploring the use of outcome measures in clinical practice (Gilbody, Whitty, et al., 2002) and other work exploring the importance clinicians place on outcome measures (Bezjak et al., 2001, Gough & Dalgliesh, 1991) have, implicitly, drawn on cognitive theories of decision making (Dowie & Elstein, 1988). Such theories have conceptualised decision making as a time limited activity that occurs within the private world of the clinician's mind at a single site, most often, the clinician–patient interface. Information from an outcome measure is viewed as one of a number of cues to which the process of decision making is then applied. However, this presents a limited view of clinical work; ethnographic studies have shown that clinical decision making is a collective activity that occurs amongst groups of clinicians in a diffuse, iterative way over a protracted length of time, often in several locations (Atkinson, 1995, Hughes and Griffiths, 1997, Rapley, 2008, White, 2002). Discrete ‘decisions’ themselves are difficult to isolate.

Furthermore, it is difficult in practice to draw a distinction between ‘information’ and ‘decisions’ (Atkinson, 1995). Information, embodied in different forms such as laboratory test results, X-rays and electronic monitoring machines, is itself a judgement and an outcome of decision making. Thus ‘information’ and ‘decisions’ are mutually constitutive of each other (Atkinson, 1995). White and Stancombe (2003) have also shown that the processes of clinical judgement involve assembling ‘facts’ around a case, which are themselves approximations and equivocations containing moral evaluations, in order to construct ‘warrants for action’. Thus, outcome measures are a particular embodiment of clinical judgements, and their meaning may not be self-evident but may require translation and interpretation (Atkinson, 1995). This raises questions about how clinicians produce and use different forms of knowledge in carrying out their work.

Both sociology and organisational studies have moved from defining typologies of knowledge to exploring how knowledge is created and used in everyday practice. Polanyi (1966) distinguished between explicit knowledge or ‘knowing what’ and tacit knowledge or ‘knowing how’. Explicit knowledge can be codified, abstracted and transferred in the form of text books. Tacit knowledge is intuitive, acquired through practical experience and as such, is personal and contextual and cannot be readily made explicit or formalised. Schön (1988) argued that professionals' routine practice is dependent upon ‘knowledge-in-action’. Clinicians cannot fully describe what they know; this knowledge is only revealed through the action itself. Blackler (1995) and Lam (2000) expanded Polanyi's dichotomy to distinguish between individual forms of tacit ‘embodied’ and explicit ‘embrained’ knowledge and collective forms of tacit ‘embedded’ and explicit ‘encoded’ knowledge. Using this framework, outcome measures can be seen as a form of ‘encoded’ knowledge since they represent shared, written rules and procedures to define and standardise how a particular dimension of health should be judged across different populations.

More recent work within medical sociology and organisation studies has explored the social production of knowledge and how clinicians render different forms of knowledge useable within clinical practice (Atkinson, 1995, Blackler, 1995, Casper and Berg, 1995, Newell et al., 2003). Knowledge does not simply reside within the brains of individuals or within organisational processes, but is produced through interactions between social actors and is transformed through its application in local contexts. Thus, new knowledge is created through a dialogue between tacit and explicit knowledge, in which tacit knowledge may be transformed into explicit knowledge and vice versa (Nonaka, 1994). Clinicians integrate these different forms of knowledge in their everyday practice through the development of ‘routines’ (Smith, Goodwin, Mort, & Pope, 2003) or ‘mindlines’ (Gabbay & Le May, 2004) in which tacit knowledge from experience is essential to interpret explicit knowledge and apply it in particular circumstances, thus guarding against ‘cookbook’ implementation of codified knowledge.

Many other writers have also stressed this tension – or disjunction – between professional expertise and intuitive judgement, and formalised or rule-based systems (see May, Rapley, Moreira, Finch, & Heaven, 2006). Nettleton et al. (2008) uncovered doctors' concerns that current regulatory practices within the NHS were limiting the profession's opportunity to develop valuable tacit, hands-on, experiential knowledge. McDonald, Waring, and Harrison (2006) showed how surgeons' routines are acquired through frontline experience and that the many contingencies in this work mean that it cannot be readily subsumed within guidelines and protocols. Rafalovich (2005) reported that clinicians perceived diagnostic manuals to be useful as a general guide but insufficient to account for the multiplicity of factors they took into account in making a diagnosis of attention deficit disorder. Wood, Prior, and Gray (2003) also observed how clinicians used their clinical experience to override or adjust referral guidelines and computerised decision support systems in their decision making.

With regard specifically to the use of standardised outcome measures, surveys and interviews have found that clinicians prefer to rely on their subjective judgement in assessing one particular dimension of health, namely health related quality of life (Gilbody et al., 2002, McKevitt and Wolfe, 2002). Tannenbaum (1994) observed that clinicians used outcome research only when they were at the limits of their personal experience. Cowley, Mitcheson, and Houston (2004) also showed how health visitors resisted using standardised needs assessments in the manner required of them by their managers. These findings suggest that, in practice, clinicians prefer to rely on embodied or embedded knowledge and often use this knowledge to override, adjust or resist attempts to dictate their practice through encoded knowledge.

However, to date, research has not directly explored how multidisciplinary teams of clinicians use standardised outcome measures in the process of clinical judgement and decision making. It is not clear how such teams balance this form of encoded knowledge with intuitive reasoning in the care of individual patients. This paper explores these issues by drawing on qualitative data collected from one multidisciplinary in-patient neurorehabilitation unit in which standardised outcome measures were routinely collected. The data were part of an ESRC funded study to examine how (and to what extent) health and social care professionals use outcome measures in routine clinical practice.

Section snippets

Choice of setting

In-patient neurorehabilitation was selected for investigation for two reasons. Firstly, the use of standardised outcome measures in this setting could be described as ‘common practice’. A survey of rehabilitation units in the UK found that just over three quarters were routinely collecting a standardised measure within clinical practice (Turner-Stokes & Turner-Stokes, 1997) and guidance exists on the most appropriate measure to use in this setting (Turner-Stokes, 2002). The most common

Analysis

Qualitative data analysis was iterative and ongoing throughout the study, using the techniques of grounded theory (Charmaz, 2006, Strauss and Corbin, 1998), and was aided by QSR NVivo. The research team met regularly to discuss and agree on emerging themes.

Based on the field notes and transcripts, we initially developed broad themes around: (1) the process through which the MDT ‘scored’ the outcome measures; (2) MDT members' subsequent comments and discussions about the resulting scores; and (3)

Structure of MDT meetings

Before presenting our analyses it is first important to provide some contextual information about the setting and format of MDT meetings and the nature of the discussions within them. MDT meetings were held in a ‘boardroom’, with a notice board summarising the meeting agenda, the dates and times of family and goal setting meetings for that week and the discharge dates that had been set. Booklets about the scoring guidelines for the outcome measures used by the team and laminated cards detailing

Discussion and conclusions

This paper aimed to understand how multidisciplinary teams use standardised outcome measures in the context of broader literature about the nature of clinical decision making and medical knowledge. It is acknowledged that, given the case study approach and qualitative methods used, any claims to generalisability must be limited. Nevertheless, the observational techniques used were invaluable in providing rich descriptive and narrative data about otherwise opaque procedures. The site we observed

References (58)

  • A. Bezjak et al. (2001). Oncologists' use of quality of life information: results of a survey of Eastern Cooperative Oncology Group physicians. Quality of Life Research.
  • F. Blackler (1995). Knowledge, knowledge work and organisations: an overview and interpretation. Organization Studies.
  • M. Casper et al. (1995). Constructivist perspectives on medical work: medical practices and science and technology studies. Science, Technology and Human Values.
  • K. Charmaz (2006). Constructing grounded theory.
  • S. Cowley et al. (2004). Structuring health needs assessments: the medicalisation of health visiting. Sociology of Health and Illness.
  • J. Dowie et al. Introduction.
  • R. Fitzpatrick et al. (1998). Evaluating patient based outcome measures for use in clinical trials. Health Technology Assessment.
  • R. Flynn (2002). Clinical governance and governmentality. Health, Risk and Society.
  • J. Gabbay et al. (2004). Evidence based guidelines or collectively constructed ‘mindlines’? Ethnographic study of knowledge management in primary care. British Medical Journal.
  • J.M.L. Geddes et al. (2000). The Leeds assessment scale of handicap: its operationalisation, reliability, validity and responsiveness in in-patient rehabilitation. Disability and Rehabilitation.
  • S.M. Gilbody et al. (2002). Psychiatrists in the UK do not use outcomes measures. National survey. British Journal of Psychiatry.
  • S.M. Gilbody et al. (2002). Improving the recognition and management of depression in primary care. Effective Health Care.
  • I. Gough et al. (1991). What value is given to quality of life assessment by health professionals considering response to palliative chemotherapy for advanced cancer. Cancer.
  • C.V. Granger et al. (1977). Functional status measures in a comprehensive stroke care program. Archives of Physical Medicine and Rehabilitation.
  • J. Greenhalgh et al. (1999). The effectiveness of the use of patient-based measures of health in routine practice in improving the process and outcomes of patient care: a literature review. Journal of Evaluation in Clinical Practice.
  • K.M. Hall et al. (1993). Characteristics and comparisons of functional assessment indices: disability rating scales, functional independence measure and functional assessment measure. Journal of Head Trauma and Rehabilitation.
  • B.B. Hamilton et al. A uniform national data system for medical rehabilitation.
  • S. Harrison (2002). New Labour, modernisation and the medical labour process. Journal of Social Policy.
  • S. Harrison. Governing medicine: governance, science and practice.

The study on which this paper was based was funded by ESRC Small Grant RES-000-22-1117. We would also like to thank the participants in the study for allowing us access to their meetings and giving their time to be interviewed.
