Maximising Responses to Discrete Choice Experiments

A Randomised Trial

  • Original Research Article, published in Applied Health Economics and Health Policy

Abstract

Objective

To identify any differences in response and completion rates across two versions of a questionnaire, in order to determine the trade-off between a potentially higher response rate (from a short questionnaire) and a greater level of information from each respondent (from a long questionnaire).

Methods

This was a randomised trial to determine whether response rates and/or results differ between questionnaires containing different numbers of choices: a short version capable of estimating main effects only and a longer version capable of estimating two-way interactions, provided certain assumptions hold. Best-worst scaling was the form of discrete choice experimentation used. Data were collected by post and analysed in terms of response rates, completion rates and differences in mean utilities.
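
The best-worst responses described above can be scored with a simple best-minus-worst counting analysis; the sketch below is a minimal illustration using hypothetical attribute levels and responses, not the paper's actual data or its formal utility estimation:

```python
from collections import Counter

# Hypothetical data: for each scenario the respondent marks one
# attribute level as best and one as worst (labels are illustrative).
responses = [
    ("short wait", "no continuity of care"),
    ("seen by specialist", "long wait"),
    ("short wait", "long wait"),
    ("seen by specialist", "no continuity of care"),
]

best = Counter(b for b, _ in responses)
worst = Counter(w for _, w in responses)
levels = set(best) | set(worst)

# Best-minus-worst score: a model-free proxy for relative utility.
scores = {lvl: best[lvl] - worst[lvl] for lvl in levels}
for lvl, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{lvl}: {score:+d}")
```

Higher scores indicate levels chosen as best more often than as worst; formal analyses (e.g. conditional logit) place utilities on an interval scale rather than relying on raw counts.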

Results

Fifty-three percent of individuals approached agreed to take part. Among these, the response rate for the long questionnaire was 83.2% and for the short questionnaire 85.1% (difference 1.9%, 95% CI -7.3, 11.2; p = 0.68). The two versions of the questionnaire provided similar inferences.
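
The comparison reported above is a difference of two independent proportions. The sketch below shows the standard Wald 95% confidence interval for such a difference, using illustrative arm sizes chosen only to approximate the reported rates; the paper's own tables are authoritative:

```python
from math import sqrt

# Illustrative counts (not the trial's actual arm sizes).
n_long, r_long = 101, 84     # long questionnaire, ~83.2% response
n_short, r_short = 101, 86   # short questionnaire, ~85.1% response

p_long, p_short = r_long / n_long, r_short / n_short
diff = p_short - p_long

# Wald 95% CI for the difference of two independent proportions.
se = sqrt(p_long * (1 - p_long) / n_long
          + p_short * (1 - p_short) / n_short)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.1%}, 95% CI ({ci_low:.1%}, {ci_high:.1%})")
```

A confidence interval spanning zero, as here and in the trial itself, means the data are consistent with no difference in response rates between the two versions.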

Discussion/conclusion

This trial indicates that, in a healthcare setting, for a questionnaire of this complexity (i.e. four attributes and the best-worst scaling design), presenting 16 scenarios obtained response rates very similar to those obtained with half that number.


Fig. 1
Fig. 2
Table I
Table II
Table III
Table IV

Notes

  1. When using a best-worst design, the estimation of subsets of two-way interactions is not possible: all two-way interactions must be estimated, or none.


Acknowledgements

This research would not have been possible without the participation of the dermatology patients surveyed, and our thanks go to them. We would also like to thank the broader team involved in the dermatology work, in particular, Sue Horrocks and Alison Noble for contributions to data collection, and David de Berker for constructive comment in relation to the development of the attributes used in the research. Funding to support the work was obtained from the National Health Service Delivery and Organisation Research and Development Programme and Medical Research Council Health Services Research Collaboration. There are no conflicts of interest in relation to this work.

Author information

Correspondence to Joanna Coast.

Cite this article

Coast, J., Flynn, T.N., Salisbury, C. et al. Maximising Responses to Discrete Choice Experiments. Appl Health Econ Health Policy 5, 249–260 (2006). https://doi.org/10.2165/00148365-200605040-00006
