Abstract
Objective
To identify any differences in response and completion rates across two versions of a questionnaire, and thereby to determine the trade-off between a potentially higher response rate (from a short questionnaire) and a greater level of information from each respondent (from a long questionnaire).
Methods
This was a randomised trial to determine whether response rates and/or results differ between questionnaires containing different numbers of choices: a short version capable of estimating main effects only and a longer version capable of estimating two-way interactions, provided certain assumptions hold. Best-worst scaling was the form of discrete choice experimentation used. Data were collected by post and analysed in terms of response rates, completion rates and differences in mean utilities.
Results
Fifty-three percent of individuals approached agreed to take part. Among these, the response rate for the long questionnaire was 83.2% and for the short questionnaire 85.1% (difference 1.9%, 95% CI -7.3, 11.2; p = 0.68). The two versions of the questionnaire provided similar inferences.
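The confidence interval reported above is the standard Wald interval for a difference between two independent proportions. A minimal sketch of that calculation is below; the arm sizes and return counts used in the usage example are hypothetical, since the abstract does not report them, so the resulting interval illustrates the method rather than reproducing the paper's figures.

```python
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference between two independent proportions.

    x1/n1 and x2/n2 are returns/sent in each arm; z = 1.96 gives a 95% CI.
    Returns (difference, lower bound, upper bound) as fractions.
    """
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    # Standard error of the difference under independent binomial sampling
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Hypothetical counts chosen only to match the reported percentages roughly:
# short arm 80/94 returned (85.1%), long arm 79/95 returned (83.2%).
diff, lo, hi = prop_diff_ci(80, 94, 79, 95)
print(f"difference {diff:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```

With arms of this size the interval spans zero, consistent with the non-significant p-value reported above.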
Discussion/conclusion
This trial indicates that, in a healthcare setting, for this complexity of questionnaire (i.e. four attributes and the best-worst scaling design), presenting 16 scenarios yielded response rates very similar to those obtained with half that number.
Notes
1. When using a best-worst design, the estimation of subsets of two-way interactions is not possible: all two-way interactions must be estimated, or none.
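The all-or-nothing property in this note can be made concrete by enumerating the interaction terms themselves: with four attributes there are C(4, 2) = 6 two-way interactions, and a best-worst design either supports all six or none. A small sketch, using generic attribute labels A–D as placeholders (the paper's actual attribute names are not given in this excerpt):

```python
from itertools import combinations

# Generic placeholder labels for a four-attribute design
attributes = ["A", "B", "C", "D"]

# The full set of two-way interactions; in a best-worst design this set
# must be estimated in its entirety or omitted entirely.
two_way = list(combinations(attributes, 2))
print(len(two_way))  # C(4, 2) = 6 interaction terms
```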
Acknowledgements
This research would not have been possible without the participation of the dermatology patients surveyed, and our thanks go to them. We would also like to thank the broader team involved in the dermatology work, in particular Sue Horrocks and Alison Noble for contributions to data collection, and David de Berker for constructive comments on the development of the attributes used in the research. Funding to support the work was obtained from the National Health Service Delivery and Organisation Research and Development Programme and the Medical Research Council Health Services Research Collaboration. There are no conflicts of interest in relation to this work.
Cite this article
Coast, J., Flynn, T.N., Salisbury, C. et al. Maximising Responses to Discrete Choice Experiments. Appl Health Econ Health Policy 5, 249–260 (2006). https://doi.org/10.2165/00148365-200605040-00006