Abstract
Providing an opt-out alternative in discrete choice experiments (DCEs) is often important for presenting realistic choice situations in a range of contexts, including health. However, insufficient attention has been given to how best to model choice behaviour relating to this opt-out alternative, particularly in health studies. The objective of this paper is to demonstrate how to account for different opt-out effects in choice models. We aim to contribute to a better understanding of how to model opt-out choices and to show the consequences of addressing these effects incorrectly. We present our code, written in the R statistical language, so that others can explore these issues in their own data. In this practical guideline, we generate synthetic data on medication choice and use Monte Carlo simulation. We consider three different definitions of the opt-out alternative and four candidate models for each definition. We apply a frequentist-based multimodel inference approach and use performance indicators to assess the relative suitability of each candidate model in a range of settings. We show that misspecifying the opt-out effect has repercussions for marginal willingness-to-pay estimation and the forecasting of market shares. Our findings also suggest a number of key recommendations for DCE practitioners interested in exploring these issues. There is no single best way to analyse data collected from DCEs. Researchers should consider several models so that the relative support for different hypotheses about opt-out effects can be explored.
Data availability statement
For this paper, the data have been synthetically generated. Full details on the data-generating process and the code required to replicate our analysis are given in Appendix A of the ESM.
Notes
We note that random utility maximisation is not the only framework for modelling choices. Indeed, for certain decisions, other choice axioms may be better suited, such as regret minimisation. In this paper, we utilise the most widely used framework to analyse opt-out effects.
Note, however, that the derivation of the nested logit model does not necessarily imply that participants make choices in this hierarchical manner.
While this design ensures that all attribute levels can be estimated independently of each other, we recognise that a more efficient experimental design could have been used to minimise the variance of the parameters. However, in a Monte Carlo experiment with specified parameters it may be more appropriate to show that the results stand up in cases where the experimental design is not tailored too closely to the data-generating parameters. Indeed, this would be the case in a real-life empirical application.
In this paper, we use the Bayesian information criterion (BIC). We derive this for each estimated model m in treatment t and replication r as follows: \(\text {IC}_{m_\mathrm{tr}}= \ln \left( N\right) K_{m_\mathrm{tr}} - 2\ln \left( \hat{\mathcal {L}}_{m_\mathrm{tr}}\right)\), where N is the number of choice observations, \(\hat{\mathcal {L}}_{m_\mathrm{tr}}\) is the maximised value of the likelihood function for model m in treatment t and replication r, and \(K_{m_\mathrm{tr}}\) is the number of estimated parameters associated with this model.
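The calculation above can be sketched in R. Here `fit` is assumed to be a fitted model object returned by `maxLik::maxLik()` (the estimation routine used in our code), and `n_obs` the number of choice observations; the function name and arguments are illustrative.

```r
# Minimal sketch of the BIC described above, assuming `fit` is a model
# object returned by maxLik::maxLik() (which supports coef() and logLik())
# and `n_obs` is the number of choice observations.
bic <- function(fit, n_obs) {
  k     <- length(coef(fit))       # number of estimated parameters, K
  log_l <- as.numeric(logLik(fit)) # maximised log-likelihood
  log(n_obs) * k - 2 * log_l       # ln(N) K - 2 ln(L-hat)
}
```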
As noted when describing the independent availability logit model in Sect. 2.2.3, the alternatives taken into account by a (real or simulated) participant cannot be established with certainty. For the sake of comparison, we assume an alternative is deemed to be not in a participant’s consideration set if they never choose it in any of their eight choices.
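Under assumed column names, this working definition can be sketched in R; `choices` is a hypothetical long-format data frame with one row per choice task, a participant identifier `id`, and the chosen alternative in `chosen` (eight tasks per participant).

```r
# Illustrative sketch of the consideration-set rule above. The data frame
# `choices` and its columns `id` and `chosen` are assumed names; "optout"
# labels the opt-out alternative.
chose_optout <- tapply(choices$chosen == "optout", choices$id, any)
# Participants for whom this is FALSE never chose the opt-out in any of
# their eight choices, so under our working definition the opt-out is
# deemed outside their consideration set.
```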
References
Craig BM, Lancsar E, Mühlbacher AC, Brown DS, Ostermann J. Health preference research: an overview. Patient Patient Cent Outcomes Res. 2017;10(4):507–10.
Ryan M, Skåtun C. Modelling non-demanders in choice experiments. Health Econ. 2004;13(4):397–402.
Lancsar E, Louviere JJ. Conducting discrete choice experiments to inform healthcare decision making. Pharmacoeconomics. 2008;26(8):661–77.
Boxall P, Adamowicz WL, Moon A. Complexity in choice experiments: choice of the status quo alternative and implications for welfare measurement. Aust J Agric Resour Econ. 2009;53(4):503–19.
Veldwijk J, Lambooij MS, de Bekker-Grob EW, Smit SA, de Wit DA. The effect of including an opt-out option in discrete choice experiments. PLoS ONE. 2014;9(11):e111805.
Louviere JJ, Lancsar E. Choice experiments in health: the good, the bad, the ugly and toward a brighter future. Health Econ Policy Law. 2009;4(4):527–46.
Bridges JFP, Hauber AB, Marshall D, Lloyd A, Prosser LA, Regier DA, Johnson FR, Mauskopf J. Conjoint analysis applications in health—a checklist: a report of the ISPOR good research practices for conjoint analysis task force. Value Health. 2011;14(4):403–13.
Johnston RJ, Boyle KJ, Adamowicz W, Bennett J, Brouwer R, Cameron TA, Hanemann WM, Hanley N, Ryan M, Scarpa R, Tourangeau R, Vossler C. Contemporary guidance for stated preference studies. J Assoc Environ Resour Econ. 2017;4(2):319–405.
Nieboer A, Koolman X, Stolk E. Preferences for long-term care services: willingness to pay estimates derived from a discrete choice experiment. Soc Sci Med. 2010;70(9):1317–25.
Milte R, Ratcliffe J, Miller M, Whitehead C, Cameron I, Crotty M. What are frail older people prepared to endure to achieve improved mobility following hip fracture? A discrete choice experiment. J Rehabil Med. 2013;45(1):81–6.
Dhar R, Simonson I. The effect of forced choice on choice. J Market Res. 2003;40(2):146–60.
Bahamonde-Birke FJ, Navarro I, de Dios Ortúzar J. If you choose not to decide, you still have made a choice. J Choice Model. 2017;22:13–23.
Schlereth C, Skiera B. Two new features in discrete choice experiments to improve willingness-to-pay estimation that result in SDR and SADR: separated (adaptive) dual response. Manag Sci. 2017;63(3):829–42.
Salkeld G, Ryan M, Short L. The veil of experience: do consumers prefer what they know best? Health Econ. 2000;9(3):267–70.
Ryan M, Ubach C. Testing for an experience endowment effect in health care. Appl Econ Lett. 2003;10(7):407–10.
Meyerhoff J, Liebe U. Status quo effect in choice experiments: empirical evidence on attitudes and choice task complexity. Land Econ. 2009;85(3):515–28.
Oehlmann M, Meyerhoff J, Mariel P, Weller P. Uncovering context-induced status quo effects in choice experiments. J Environ Econ Manag. 2017;81:59–73.
Kahneman D, Knetsch JL, Thaler RH. Anomalies: the endowment effect, loss aversion, and status quo bias. J Econ Perspect. 1991;5(1):193–206.
Krosnick JA, Holbrook AL, Berent MK, Carson RT, Hanemann WM, Kopp RJ, Mitchell RM, Presser S, Ruud PA, Smith VK, Moody WR, Green MC, Conaway M. The impact of “no opinion” response options on data quality: non-attitude reduction or an invitation to satisfice? Public Opinion Q. 2002;66(3):371–403.
Tversky A, Shafir E. Choice under conflict: the dynamics of deferred decision. Psychol Sci. 1992;3(6):358–61.
Baron J, Ritov I. Reference points and omission bias. Organ Behav Hum Decis Process. 1994;59(3):475–98.
Masatlioglu Y, Ok EA. Rational choice with status quo bias. J Econ Theory. 2005;121(1):1–29.
Brazell JD, Diener CG, Karniouchina E, Moore WL, Séverin V, Uldry PF. The no-choice option and dual response choice designs. Market Lett. 2006;17(4):255–68.
Samuelson W, Zeckhauser R. Status quo bias in decision making. J Risk Uncertain. 1988;1(1):7–59.
R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2016. https://www.R-project.org/.
Scarpa R, Ferrini S, Willis K. Performance of error component models for status-quo effects in choice experiments. In: Scarpa R, Alberini A, editors. Applications of simulation methods in environmental and resource economics. Dordrecht: Springer; 2005. p. 243–73.
Train K. Discrete choice methods with simulation. 2nd ed. Cambridge: Cambridge University Press; 2009.
von Haefen RH, Massey DM, Adamowicz WL. Serial nonparticipation in repeated discrete choice models. Am J Agric Econ. 2005;87(4):1061–76.
Manski CF. The structure of random utility models. Theory Decis. 1977;8:229–54.
Frejinger E, Bierlaire M, Ben-Akiva M. Sampling of alternatives for route choice modeling. Transp Res Part B Methodol. 2009;43(10):984–94.
Campbell D, Hensher DA, Scarpa R. Cost thresholds, cut-offs and sensitivities in stated choice analysis: identification and implications. Resour Energy Econ. 2012;34(3):396–411.
Kaplan S, Shiftan Y, Bekhor S. Development and estimation of a semi-compensatory model with a flexible error structure. Transp Res Part B Methodol. 2012;46(2):291–304.
Campbell D, Erdem S. Position bias in best-worst scaling surveys: a case study on trust in institutions. Am J Agric Econ. 2015;97(2):526–45.
Erdem S, Campbell D, Thompson C. Elimination and selection by aspects in health choice experiments: prioritising health service innovations. J Health Econ. 2014;38:10–22.
Henningsen A, Toomet O. maxLik: a package for maximum likelihood estimation in R. Comput Stat. 2011;26(3):443–58.
Buckland ST, Burnham KP, Augustin NH. Model selection: an integral part of inference. Biometrics. 1997;53(2):603–18.
Symonds MRE, Moussalli A. A brief guide to model selection, multimodel inference and model averaging in behavioural ecology using Akaike’s information criterion. Behav Ecol Sociobiol. 2011;65(1):13–21.
Layton DF, Lee ST. Embracing model uncertainty: strategies for response pooling and model averaging. Environ Resour Econ. 2006;34(1):51–85.
Campbell D, Mørkbak MR, Olsen SB. The link between response time and preference, variance and processing heterogeneity in stated choice experiments. J Environ Econ Manag. 2018;88(1):18–34.
Wuertz D et al. fExtremes: Rmetrics—extreme financial market data. 2013. R package version 3010.81. https://CRAN.R-project.org/package=fExtremes.
Aizaki H. Basic functions for supporting an implementation of choice experiments in R. J Stat Softw. 2012;50(2):1–24.
Acknowledgements
We thank the editor Christopher Carswell for his invitation to write this paper. We also thank four anonymous reviewers for their helpful comments and suggestions on previous versions of this paper. Any remaining errors or misinterpretations are solely the authors’ responsibility.
Author information
Contributions
DC and SE contributed equally to all aspects of this paper, including the conceptualisation, data generation, analysis and drafting of the manuscript.
Ethics declarations
Funding
The study was not supported by any external sources or funds.
Ethical approval
The study did not involve the collection of primary data or the use of secondary data sources, thus negating the need for ethical approval.
Informed consent
Participants were artificially generated as part of the Monte Carlo simulation, meaning that informed consent is not applicable.
Conflict of interest
Danny Campbell and Seda Erdem declare no conflicts of interest relevant to the content of this manuscript.
Electronic supplementary material
Cite this article
Campbell, D., Erdem, S. Including Opt-Out Options in Discrete Choice Experiments: Issues to Consider. Patient 12, 1–14 (2019). https://doi.org/10.1007/s40271-018-0324-6