
Segmenting Patients and Physicians Using Preferences from Discrete Choice Experiments

  • Practical Application
The Patient - Patient-Centered Outcomes Research

Abstract

People often form groups or segments that have similar interests and needs and seek similar benefits from health providers. Health organizations need to understand whether the same health treatments, prevention programs, services, and products should be applied to everyone in the relevant population, or whether different treatments need to be provided to each of several segments that are relatively homogeneous internally but heterogeneous among segments. Our objective was to explain the purposes, benefits, and methods of segmentation for health organizations, and to illustrate the process of segmenting health populations based on preference coefficients from a discrete choice conjoint experiment (DCE), using an example study of the prevention of cyberbullying among university students. We followed a two-level procedure for investigating segmentation. In Level 1, several methods were used to form segments from DCE preference coefficients and to test their quality, reproducibility, and usability by health decision makers. In Level 2, covariates (demographic, behavioral, lifestyle, and health state variables) were included to further evaluate quality, to support scoring large databases, and to develop typing tools for assigning people who are in the relevant population but not in the sample to the segments. Several candidate segmentation solutions were found in the Level 1 analysis, and the relationship of the preference coefficients to the segments was investigated using predictive methods. These segmentations were tested for quality and reproducibility, and three were found to be very close in quality. While one seemed better than the others in the Level 1 analysis, another was very similar in quality and ultimately proved better at predicting segment membership from covariates in Level 2. The two segments in the final solution were profiled on attributes that would support the development and acceptance of cyberbullying prevention programs among university students. The segments differed sharply: one wanted substantial penalties against cyberbullies and was willing to devote time to a prevention program, while the other felt no need to be involved in prevention and wanted only minor penalties. Segmentation recognizes key differences in why patients and physicians prefer different health programs and treatments. A viable segmentation solution may lead to adapting prevention programs and treatments for each targeted segment and/or to educating and communicating to better inform those in each segment of the program/treatment benefits. Segment members’ revealed preferences, showing behavioral changes, provide the ultimate basis for evaluating the benefits of segmentation to the health organization.
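The two-level workflow summarized above can be sketched in code. The snippet below is only an illustrative sketch, not the toolchain used in the study (the article cites tools such as Sawtooth Software, Latent Gold, and R packages including clues and randomForest); it uses Python with scikit-learn and simulated stand-in data to show the shape of the analysis: Level 1 clusters individual-level DCE preference coefficients into candidate segmentations and checks quality and reproducibility (here with silhouette widths and a bootstrap adjusted Rand index), and Level 2 fits a covariate-based classifier as a typing tool for assigning people outside the sample to segments. All variable names, sizes, and data are hypothetical.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import adjusted_rand_score, roc_auc_score, silhouette_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_respondents, n_attributes, n_covariates = 400, 8, 6

    # Stand-ins for real data: individual-level part-worth utilities (e.g., from HB)
    # and respondent covariates (demographic, behavioral, lifestyle, health state).
    pref_coefs = rng.normal(size=(n_respondents, n_attributes))
    covariates = rng.normal(size=(n_respondents, n_covariates))

    # ---- Level 1: form candidate segmentations from preference coefficients ----
    candidates = {}
    for k in (2, 3, 4):
        labels = KMeans(n_clusters=k, n_init=20, random_state=0).fit_predict(pref_coefs)
        candidates[k] = labels
        print(f"k={k}  silhouette={silhouette_score(pref_coefs, labels):.3f}")

    # Reproducibility check: re-cluster a bootstrap replicate and compare the two
    # partitions on the resampled respondents with the adjusted Rand index.
    boot = rng.choice(n_respondents, size=n_respondents, replace=True)
    labels_full = candidates[2]
    labels_boot = KMeans(n_clusters=2, n_init=20, random_state=1).fit_predict(pref_coefs[boot])
    print("bootstrap ARI:", round(adjusted_rand_score(labels_full[boot], labels_boot), 3))

    # ---- Level 2: typing tool predicting segment membership from covariates ----
    X_train, X_test, y_train, y_test = train_test_split(
        covariates, labels_full, test_size=0.3, random_state=0, stratify=labels_full)
    clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print("hold-out ROC AUC of the covariate-based typing tool:", round(auc, 3))

In practice the clustering step would use the study’s actual methods (for example, latent class analysis, k-means ensembles, or the clues algorithm) on estimated part-worth utilities rather than simulated data, and the resulting segments would be profiled on the DCE attributes before any typing tool is deployed.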


Notes

  1. HB is a two-level procedure that shares information among respondents at the higher level through a multivariate normal distribution while estimating at the lower level with multinomial logit regression. While coefficients at the individual level could also have been obtained from LCA, HB assumes a continuous distribution of heterogeneity, which appears to fit these data better than the discrete heterogeneity assumed by LCA [75]. Individual estimates can be obtained from LCA by weighting the segment-level part-worth utilities (PWUs) by each respondent’s posterior probabilities of segment membership, as illustrated in the short sketch below. However, the LCA coefficients may be less accurate than HB in estimating respondent preferences [47]. HB may also be somewhat more effective in alleviating the independence of irrelevant alternatives (IIA) problem [76]. For sparse data sets, HB seems to capture more of the heterogeneity, while LCA may produce slightly less biased estimates [77].
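As a small illustration of the posterior-weighting computation described in this note, the sketch below uses made-up numbers (not values from the study): the individual-level part-worth utilities are the posterior-probability-weighted mix of the segment-level part-worths.

    import numpy as np

    # Hypothetical example: 3 respondents, 2 latent classes, 4 part-worth utilities.
    segment_pwus = np.array([[1.2, -0.4, 0.8, -1.6],    # class 1 part-worths
                             [0.1, 0.9, -0.7, -0.3]])   # class 2 part-worths
    posteriors = np.array([[0.90, 0.10],                 # P(class | respondent)
                           [0.25, 0.75],
                           [0.50, 0.50]])

    # Individual-level estimate = posterior-weighted average of class part-worths.
    individual_pwus = posteriors @ segment_pwus          # shape: (3 respondents, 4 utilities)
    print(individual_pwus)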

References

  1. Bensing J. Bridging the gap, the separate worlds of evidence-based medicine and patient-centered medicine. Patient Educ Couns. 2000;39:17–25.


  2. Lerer L. Pharmaceutical marketing segmentation in the age of the internet. Int J Med Mark. 2002;2(2):159–66.


  3. Bassi F. Latent class factor models for market segmentation: an application to pharmaceuticals. Stat Methods Appl. 2007;16:270–87.


  4. Vaughn S, Sarianne S. Examining physician segments. Pharm Represent. 2009;39(4):12–5.


  5. American Marketing Association. The American Marketing Association releases new definition for marketing. http://www.marketingpower.com/AboutAMA/Documents/American%20Marketing%20Association%20Releases%20New%20Definition%20for%20Marketing.pdf. Accessed 13 Nov 2013.

  6. Andreasen AR. Redesigning the marketing universe. Keynote address, World Marketing Summit, Dhaka, 2 Mar 2012.

  7. Levitt T. The marketing imagination. New York: The Free Press; 1983.


  8. Smith W. Product differentiation and market segmentation as alternative marketing strategies. J Mark. 1956;21:3–8.


  9. Greengrove K. Needs-based segmentation: principles and practice. Int J Mark Res. 2002;44(4):405–21.


  10. Ferrandiz J. The impact of generic goods in the pharmaceutical industry. Health Econ. 1999;8(7):599–612.


  11. Cunningham C, Deal K, Chen Y. Adaptive choice-based conjoint analysis: a new patient-centered approach to the assessment of health service preferences. Patient. 2010;3(4):257–73.


  12. Cunningham C, Deal K, Neville A, Miller H, Lohfeld L. Modeling the problem-based learning preferences of McMaster University undergraduate medical students using a discrete choice conjoint experiment. Adv Health Sci Educ Theory Pract. 2006;3(2):245–66.


  13. Cunningham C, Vaillancourt T, Rimas H, Deal K, Cunningham L, Short K, Chen Y. Modeling the bullying prevention program preferences of educators: a discrete choice conjoint experiment. J Abnorm Child Psychol. 2009;37(7):929–43. doi:10.1007/s10802-009-9324-2.


  14. Yin Y, Zhang X, Williams R, et al. LOGISMOS—Layered optimal graph image segmentation of multiple objects and surfaces: cartilage segmentation in the knee joint. IEEE Trans Med Imag. 2010;29(12):2023–37.


  15. Schaap M, van Walsum T, Neefjes L, et al. Robust shape regression for supervised vessel segmentation and its application to coronary segmentation in CTA. IEEE Trans Med Imag. 2010;30(11):1974–86.


  16. Van Gerven MA, Jurgelenaite R, Taal BG, et al. Predicting carcinoid heart disease with noisy-threshold classifier. Artif Intell Med. 2007;40(1):45–55.


  17. Giuly RJ, Martone M, Ellisman M. Method: automatic segmentation of mitochondria utilizing patch classification, contour pair classification, and automatically seeded level sets. BMC Bioinform. 2012;13:29.


  18. Dolnicar S, Lazarevski K. Methodological reasons for the theory/practice divide in market segmentation. J Mark Manag. 2009;25(3–4):357–73.


  19. Aldenderfer M, Blashfield R. Cluster analysis. Newbury Park: Sage Publications; 1984.


  20. Everitt BS. Unresolved problems in cluster analysis. Biometrics. 1979;35:169–82.


  21. Aaker D. Developing business strategies. 5th ed. New York: Wiley; 1998. p. 47.


  22. Vermunt J. Latent class modeling with covariates: two improved three-step approaches. Polit Anal. 2010;18:450–69.


  23. Dolnicar S, Leisch F. Evaluation of structure and reproducibility of cluster solutions using the bootstrap. Market Lett. 2010;21:83–101.


  24. Retzer J, Shan M. Cluster ensemble analysis and graphical depiction of cluster partitions. Proceedings of the 2007 Sawtooth Software Conference, Sequim (WA); 2007.

  25. Williams G. Data mining with Rattle and R: the art of excavating data for knowledge discovery. New York: Springer Science+Business Media; 2011.


  26. Orme B. Getting started with conjoint analysis: strategies for product design and pricing research. Madison: Research Publishers LLC; p. 65.

  27. Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.


  28. Arabie P, Hubert L. Cluster analysis in marketing research. In: Bagozzi R, editor. Advanced methods of marketing research. Cambridge: Blackwell; 1994. p. 160–89.

  29. Rousseeuw P. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math. 1987;20:53–65. doi:10.1016/0377-0427(87)90125-7.


  30. Calinski T, Harabasz J. A dendrite method for cluster analysis. Commun Stat. 1974;3:1–17.


  31. Schwarz G. Estimating the dimension of a model. Ann Stat. 1978;6:461–4.


  32. Akaike H. Information theory as an extension of the maximum likelihood principle. In: Petrov BN, Csaki F, editors. Second international symposium on information theory. Budapest: Akademiai Kiado; 1973. p. 267–8.

  33. Bozdogan H. Model selection and Akaike’s information criterion (AIC): the general theory and its analytical extensions. Psychometrika. 1987;52:345–70.


  34. Sugiura N. Further analysis of the data by Akaike’s information criterion and the finite corrections. Commun Stat Theory Methods. 1978;A7:13–26.


  35. Banfield JD, Raftery AE. Model-based Gaussian and non-Gaussian clustering. Biometrics. 1993;49:803–21.


  36. Rand WM. Objective criteria for the evaluation of clustering methods. J Am Stat Assoc. 1971;66:846–50.


  37. Hubert L, Arabie P. Comparing partitions. J Classif. 1985;2:193–218.


  38. Morey LC, Agresti A. An adjustment to the Rand statistic for chance agreement. Classif Soc Bull. 1981;5:9–10.


  39. Fowlkes EB, Mallows CL. A method for comparing two hierarchical clusterings. J Am Stat Assoc. 1983;78(383):553–69.


  40. Hultsch L. Untersuchung zur Besiedlung einer Sprengfläche im Pockautal durch die Tiergruppen Heteroptera (Wanzen) und Auchenorrhyncha (Zikaden) [Study of the colonization of a blasted clearing in the Pockau valley by the groups Heteroptera (true bugs) and Auchenorrhyncha (cicadas and leafhoppers)].

  41. Krieger AM, Green PE. A generalized Rand-index method for consensus clustering of separate partitions of the same data base. J Classif. 1999;16:63–89.


  42. Zweig M, Campbell G. Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine. Clin Chem. 1993;39(4):561–77.


  43. Vuk M, Curk T. ROC curve, lift chart and calibration plot. Metodoloski zvezki. 2006;3(1):89–108.


  44. Goodman LA. The analysis of systems of qualitative variables when some of the variables are unobservable. Part I: a modified latent structure approach. Am J Sociol. 1974;79:1179–259.


  45. Magidson J, Vermunt JK. Latent class factor and cluster models, bi-plots and related graphical displays. Sociol Methodol. 2001;31:223–64.


  46. The CBC Latent Class Technical Paper. Version 3. Sawtooth Software Technical Paper Series, 2004.

  47. Latent Class v4.5, Sawtooth Software Inc., 26 Sep 2012.

  48. Vermunt JK, Magidson J. Latent Gold Choice 4.0 user’s guide. Statistical Innovations; 2005.

  49. Allenby G, Arora N, Ginter J. On the heterogeneity of demand. J Market Res. 1998;35:384–9.


  50. Rossi P, Allenby G, McCullough R. Bayesian statistics and marketing. New York: Wiley; 2005.


  51. Revelt D, Train K. Mixed logit with repeated choices: households’ choices of appliance efficiency level. Rev Econ Stat. 1998;80(4):647–57.


  52. Johnson FR, Mansfield C. Survey design and analytical strategies for better healthcare stated-choice studies. The Patient. 2008;1(4):299–307.


  53. MacQueen JB. Some methods for classification and analysis of multivariate observations. In: Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability. Vol. 1. University of California Press; 1967: p. 281–297.

  54. Witek E. Comparison of model-based clustering with heuristic clustering methods. Folia Oeconomica. 2011;255:191–7.


  55. Wang X, Qiu W, Zamar RH. CLUES: a non-parametric clustering method based on shrinking. Comput Stat Data Anal. 2007;52(1):286–98.


  56. Chang F, Qiu W, Zamar RH, Lazarus R, Wang X. Clues: An R package for nonparametric clustering based on local shrinking. J Stat Softw. 2010;33:4.


  57. Kaufman L, Rousseeuw PJ. Finding groups in data. New York: Wiley; 2005.


  58. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. 2013. http://www.R-project.org/.

  59. Magidson J. SI-CHAID 4.0 user’s guide. Statistical Innovations; 2005.

  60. Retzer J, Shan M. Cluster ensemble analysis and graphical depiction of cluster partitions. In: Proceedings of the 2007 Sawtooth Software Conference, Sequim (WA); 2007.

  61. Strehl A, Ghosh J. Cluster ensembles: a knowledge reuse framework for combining multiple partitions. J Mach Learn Res. 2002;3:583–617.


  62. Orme B, Johnson R. Improving K-means cluster analysis: ensemble analysis instead of highest reproducibility replicates. Sawtooth Software Research Paper Series; 2008.

  63. Arseneault L, Walsh E, Trzesniewski K, Newcombe R, Caspi A, Moffitt TE. Bullying victimization uniquely contributes to adjustment problems in young children: a nationally representative cohort study. Pediatrics. 2006;118(1):130–8. doi:10.1542/peds.2005-2388.


  64. Arseneault L, Bowes L, Shakoor S. Bullying victimization in youths and mental health problems: ‘Much ado about nothing’? Psychol Med. 2010;40:717–29.


  65. Kim YS, Leventhal BL, Koh YJ, Hubbard A, Boyce WT. School bullying and youth violence: causes or consequences of psychopathologic behavior? Arch Gen Psychiatry. 2006;63(9):1035–41. doi:10.1001/archpsyc.63.9.1035.


  66. Sawtooth Software. http://www.sawtoothsoftware.com/version/ssiweb/ssiweb_history.html. Accessed 13 Nov 2013.

  67. Bridges JFP, Hauber AB, Marshall D, et al. Conjoint analysis applications in health—a checklist: a report of the ISPOR good research practices for conjoint analysis task force. Value Health. 2011;14:403–13.


  68. Chen C, Liaw A, Breiman L. Using random forest to learn imbalanced data. UC Berkeley: Department of Statistics; 2004.


  69. Svetnik V, Liaw A, Tong C, et al. Random forest: a classification and regression tool for compound classification and QSAR modeling. J Chem Inf Comput Sci. 2003;43:1947–58.


  70. Liaw A, Wiener M. Classification and regression by randomForest. R News. 2002;2(3):18–22.


  71. Haley R. Benefit segmentation: a decision-oriented research tool. J Mark. 1968;32(3):30–5.


  72. Zapert K, Spears D. Reengineering a US-based diabetes patient segmentation for Japan: lost in translation. Presented at 2011 Annual National Conference of the Pharmaceutical Marketing Research group; 2011.

  73. Bogle A, Simpson SL, Mills TM. Segmentations that work. First Annual Meeting of the Pharmaceutical Marketing Research Group; 2007.

  74. Ross C, Steward CA, Sinacore JM. The importance of patient preferences in the measurement of health care satisfaction. Med Care. 1993;31(12):1138–49.


  75. Magidson J, Eagle T, Vermunt JK. New developments in latent class choice models. In: Sawtooth Software Conference Proceedings; 2003: p. 89–112.

  76. The CBC Latent Class Technical Paper. Version 3. Sawtooth Software Technical Paper Series; 2004.

  77. McCullough PR. Comparing hierarchical Bayes and latent class choice: practical issues for sparse data sets. In: 2009 Sawtooth Software Conference Proceedings, Delray Beach (FL); Mar 2009.


Acknowledgements

The author greatly appreciates the generous access provided by Charles E. Cunningham, McMaster University, to the cyberbullying research data. The cyberbullying research (Cunningham et al., personal communication, 2013) was supported by a Community-University Research Alliance grant from the Social Sciences and Humanities Research Council of Canada, the Canadian Institutes of Health Research, the Jack Laidlaw Chair in Patient-Centred Health Care held by Dr. Charles E. Cunningham, and a Canada Research Chair from the Canadian Institutes of Health Research held by Dr. Tracy Vaillancourt. There were no conflicts of interest.

Author information


Corresponding author

Correspondence to Ken Deal.



Cite this article

Deal, K. Segmenting Patients and Physicians Using Preferences from Discrete Choice Experiments. Patient 7, 5–21 (2014). https://doi.org/10.1007/s40271-013-0037-9

