Two for the price of one: If moving beyond traditional single-best discrete choice experiments, should we use best-worst, best-best or ranking for preference elicitation?

Samare P.I. Huls, Emily Lancsar, Bas Donkers, Jemimah Ride

    Research output: Contribution to journal › Article › peer-review

    7 Citations (Scopus)

    Abstract

    This study undertook a head-to-head comparison of best-worst, best-best and ranking discrete choice experiments (DCEs) to help decide which method to use if moving beyond traditional single-best DCEs. Respondents were randomized to one of three preference elicitation methods. Rank-ordered (exploded) mixed logit models and respondent-reported data were used to compare the methods and to compare first and second choices. First choices differed from second choices, and preferences differed between elicitation methods, even beyond scale and scale dynamics. First choices in best-worst had good choice consistency, scale dynamics and statistical efficiency, but this method's second choices performed worst. Ranking performed best on respondent-reported difficulty and preference; best-best's second choices performed best on statistical efficiency. All three preference elicitation methods improve the efficiency of data collection relative to using first choices only. However, differences in preferences between first and second choices challenge the case for moving beyond single-best DCEs. If nevertheless doing so, best-best and ranking are preferred over best-worst DCEs.
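
    For readers unfamiliar with the rank-ordered (exploded) logit models mentioned above, the sketch below illustrates the general idea of "exploding" a complete ranking into a sequence of first-choice observations, each of which can then be modelled as a standard discrete choice. This is a generic illustration only, not the authors' code; the alternative labels and the function name are hypothetical.

    ```python
    from typing import List, Tuple

    def explode_ranking(ranking: List[str]) -> List[Tuple[List[str], str]]:
        """Explode a complete ranking (best to worst) into sequential first-choice tasks."""
        tasks: List[Tuple[List[str], str]] = []
        remaining = list(ranking)
        while len(remaining) > 1:
            chosen = remaining[0]                   # best of the alternatives still unranked
            tasks.append((list(remaining), chosen)) # record the choice set and the choice
            remaining = remaining[1:]               # drop it and choose again among the rest
        return tasks

    if __name__ == "__main__":
        # Hypothetical ranking of three alternatives: A preferred to C preferred to B.
        for choice_set, chosen in explode_ranking(["A", "C", "B"]):
            print(f"choose {chosen} from {choice_set}")
        # Prints:
        #   choose A from ['A', 'C', 'B']
        #   choose C from ['C', 'B']
    ```

    Each pseudo-task produced this way would enter the likelihood as an ordinary (mixed) logit choice, which is what allows second (and later) choices to add statistical information beyond first choices alone.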
    Original language: English
    Pages (from-to): 2630-2647
    Number of pages: 18
    Journal: Health Economics (United Kingdom)
    Volume: 31
    Issue number: 12
    Publication status: Published - 2022

