Technical points on DCE with

This note is intended for readers familiar with the specifics of discrete choice experimentation and answers key methodological questions in detail. Please contact the team if you have any further questions about the methodology.

DCE or conjoint? The platform uses discrete choice experimentation, which is sometimes referred to as choice-based conjoint. DCE is a more robust technique, consistent with random utility theory, and has been shown to simulate customers’ actual behaviour in the marketplace (Louviere, Flynn & Carson, 2010, cover this topic in detail). However, the output on the relative importance of attributes and the value of each level is aligned with the output of conjoint analysis (partworth analysis).

Experimental design. The platform uses the attributes and levels you specify to create a (fractional factorial) choice design, optimising level balance, overlap, and other characteristics. Our algorithm does not explicitly maximise D-efficiency, but it tends to produce D-efficient designs, typically of resolution IV or V (as such, the design supports measurement of two-way interactions, even though these are not used in our modelling at this stage). In most cases, the number of choice sets is too large for a single respondent, so the experiment is split into multiple blocks (often between five and ten). We do not support individualised designs (i.e., a separate block for every respondent). Each choice set consists of several product construct alternatives and, by default, one “do not buy” alternative. To review the experimental design for your experiment, go to “Advanced options” on your design set-up page, then click “Export experimental design”.
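Once exported, a design can be sanity-checked for properties such as level balance and overlap. A minimal Python sketch, assuming a hypothetical row layout (block, choice set, alternative, one level index per attribute) that may differ from the actual export format:

```python
from collections import Counter

# Invented rows standing in for an exported design: one row per alternative,
# with the level index shown for each attribute.
design = [
    # (block, choice_set, alternative, attr1_level, attr2_level)
    (1, 1, 1, 1, 2),
    (1, 1, 2, 2, 1),
    (1, 2, 1, 2, 2),
    (1, 2, 2, 1, 1),
]

# Level balance: each level of an attribute should appear roughly equally often.
attr1_counts = Counter(row[3] for row in design)
attr2_counts = Counter(row[4] for row in design)

# Overlap: within a choice set, an attribute ideally does not repeat a level.
overlap = 0
for (b, s) in {(r[0], r[1]) for r in design}:
    levels = [r[3] for r in design if (r[0], r[1]) == (b, s)]
    if len(set(levels)) < len(levels):
        overlap += 1
```

In this toy design, both levels of each attribute appear equally often and no choice set repeats a level of the first attribute.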

Minimum sample size. The platform automatically recommends a minimum sample size; in most cases it is between 50 and 300 responses. The calculation uses a proprietary formula that takes into account the number of attributes, the number of levels, and other experimental settings.

Relative importance of attributes and value of levels. The platform estimates a hierarchical Bayesian (HB) multinomial logit model of choice using the responses deemed valid. The value (partworth) of each level reflects how strongly that level sways the decision to buy the construct; attributes whose levels vary widely in this sway are deemed more important. Specifically, we calculate attribute importance and level value scores (partworth utilities) by taking the coefficients from the estimated model and linearly transforming them so that:

  • within each attribute, the sum of the absolute values of the positive partworths equals the sum of the absolute values of the negative ones (i.e., partworths are zero-centred), and
  • across attributes, the spreads (maximum minus minimum) of the partworths sum to 100%.
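As an illustration, the linear transformation above can be sketched as follows. The raw coefficients are invented, and the platform’s actual estimation output may differ:

```python
# Hypothetical raw HB coefficients, keyed by attribute and level.
raw = {
    "brand": {"A": 0.8, "B": -0.2, "C": -0.6},
    "price": {"low": 1.2, "high": -1.2},
}

# 1. Zero-centre each attribute so positive and negative partworths balance.
centred = {}
for attr, levels in raw.items():
    mean = sum(levels.values()) / len(levels)
    centred[attr] = {lvl: v - mean for lvl, v in levels.items()}

# 2. Scale so the attribute spreads (max minus min) sum to 100%.
spreads = {attr: max(v.values()) - min(v.values()) for attr, v in centred.items()}
scale = 100.0 / sum(spreads.values())
partworths = {attr: {lvl: v * scale for lvl, v in lv.items()}
              for attr, lv in centred.items()}
importance = {attr: s * scale for attr, s in spreads.items()}
```

After the transformation, each attribute’s scaled spread equals its importance score, and the importance scores sum to 100%.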

Marginal willingness to pay. For experiments in which one of the attributes is price, the platform estimates a separate model with price as a numerical variable. We also check whether the measure can appropriately be calculated, taking into account both the experimental set-up and the received responses (for example, limiting MWTP calculation where the price response is non-linear). We use the concept of “Market Value of Attribute Improvement” (MVAI) but, unlike the original paper, we do not use the closed-form formula; instead, we find the values numerically.
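One way to find such a value numerically is to search for the price premium that leaves the product’s share of preference unchanged after a level improvement. The sketch below uses bisection; all figures (utilities, price coefficient, the single competitor) are invented for illustration and do not represent the platform’s actual MVAI procedure:

```python
import math

# Assumed inputs: a linear price coefficient and construct utilities before
# and after the attribute improvement, plus one competing alternative.
beta_price = -0.04               # utility per currency unit (assumption)
u_base, u_improved = 1.0, 1.4    # utility before / after the improvement
u_competitor = 1.1

def share(u_own):
    # Logit share of preference against the fixed competitor.
    return math.exp(u_own) / (math.exp(u_own) + math.exp(u_competitor))

target = share(u_base)

# Bisection on the price premium: raise the premium while the improved
# product still beats its baseline share.
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    if share(u_improved + beta_price * mid) > target:
        lo = mid
    else:
        hi = mid
mwtp = (lo + hi) / 2
```

With a linear price term the search converges to the analytic answer (the utility gain divided by the absolute price coefficient), but the same numeric search also works when price enters non-linearly.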

Share of preference simulation. Share of preference simulation is performed using individual coefficients from the estimated HB multinomial logit model. Two models for calculating market shares are available:

  • “Share of preference” model, which is appropriate for low-risk or frequently purchased products: FMCG, software, etc. This model is appropriate in the vast majority of applications.
  • “First choice” model, which is suitable for high-risk or seldom purchased products: education, life insurance, pension plans, etc.
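The two rules can be illustrated with a small sketch. The individual-level utilities below are made up; a real simulation would use the individual coefficients drawn from the HB model:

```python
import math

# Invented individual utilities: one row per respondent, one column per
# product alternative in the simulated market.
utilities = [
    [1.2, 0.8, 0.3],
    [0.2, 1.5, 0.9],
    [0.7, 0.6, 1.1],
]

def share_of_preference(rows):
    # Average each respondent's logit choice probabilities.
    shares = [0.0] * len(rows[0])
    for row in rows:
        denom = sum(math.exp(u) for u in row)
        for j, u in enumerate(row):
            shares[j] += math.exp(u) / denom / len(rows)
    return shares

def first_choice(rows):
    # Each respondent "buys" only their highest-utility alternative.
    shares = [0.0] * len(rows[0])
    for row in rows:
        shares[row.index(max(row))] += 1 / len(rows)
    return shares
```

The share-of-preference rule spreads each respondent’s probability across all alternatives, while the first-choice rule concentrates it on one; this is why the former suits frequent low-risk purchases and the latter one-off high-stakes decisions.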

Response quality checks

Responses collected on the platform are checked for signs of fraudulent or inattentive behaviour. A response is automatically marked as low quality if any of the following signs are present:

  • A very short total time is spent on the survey (e.g., 20 seconds);
  • A very short time is spent on a conjoint choice set (e.g., 1 second);
  • A mouse is present on the device, but no mouse movement is detected;
  • Scrolling is required to read the full question, but no scrolling occurs;
  • Certain unlikely answers are provided in open-ended questions.
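A hypothetical combination of these checks might look like the sketch below. The thresholds and field names are assumptions for illustration, not the platform’s actual values:

```python
# Flag a response as low quality if any behavioural check fires.
def is_low_quality(resp):
    checks = [
        resp["total_seconds"] < 20,                     # too fast overall
        min(resp["choice_set_seconds"]) < 1,            # too fast on a choice set
        resp["has_mouse"] and not resp["mouse_moved"],  # mouse never moved
        resp["needs_scroll"] and not resp["scrolled"],  # never scrolled
    ]
    return any(checks)

# Invented example responses.
suspicious = {"total_seconds": 15, "choice_set_seconds": [2, 3],
              "has_mouse": True, "mouse_moved": True,
              "needs_scroll": False, "scrolled": False}
attentive = {"total_seconds": 300, "choice_set_seconds": [8, 12],
             "has_mouse": True, "mouse_moved": True,
             "needs_scroll": True, "scrolled": True}
```

Here the first respondent is flagged for finishing the whole survey in 15 seconds, while the second passes all checks.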

When you set up your study, you can specify a separate redirect for these respondents under “Advanced settings”. If you are using our panel respondents, you do not pay for low-quality answers.

Raw response data

The raw data collected in all experiments are available in the Excel sheets, which makes it possible to carry out additional analysis outside of the platform. In particular, the “Raw data” sheet contains the following key tables:

  • Raw responses is the list of all responses. Each row in this table corresponds to a single choice made by a particular respondent on a particular question.
    • participant_id is the ID of the respondent
    • block is the number of the block (i.e., the version of the questionnaire into which the particular respondent is assigned)
    • set_seq_order is the number of the question (i.e., the choice set within a particular block)
    • alternative_seq_order is the number of the chosen alternative
  • List of levels arranged in questions shows which options are shown in each question. Each row corresponds to a single level shown for a particular attribute for a particular alternative of a particular question within a particular block.
    • block is the number of the block (i.e., the version of the questionnaire — a group of choice sets — which is shown to a particular respondent)
    • set_seq_order is the number of the question (i.e., the choice set within a particular block)
    • alternative_seq_order is the number of the alternative (“option” or “card”) displayed to the respondent
    • AseqNum is the number of the attribute (corresponds to the AseqNum field in the List of level names)
    • LseqNum is the number of the level (corresponds to the LseqNum field in the List of level names)
  • List of level names details the levels used in the study. Each row corresponds to a single level of a particular attribute.
    • AseqNum is the sequential number of the attribute, corresponding to the sequence of attributes in the experimental design.
    • LseqNum is the sequential number of the level, corresponding to the sequence of levels in the experimental design.

The values of set_seq_order and alternative_seq_order do not necessarily reflect the order in which a respondent actually saw the questions and alternatives: the order of questions is randomised for each respondent, and the order of alternatives is randomised for each question.
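To illustrate how the three tables fit together, here is a minimal Python sketch joining invented sample rows to recover the level names of each chosen alternative; a real analysis would read these rows from the exported Excel sheet:

```python
# Invented sample rows mirroring the three "Raw data" tables.
raw_responses = [
    # (participant_id, block, set_seq_order, alternative_seq_order)
    ("p1", 1, 1, 2),
]
levels_in_questions = [
    # (block, set_seq_order, alternative_seq_order, AseqNum, LseqNum)
    (1, 1, 1, 1, 1),
    (1, 1, 2, 1, 2),
]
level_names = {
    # (AseqNum, LseqNum) -> level name
    (1, 1): "Brand A",
    (1, 2): "Brand B",
}

# Join: for each choice, look up the levels of the chosen alternative.
chosen = []
for pid, block, q, alt in raw_responses:
    for b, s, a, attr, lvl in levels_in_questions:
        if (b, s, a) == (block, q, alt):
            chosen.append((pid, q, level_names[(attr, lvl)]))
```

The keys of the join are (block, set_seq_order, alternative_seq_order) between the first two tables, and (AseqNum, LseqNum) into the List of level names.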