Technical points on DCE

This note answers key methodological questions in detail for readers familiar with the specifics of discrete choice experimentation. Please contact the team if you have any further questions about the methodology.

DCE or conjoint? We use discrete choice experimentation (DCE), which is sometimes referred to as choice-based conjoint. DCE is a more robust technique, consistent with random utility theory, and has been shown to simulate customers’ actual behaviour in the marketplace (Louviere, Flynn & Carson, 2010 cover this topic in detail). However, the output on relative importance of attributes and value by level is aligned with the output from conjoint analysis (partworth analysis).

Experimental design. We use the attributes and levels you specify to create a (fractional factorial) choice design, optimising balance, overlap, and other characteristics. Our algorithm does not specifically attempt to maximise D-efficiency, but it tends to produce D-efficient designs, typically of resolution IV or V (so it supports measurement of two-way interactions, even though these are not used in our modelling at this stage). In most cases, the number of choice sets is too large for one respondent, so the experiment is split into multiple blocks (often between five and ten). We do not support individualised designs (i.e., every respondent having their own block). Each choice set consists of several product construct alternatives and, by default, one “do not buy” alternative. To review the experimental design for your experiment, go to “Advanced options” on your design set-up page, then click “Export experimental design”.
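For intuition, the sketch below builds a naive blocked choice design from a full factorial and tallies level frequencies as a balance check. It is not our actual algorithm (which also optimises overlap and other properties), and the attributes and levels are hypothetical.

```python
import itertools
import random
from collections import Counter

# Hypothetical attributes and levels -- purely for illustration.
attributes = {
    "brand": ["A", "B", "C"],
    "size": ["small", "large"],
    "price": ["$5", "$7", "$9"],
}

# Full factorial: every combination of levels (3 x 2 x 3 = 18 profiles).
profiles = [dict(zip(attributes, combo))
            for combo in itertools.product(*attributes.values())]

random.seed(42)
random.shuffle(profiles)

# Naive fraction: group the shuffled profiles into choice sets of three
# alternatives, then split the sets into blocks shown to different
# respondents.  (A real algorithm optimises balance, overlap, etc.)
choice_sets = [profiles[i:i + 3] for i in range(0, len(profiles), 3)]
n_blocks = 2
blocks = [choice_sets[b::n_blocks] for b in range(n_blocks)]

# Level-balance check: every level should appear a similar number of times.
counts = Counter((attr, prof[attr])
                 for cs in choice_sets for prof in cs for attr in prof)
for (attr, level), n in sorted(counts.items()):
    print(f"{attr}={level}: {n}")
```

Because the candidate set is a full factorial, each level here appears equally often; a fractional design aims to preserve that balance with far fewer profiles.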

Minimum sample size. We automatically recommend a minimum sample size; in most cases, it is between 50 and 300 responses. Our calculation uses a proprietary formula that takes into account the number of attributes, levels, and other experimental settings.
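Our exact formula is proprietary, but a widely cited rule of thumb for choice-based conjoint (Johnson and Orme's n ≥ 500c / (t·a)) gives a feel for the quantities involved. To be clear, this is not the formula we use; it is shown only for orientation.

```python
import math

def min_sample_rule_of_thumb(max_levels: int, tasks: int, alternatives: int) -> int:
    """Johnson/Orme rule of thumb for CBC sample size:
    n >= 500 * c / (t * a), where c is the largest number of levels in
    any attribute, t the number of choice tasks per respondent, and
    a the number of alternatives per task.  Illustrative only."""
    return math.ceil(500 * max_levels / (tasks * alternatives))

# E.g., a 4-level attribute, 10 tasks, 3 alternatives per task:
print(min_sample_rule_of_thumb(max_levels=4, tasks=10, alternatives=3))  # 67
```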

Relative importance of attributes and value of levels. We estimate a hierarchical Bayesian (HB) multinomial logit model of choice using responses deemed valid. The value (partworth) of each level reflects how strongly that level sways the decision to buy the construct; attributes whose levels differ widely in value are deemed more important. Specifically, we calculate attribute importance and level value scores (partworth utilities) by taking coefficients from the estimated model and linearly transforming them so that:

  • within each attribute, the sum of the absolute values of the positive partworths equals the sum of the absolute values of the negative ones, and
  • across all attributes, the spreads (maximum minus minimum) of the partworths sum to 100%.
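A minimal sketch of a transformation with these two properties: zero-centre the partworths within each attribute, then rescale so the attribute spreads sum to 100. The raw coefficients below are hypothetical; in practice they come from the estimated HB model.

```python
def normalise_partworths(raw):
    # Step 1: zero-centre within each attribute, so the positive
    # partworths exactly balance the negative ones.
    centred = {}
    for attr, levels in raw.items():
        mean = sum(levels.values()) / len(levels)
        centred[attr] = {lvl: v - mean for lvl, v in levels.items()}
    # Step 2: rescale so the attribute spreads (max - min) sum to 100.
    total_spread = sum(max(l.values()) - min(l.values())
                       for l in centred.values())
    scale = 100.0 / total_spread
    values = {a: {lvl: v * scale for lvl, v in l.items()}
              for a, l in centred.items()}
    # An attribute's importance is its share of the total spread.
    importance = {a: max(l.values()) - min(l.values())
                  for a, l in values.items()}
    return values, importance

# Hypothetical coefficients for two attributes:
raw = {"price": {"$5": 0.9, "$7": 0.1, "$9": -1.0},
       "brand": {"A": 0.4, "B": -0.4}}
values, importance = normalise_partworths(raw)
print(importance)  # importances sum to 100; larger spread = more important
```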

Marginal willingness to pay. For experiments where one of the attributes is price, we estimate a separate model with price as a numerical variable. We also check whether the measure is appropriate to calculate, taking into account both the experimental set-up and the received responses (for example, limiting MWTP calculation where the price effect is non-linear). Marginal willingness to pay should be treated as indicative only.
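As an illustration of the linear-price case: MWTP for a level change is the utility gain divided by the magnitude of the (negative) price coefficient. The numbers below are hypothetical.

```python
def mwtp(utility_gain: float, price_coefficient: float) -> float:
    """MWTP when price enters the model as a linear numeric variable:
    utility gain divided by the magnitude of the (negative) price
    coefficient.  Illustrative sketch, not the full set of checks."""
    if price_coefficient >= 0:
        raise ValueError("expected a negative price coefficient")
    return utility_gain / -price_coefficient

# Hypothetical numbers: an upgrade adds 0.5 utility; each currency unit
# of price reduces utility by 0.25.
print(mwtp(0.5, -0.25))  # 2.0 -> respondents would pay ~2 units for the upgrade
```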

Share of preference simulation. Shares of preference are simulated using individual coefficients from the estimated HB multinomial logit model. Two models for calculating market shares are available:

  • “Share of preference” model, which is appropriate for low-risk or frequently purchased products: FMCG, software, etc. This model is suitable in the vast majority of cases.
  • “First choice” model, which is suitable for high-risk or seldom purchased products: education, life insurance, pension plans, etc.
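The two rules can be sketched as follows for a single respondent's utilities (hypothetical numbers; in practice shares are computed per respondent from the individual HB coefficients and then averaged):

```python
import math

def share_of_preference(utilities):
    # Logit rule: share_i = exp(U_i) / sum_j exp(U_j).
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def first_choice(utilities):
    # All of a respondent's preference goes to the highest-utility option.
    best = max(range(len(utilities)), key=utilities.__getitem__)
    return [1.0 if i == best else 0.0 for i in range(len(utilities))]

# Hypothetical utilities for three alternatives from one respondent:
u = [1.2, 0.4, -0.5]
print(share_of_preference(u))  # smooth shares summing to 1
print(first_choice(u))         # [1.0, 0.0, 0.0]
```

The first-choice rule is more extreme, which is why it suits high-stakes, rarely purchased products where respondents tend to pick only their single best option.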

Response quality checks

Responses are checked for signs of fraudulent or inattentive behaviour and are automatically marked as low quality if any of the following signs are present:

  • Very short total time spent on the survey (e.g., 20 seconds);
  • Very short time spent on a conjoint choice set (e.g., 1 second);
  • No mouse movement detected on a device that has a mouse;
  • No scrolling detected when scrolling is required to read the full question;
  • Certain unlikely answers provided in open-ended questions.
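A simplified sketch of how such flags could combine into a single quality decision. Field names and thresholds here are hypothetical (the 20-second and 1-second figures echo the examples above); the production checks are more nuanced.

```python
def is_low_quality(response: dict) -> bool:
    checks = [
        response["total_seconds"] < 20,                # whole survey answered too fast
        any(t < 1 for t in response["task_seconds"]),  # a choice set answered in under 1 s
        response["has_mouse"] and not response["mouse_moved"],
        response["needs_scroll"] and not response["scrolled"],
        response.get("open_ended_flagged", False),     # unlikely open-ended answers
    ]
    return any(checks)

good = {"total_seconds": 240, "task_seconds": [8, 11, 9], "has_mouse": True,
        "mouse_moved": True, "needs_scroll": False, "scrolled": False}
bad = dict(good, total_seconds=15)
print(is_low_quality(good), is_low_quality(bad))  # False True
```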

When you set up your study, you can specify a separate redirect for these respondents under “Advanced settings”. If you are using our panel respondents, you do not pay for low-quality answers.