Technical points on DCE with Conjoint.ly


This note is prepared for those familiar with the specifics of discrete choice experimentation to answer key questions in detail. Please contact the team if you have any further questions about the methodology.

DCE or conjoint? Conjoint.ly uses discrete choice experimentation, which is sometimes referred to as choice-based conjoint. DCE is a more robust technique consistent with random utility theory and has been shown to simulate customers’ actual behaviour in the marketplace (Louviere, Flynn & Carson (2010) cover this topic in detail). However, the output on relative importance of attributes and value by level is aligned to the output from conjoint analysis (partworth analysis).

Experimental design. Conjoint.ly uses the attributes and levels you specify to create a choice design, optimising balance, overlap, and other characteristics. Our algorithm does not specifically attempt to maximise D-efficiency, but it tends to produce D-efficient designs. In most cases, the number of choice sets is excessive for one respondent and the experiment is split into multiple blocks. Each choice set consists of several product construct alternatives and, by default, one “do not buy” alternative.
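
As an illustration of the blocking described above, the sketch below assembles a naive choice design from hypothetical attributes and levels. It is not Conjoint.ly’s algorithm (which additionally optimises balance, overlap, and other characteristics); it only shows the shape of the output: choice sets of several alternatives plus a “do not buy” option, split into blocks.

```python
import itertools
import random

# Hypothetical attributes and levels (for illustration only).
attributes = {
    "Brand": ["A", "B", "C"],
    "Size": ["Small", "Large"],
    "Colour": ["Red", "Blue"],
}

# Full factorial: every possible product construct (3 * 2 * 2 = 12).
constructs = list(itertools.product(*attributes.values()))

random.seed(0)
random.shuffle(constructs)

# Group constructs into choice sets of 3 alternatives plus a "do not buy" option.
alternatives_per_set = 3
choice_sets = [
    list(constructs[i:i + alternatives_per_set]) + ["do not buy"]
    for i in range(0, len(constructs), alternatives_per_set)
]

# Split the choice sets into blocks so that no single respondent sees all of them.
sets_per_block = 2
blocks = [choice_sets[i:i + sets_per_block]
          for i in range(0, len(choice_sets), sets_per_block)]
```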

Minimum sample size. Conjoint.ly automatically recommends a minimum sample size. In most cases, it is between 50 and 300 responses. In our calculations, we use a proprietary formula that takes into account the number of attributes, levels, and other experimental settings.
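
For comparison only, a widely cited public heuristic for main-effects choice-based conjoint (the Johnson and Orme rule of thumb) can be sketched as follows. It is not Conjoint.ly’s proprietary formula, and the numbers it produces will generally differ from Conjoint.ly’s recommendation.

```python
import math

def rule_of_thumb_sample_size(max_levels, tasks_per_respondent, alternatives_per_task):
    """Johnson and Orme rule of thumb for main-effects CBC: n >= 500c / (t * a),
    where c is the largest number of levels in any attribute, t is the number of
    choice tasks per respondent, and a is the number of alternatives per task.

    This is NOT Conjoint.ly's proprietary formula, only a public heuristic.
    """
    return math.ceil(500 * max_levels / (tasks_per_respondent * alternatives_per_task))

# Example: largest attribute has 4 levels, 10 tasks, 3 alternatives per task.
rule_of_thumb_sample_size(4, 10, 3)  # -> 67
```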

Relative importance of attributes and value of levels. Conjoint.ly estimates a hierarchical Bayesian (HB) multinomial logit model of choice using responses deemed valid. The value (partworth) of each level reflects how strongly that level sways the decision to buy the construct. Attributes with large variations in this sway are deemed more important. Specifically, we calculate attribute importance and level value scores (partworth utilities) by taking coefficients from the estimated model and linearly transforming them so that:

  • in each attribute, the sum of absolute values of positive partworths equals the sum of absolute values of the negative ones, and
  • across attributes, the sum of the spreads (maximum minus minimum) of partworths equals 100%.
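
The transformation in the two bullets above can be sketched as follows, using hypothetical raw coefficients for two attributes:

```python
# Hypothetical raw coefficients from a fitted logit model, one list per attribute.
raw = {
    "Brand": [0.9, 0.1, -0.4],
    "Price": [1.2, 0.0, -1.2],
}

# Step 1: centre each attribute's coefficients so that the sum of absolute
# values of positive partworths equals that of the negative ones.
centred = {a: [c - sum(cs) / len(cs) for c in cs] for a, cs in raw.items()}

# Step 2: scale everything so the spreads (max - min) sum to 100% across attributes.
total_spread = sum(max(cs) - min(cs) for cs in centred.values())
partworths = {a: [100 * c / total_spread for c in cs] for a, cs in centred.items()}

# Relative importance of each attribute = its share of the total spread.
importance = {a: 100 * (max(cs) - min(cs)) / total_spread
              for a, cs in centred.items()}
```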

Marginal willingness to pay. For experiments where one of the attributes is price, Conjoint.ly estimates a separate model with price as a numerical variable. We also check whether the measure can appropriately be calculated, taking into account both the experimental set-up and the received responses (for example, limiting MWTP calculation when there is non-linearity in price). We use the concept of “Market Value of Attribute Improvement” (MVAI), but unlike the original paper we find the values numerically rather than using the closed-form formula.
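
A minimal sketch of finding MWTP numerically: the price increase that exactly offsets the utility gain from an attribute improvement. The price utility function below is hypothetical (and non-linear, which is why there is no closed form), and simple bisection stands in for Conjoint.ly’s actual procedure.

```python
def price_utility(price):
    # Hypothetical (non-linear) utility contribution of price.
    return -0.02 * price - 0.0001 * price ** 2

def mwtp(base_price, utility_gain, lo=0.0, hi=1000.0, tol=1e-6):
    """Find the price increase d such that the utility lost by moving from
    base_price to base_price + d equals utility_gain, by bisection."""
    f = lambda d: (price_utility(base_price) - price_utility(base_price + d)) - utility_gain
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid          # not enough utility lost yet: increase d
        else:
            hi = mid          # too much utility lost: decrease d
    return (lo + hi) / 2

m = mwtp(100, 0.5)  # willingness to pay for an improvement worth 0.5 utils
```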

Market share simulation. Market share simulation is performed using individual coefficients from the estimated HB multinomial logit model.
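
The share-of-preference logic can be sketched as follows: each respondent’s own coefficients give utilities for the candidate products, and the resulting logit choice probabilities are averaged across the sample. The utilities below are hypothetical.

```python
import math

def simulate_shares(individual_utilities):
    """individual_utilities: one list of product utilities per respondent.
    Returns average multinomial logit choice probabilities per product."""
    n_products = len(individual_utilities[0])
    shares = [0.0] * n_products
    for utils in individual_utilities:
        denom = sum(math.exp(u) for u in utils)
        for j, u in enumerate(utils):
            shares[j] += math.exp(u) / denom
    return [s / len(individual_utilities) for s in shares]

# Two hypothetical respondents, three candidate products.
shares = simulate_shares([[1.0, 0.2, -0.5], [0.1, 0.9, 0.3]])
```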

Ranked list of product constructs. Conjoint.ly forms the complete list of product constructs using all possible combinations of levels and ranks them based on a score computed from the relative level value scores (partworths).
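
The ranking step can be sketched as follows, with hypothetical partworths: enumerate all combinations of levels, score each by summing its level values, and sort.

```python
import itertools

# Hypothetical level value scores (partworths), one dict per attribute.
partworths = {
    "Brand": {"A": 20.0, "B": -5.0, "C": -15.0},
    "Size": {"Small": -10.0, "Large": 10.0},
}

# All possible product constructs, scored by the sum of their level partworths.
combos = list(itertools.product(*[list(levels.items()) for levels in partworths.values()]))
ranked = sorted(
    ((sum(v for _, v in combo), tuple(name for name, _ in combo)) for combo in combos),
    reverse=True,
)
# ranked[0] is the top-scoring construct.
```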

Segmentation. Conjoint.ly segments the market based on the individual coefficients from the estimated HB multinomial logit model using k-means clustering. We provide the values of the Calinski-Harabasz criterion (to help choose the appropriate number of clusters) and the normalised (0 to 1) Dunn partition coefficient for fuzzy k-means (to help choose the number of clusters and to decide whether the segmentation is crisp at all, and hence whether segmenting is appropriate). We provide the same reporting for each segment.
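
A toy illustration of this approach, using a small pure-Python k-means and the Calinski-Harabasz criterion on hypothetical individual coefficients (Conjoint.ly’s implementation will differ):

```python
import random

def dist2(a, b):
    # Squared Euclidean distance between two points.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Basic Lloyd's algorithm: assign points to nearest centre, recompute centres."""
    rnd = random.Random(seed)
    centres = rnd.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centres[j])) for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centres[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels, centres

def calinski_harabasz(points, labels, centres):
    # Ratio of between-cluster to within-cluster dispersion (higher = crisper).
    n, k = len(points), len(centres)
    overall = tuple(sum(c) / n for c in zip(*points))
    between = sum(sum(1 for l in labels if l == j) * dist2(c, overall)
                  for j, c in enumerate(centres))
    within = sum(dist2(p, centres[l]) for p, l in zip(points, labels))
    return (between / (k - 1)) / (within / (n - k))

# Two well-separated clouds of hypothetical "respondent coefficients".
points = [(0.1, 0.0), (0.2, 0.1), (0.0, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels, centres = kmeans(points, 2)
score = calinski_harabasz(points, labels, centres)
```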

Raw response data

The raw data collected in all experiments are available in the Excel sheets, which makes it possible to do additional analysis on the data outside of Conjoint.ly. In particular, the “Raw data” sheet will contain the following key tables:

  • RAW RESPONSES is the list of all responses (by respondent, question, and block)
    • participant_id is the ID of the respondent
    • block is the number of the block
    • set_seq_order is the question number
    • alternative_seq_order is the chosen alternative
  • LIST OF LEVELS AS ARRANGED IN QUESTIONS
    • alternative_seq_order is the sequential number of the alternative
    • AseqNum is the sequential number of the attribute
    • LseqNum is the sequential number of the level

The values of set_seq_order and alternative_seq_order are not necessarily in the order in which each respondent sees them, because the order of questions is randomised for each respondent, and the order of alternatives is randomised within each question.
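
For analysis outside Conjoint.ly, the two tables can be joined as sketched below to recover which levels each chosen alternative contained. The column names follow the “Raw data” sheet; the rows themselves are hypothetical.

```python
# RAW RESPONSES: (participant_id, block, set_seq_order, chosen alternative_seq_order)
responses = [
    (101, 1, 1, 2),
    (101, 1, 2, 1),
]

# LIST OF LEVELS AS ARRANGED IN QUESTIONS:
# (block, set_seq_order, alternative_seq_order) -> [(AseqNum, LseqNum), ...]
arrangement = {
    (1, 1, 1): [(1, 1), (2, 2)],
    (1, 1, 2): [(1, 2), (2, 1)],
    (1, 2, 1): [(1, 1), (2, 1)],
    (1, 2, 2): [(1, 2), (2, 2)],
}

# Join: for each answered question, look up the levels of the chosen alternative.
chosen_levels = {
    (pid, s): arrangement[(block, s, choice)]
    for pid, block, s, choice in responses
}
```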