# Technical points on prediction markets with Conjoint.ly

This note is prepared for those familiar with the specifics of prediction market methodology to answer key questions in detail. Please contact the team if you have any further questions about the methodology.

Prediction markets on Conjoint.ly are specifically designed for product and concept testing through aggregating the audience’s binary assessment of a particular statement or question. Technically, our prediction markets tool is a betting game: Each respondent gets only one chance to bet on a question. This dynamic is optimised for panel respondents and survey-takers who are not necessarily trained to make predictions and may encounter prediction markets as a technique only once. Furthermore, the chatbot-like survey template aims to keep respondents engaged throughout the survey for best response quality.

## Survey structure

Introductory messages are shown to each respondent to greet them into the survey and explain the rules. The system initially allocates each respondent 100 points to bet in the survey. These points cannot be traded for money, and no such suggestion is given to respondents.

As a next step, each respondent is served one or more calibration questions. These questions should pertain to known past facts for which there is a verifiable correct answer.

When respondents select the right answer to a calibration question, they are awarded points equal to the amount they bet multiplied by the odds for that answer. When respondents select the wrong answer, they lose the points bet on that answer. The number of points a respondent allocates generally indicates their level of confidence in a particular answer.
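The scoring rule above can be sketched as follows. The function and parameter names are illustrative, not Conjoint.ly's API, and the sketch assumes winnings are credited on top of the current balance while a wrong answer forfeits only the stake:

```python
def settle_calibration_bet(balance: float, bet: float, odds: float, correct: bool) -> float:
    """Return a respondent's point balance after one calibration question.

    A correct answer is awarded `bet * odds` points; a wrong answer loses
    the points bet. Names and the exact crediting convention are
    illustrative assumptions, not Conjoint.ly's documented implementation.
    """
    if bet < 0 or bet > balance:
        raise ValueError("Bet must be between 0 and the current balance")
    if correct:
        return balance + bet * odds  # prize equals the bet times the odds
    return balance - bet             # wrong answer: the stake is lost

# Example: starting from the initial 100 points, betting 40 at odds of 3 to 1
print(settle_calibration_bet(100, 40, 3, correct=True))   # 220.0
print(settle_calibration_bet(100, 40, 3, correct=False))  # 60.0
```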

The purpose of calibration questions is threefold:

1. To introduce respondents to the dynamic of the prediction market test.
2. To differentiate between knowledgeable and diligent respondents and the rest: Those who respond correctly end up with more points that can be placed as bets for prediction questions (giving more weight to their predictions).
3. To test whether respondents as a whole are knowledgeable in a particular area.

Prediction questions are the main part of the survey. You can test ideas, product concepts, claims, or advertising stimuli that have not been launched before. There are no correct or incorrect answers to these questions, so respondents neither win nor lose points in this component of the survey. The amount bet on each question still indicates the respondent's level of confidence in their answer.
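Because bets on prediction questions act as confidence weights, one natural way to aggregate them is by points rather than by head count. The sketch below is an assumption about how such weighting could work, not Conjoint.ly's documented reporting formula:

```python
from collections import defaultdict

def weighted_prediction_shares(bets: list[tuple[str, float]]) -> dict[str, float]:
    """Aggregate prediction-question bets into confidence-weighted shares.

    `bets` is a list of (chosen_option, points_bet) pairs; points act as
    confidence weights, so respondents with larger bets (e.g. those who
    earned points in calibration) count for more. Illustrative only.
    """
    totals: dict[str, float] = defaultdict(float)
    for option, points in bets:
        totals[option] += points
    grand_total = sum(totals.values())
    return {option: pts / grand_total for option, pts in totals.items()}

# Two respondents back "Yes" with 130 points in total, two back "No" with 70
shares = weighted_prediction_shares([("Yes", 80), ("No", 20), ("Yes", 50), ("No", 50)])
print(shares)  # {'Yes': 0.65, 'No': 0.35}
```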

Odds in calibration and prediction questions are always explained at the beginning of each question for every choice, so that respondents can take their level of confidence into account when answering. They are calculated as capped fair odds:

$$odds = \min\left(\frac{1 - bets}{bets},\ cap\right)$$

where:

• $odds$ are the odds shown to a respondent (“$odds$ to 1”, such as “3 to 1”) and used to calculate the prize;
• $bets$ is the share of bets placed on a particular answer (displayed to respondents as a percentage). Importantly, these shares are not taken from the whole set of responses but from a random subset (so as to limit groupthink);
• $cap$ is the maximum value of odds that will be shown.

Comments are collected in prediction questions: respondents are asked why they placed a bet on a particular answer. These comments are later shown to other respondents for:

• Consideration before making a prediction (as an attempt to present another respondent’s persuasive argument for each side of the question);
• Evaluation of whether the comment is a convincing rationale for predicting a particular option (after those later respondents have already made their own prediction).

Each comment’s conviction score in the report corresponds to the percentage of other respondents who voted the comment as convincing minus the percentage who voted it as unconvincing. Comments with low conviction scores will not be shown for consideration to other respondents.
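The conviction score definition above can be written as a short function. The denominator (the number of respondents who evaluated the comment) is an assumption based on the wording:

```python
def conviction_score(convincing: int, unconvincing: int, evaluators: int) -> float:
    """Return a comment's conviction score as a percentage.

    Defined as the percentage of evaluating respondents who voted the
    comment convincing minus the percentage who voted it unconvincing.
    The denominator choice is an assumption, not a documented detail.
    """
    if evaluators <= 0:
        raise ValueError("At least one evaluator is required")
    if convincing + unconvincing > evaluators:
        raise ValueError("Votes cannot exceed the number of evaluators")
    return 100 * (convincing - unconvincing) / evaluators

# Example: 30 of 50 evaluators voted convincing, 10 voted unconvincing
print(conviction_score(30, 10, 50))  # 40.0
```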