Choice Based Conjoint (CBC)

Advanced Analytical Techniques
Presentation to Central Operations Group
Objectives
• To increase Central Operations Group's understanding of the following analytical techniques
– Choice Based Conjoint (CBC) studies
• MaxDiff/Paired Comparisons
• Conjoint/DCM
• Specifically, we will cover
– What are these techniques? How do they meet the needs of our clients?
– What are the challenges to COG?
– What deliverables do we provide to our clients?
• Hopefully, the info session will lead to
– Smoother workflow
– Higher quality of work
– Increased researcher/client satisfaction
Choice Based Conjoint Studies
CBC – Background
• Business objectives
– To search for the optimal feature combination as part of new product development
– To test market potential for promotion and pricing strategies
– To optimize portfolio management
– To understand what product attributes drive purchase
– To calculate the value of product features
• A CBC study refers to a research project where one or more Choice Based Conjoint (CBC) exercises are used in the questionnaire:
– a list of alternatives that are presented in a choice board, also called a task
– any number of tasks
– defined by a statistical experimental design
– different respondents will see different versions of these tasks
Choice Based Conjoint Studies
Maximum Difference Scaling (MaxDiff)
Maximum Difference Scaling (MaxDiff) - Background
• Researchers often use rating scales to solicit absolute measures on a wide variety of product attributes
– Pros: very simple to execute and allow respondents to evaluate many items at a time
– Cons: prone to halo effects and heterogeneity in scale use (e.g., everything is important or nothing is), so these ratings can be of limited usefulness
• Paired Comparisons and Trade-off Models
– Originated in psychometrics to obtain implicit rankings and/or relative measures from stimuli (i.e., derived importance)
– Answers are elicited by simply asking for preferences: do you prefer A to B? A to C? B to C? etc.
– The choices can then be treated as results from a choice-based conjoint exercise
Maximum Difference Scaling (MaxDiff) – Client Needs
• When we are interested in using the trade-off method to get at the stated preferences and relative importance of the concepts or product/service attributes involved
• To establish the relative preference of a list of items:
– Name testing
– Claims/messages testing
– Feature prioritization
– Prioritization of decision criteria (attribute importance)
Maximum Difference Scaling (MaxDiff) - Methodology
• Remember what a CBC exercise is?
• The respondent is asked to give the most preferred item and the least preferred item out of the list of attributes shown.
• Example: "Which of the following attributes are … when choosing a long distance provider?"
Maximum Difference Scaling (MaxDiff) - Methodology
• The benefit: out of one single task, we get a lot of information.
• In the previous example, we have 4 items; of the 6 possible pairs, we know the preference for 5 from one task:
– E>A
– E>B
– E>J
– B>A
– J>A
– B?J (unknown)

# items   Possible pairs   Known pairs
2         1                1
3         3                3
4         6                5
5         10               7
6         15               9
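The table above can be reproduced with a short sketch: one best/worst task over n items reveals 2(n-1) - 1 of the C(n, 2) unordered pairs (the best item beats the other n-1, the worst loses to the other n-1, and the best-vs-worst pair is shared between those two counts).

```python
from math import comb

def maxdiff_pair_info(n_items: int) -> tuple[int, int]:
    """Return (possible pairs, pairs revealed by one best/worst task)."""
    possible = comb(n_items, 2)     # every unordered pair of items
    known = 2 * (n_items - 1) - 1   # best beats n-1, worst loses to n-1,
                                    # best-vs-worst counted only once
    return possible, known

for n in range(2, 7):
    print(n, *maxdiff_pair_info(n))
```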
Maximum Difference Scaling (MaxDiff) - Methodology
• Key Advantages
– It reduces or eliminates scale usage biases.
• Respondents in Japan will provide answers in the same way as those in Brazil.
– Easy task for respondent
– A larger number of items can be used
– Provides interval scaled preference data and the relative importance of all
products/features tested
• Key Disadvantages
– Gives no strategic/diagnostic information.
– Preference is measured relative to the most/least preferred products (i.e., it doesn't eliminate the "best of a bad bunch") unless you do anchoring.
– Tasks can sometimes become repetitive if there are lots of items/attributes.
Maximum Difference Scaling (MaxDiff) - Methodology
• The number of tasks and items is determined by
– the total number of items/features,
– the complexity of the items,
– the sample size, and
– the precision required in the preference estimates.
• Analytic considerations when generating the design. Ideally,
– each item should be shown to each respondent at least 3 times.
– each item should receive a minimum of 500 exposures across the entire sample.
– if you have a key segment you are interested in, it should have a sample size of about n=100-150.
– if you have 12 items and you want to show 4 options per task, you will need to do 9 tasks (12 items × 3 exposures / 4 options per task).
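The sizing rule of thumb above can be sketched directly (assuming the "show each item at least 3 times per respondent" guideline):

```python
import math

def tasks_needed(n_items: int, options_per_task: int, exposures: int = 3) -> int:
    """Tasks needed so every item appears `exposures` times per respondent."""
    return math.ceil(n_items * exposures / options_per_task)

print(tasks_needed(12, 4))  # matches the 12-item example: 9 tasks
print(tasks_needed(20, 4))  # matches the 20-item example: 15 tasks
```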
Maximum Difference Scaling (MaxDiff) - Methodology
• Traditional MaxDiff can handle up to 18-20 items
– Same tasks throughout (e.g., 8 tasks of 4 items each)
– Same number of options in each task
– The respondent is asked to do the same thing in each task: pick the best and the worst
– If you have 20 items and you want to show 4 options per task, you will need to do 15 tasks (20 items × 3 exposures / 4 options per task)
• Adaptive MaxDiff
– Usually used when we have more than 15 items
– Different tasks grouped in various stages
– Likely a different number of options in each task
– The respondent may not do the same thing in each task, e.g.
• pick the best and the worst,
• pick the worst only,
• pick the best only,
• rank the options
Maximum Difference Scaling (MaxDiff) - Methodology
• Adaptive MaxDiff
– Will likely take longer
– More interesting for respondents
– Provides very detailed estimates for the more important items, less detail on the ones that get dropped in the first stage (i.e., the worst ones), and a good level of detail on the ones in the middle of the pack.
– Depending on the client's need for precision on the middle-of-the-road items, it can be made shorter.
Maximum Difference Scaling (MaxDiff) - Methodology
• Below is an example of how Adaptive MaxDiff works given 20 items:
– Stage 1: Show 5 tasks, each with 4 randomly selected items; ask for the one the respondent likes best and the one he likes least in each. Discard the worst ones.
– Stage 2: Show 5 tasks, each with 3 surviving items; ask for the best and worst one. Discard the worst ones.
– Stage 3: Show 5 tasks, each with 2 items; ask for the better one. Discard the worst ones.
– Stage 4: Show 1 task with the surviving 5 items; ask the respondent to pick the worst one. Discard it.
– Stage 5: Show 2 tasks, each with 2 items; ask for the better one. Discard the worst ones.
– Stage 6: Show 1 pair of the 2 winners from Stage 5 and ask the respondent to pick the final winner.
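The 20-item staging above can be checked arithmetically: at every stage, tasks × items-per-task equals the number of surviving items (each is shown exactly once), and the discards whittle 20 items down to a single winner.

```python
# (tasks, items per task, items discarded) for each of the six stages above
stages = [
    (5, 4, 5),  # Stage 1: 20 items shown, 5 worst discarded
    (5, 3, 5),  # Stage 2: 15 -> 10
    (5, 2, 5),  # Stage 3: 10 -> 5
    (1, 5, 1),  # Stage 4:  5 -> 4
    (2, 2, 2),  # Stage 5:  4 -> 2
    (1, 2, 1),  # Stage 6: final pair -> winner
]
remaining = 20
for num, (tasks, per_task, discarded) in enumerate(stages, start=1):
    assert tasks * per_task == remaining  # every surviving item shown once
    remaining -= discarded
    print(f"Stage {num}: {tasks} task(s) of {per_task} -> {remaining} left")
```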
Maximum Difference Scaling (MaxDiff) - Methodology
• Anchoring
– To understand more than simple relative preferences and address the possibility of the "best of a bad bunch" scenario
– Incorporated in the modeling to provide the "none" estimate
– The resulting MaxDiff scores indicate absolute preference rather than relative preference
• With anchoring: 75% of respondents find Item A appealing vs. 50% for Item B
• Without anchoring: Item A has a relative score of 75 vs. 50 for Item B
• Example of an anchoring question:
– To be asked at the end of the MaxDiff exercise
– "More than one or none of these messages may be appealing to you. Which of the messages below, if any, are appealing to you?"
• [Pipe messages according to instructions in MaxDiff design]
• None of the above [ANCHOR EXCLUSIVE]
Maximum Difference Scaling (MaxDiff) - Challenges to COG
• Statistical design from Analytics Team
– Randomization instructions
– Examples
• Programming Team has established templates for both traditional MaxDiff and adaptive MaxDiff
– Considered the "standard" layout in PPE
• Quality control
– Testing of survey script
• Traditional MaxDiff design
– Respondents assigned to different blocks
– Options show up according to design
• Adaptive MaxDiff design
– Options show up according to design
– ATR data check
– Soft-launch data check
Maximum Difference Scaling (MaxDiff) - Modeling
• Partial data file for checking and advanced setup
• Data matched with the design using the BLOCK variable
• Simplified illustration of what modeling means:
[Illustration: the respondent's choice in the 1st task (3 options drawn from 9 items) becomes one observation in the model.]
• We use Sawtooth Software's CBC/HB product for estimation.
– The Hierarchical Bayes component allows us to provide individual-level estimates of respondents' preferences.
• Result of the modeling process
– A set of parameter estimates for each respondent that measures his or her preferences for each item/feature.
– These are often referred to as utilities.
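Once per-respondent utilities have been estimated, turning them into choice probabilities for one task is usually a multinomial logit transform. A minimal sketch (an assumption about the model form for illustration; this is not Sawtooth's CBC/HB estimator, which produces the utilities themselves):

```python
import math

def choice_shares(utilities):
    """Multinomial-logit choice shares for the items in one choice set."""
    m = max(utilities)
    expu = [math.exp(u - m) for u in utilities]  # shift by max for stability
    total = sum(expu)
    return [e / total for e in expu]

# Hypothetical utilities for a 3-option task
print([round(s, 3) for s in choice_shares([1.2, 0.4, -0.8])])
```

The highest-utility item always gets the largest share, and the shares sum to 1 by construction.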
Maximum Difference Scaling (MaxDiff) - Output
[Chart: indexed MaxDiff scores by package format, ordered from most likely to least likely: 131, 126, 103, 100, 94, 77, 69.]
• Indexed MaxDiff Score: the indexed score from the model. E.g., format #1 is nearly 2x as motivating as format #6.
Q14a/b/c. Please indicate which of the two Reactine packages you are most likely to purchase based on what the package communicates to you.
Q14d. Please indicate which of these Reactine products you are most likely to purchase and which you are least likely to purchase, based on what the package communicates to you.
Choice Based Conjoint (CBC)
CBC/DCM
CBC/DCM – Client Needs
• Business objectives
– To search for the optimal feature combination as part of new product development
– To test market potential for promotion and pricing strategies
– To optimize portfolio management
– To understand what product attributes drive purchase
– To calculate the value of product features
• Why CBC/DCM?
– When the number of concepts/marketing scenarios/product configurations we need to study is too numerous to consider monadic or sequential monadic testing.
– When we need to understand the purchase decision process and the trade-off between two or more components of products and/or market forces.
– When we need to study various versions of the product/market scenarios in a competitive environment. The most common example of this is a pricing study.
CBC/DCM – Client Needs
• Because of its flexibility and power, researchers have applied conjoint in a number of industries, including:
– Packaged goods
– Telecommunications
– Financial services
– Tourism
– Consumer electronics
– Automotive
– etc.
CBC/DCM – Methodology
• Choice-based conjoint relies on data from a discrete choice experiment in which respondents choose between sets of products.
• Each product is a hypothetical combination of attributes chosen by an experimental design procedure.
• The experiment involves
– presenting several sets of such products to each respondent, and
– having the respondent indicate which of the products he or she would be most likely to purchase.
• The model produces choice shares: estimates of the probability that the consumer would choose each item in the given choice set.
• The model enables the manager to specify various hypothetical pricing, promotion, and availability scenarios and to examine how choice shares change from scenario to scenario.
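The scenario logic can be sketched in miniature. Everything below is hypothetical and purely illustrative (two brands, two price points, logit shares): a product's utility is the sum of its attribute part-worths, and changing one attribute in a scenario shifts the simulated shares.

```python
import math

# Hypothetical part-worth utilities (not from any real study)
partworths = {"Brand A": 0.8, "Brand B": 0.3, "$0.99": 0.5, "$1.29": -0.2}

def shares(products):
    """Logit choice shares for products given as lists of attribute levels."""
    utilities = [sum(partworths[level] for level in p) for p in products]
    m = max(utilities)
    expu = [math.exp(u - m) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

base = shares([["Brand A", "$0.99"], ["Brand B", "$0.99"]])
hike = shares([["Brand A", "$1.29"], ["Brand B", "$0.99"]])
print(f"Brand A share: {base[0]:.0%} at $0.99, {hike[0]:.0%} at $1.29")
# Brand A's simulated share falls when only its own price rises
```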
Choice Based Models (Discrete/Conjoint)
What it is
• Choice Based Conjoint models vary the individual product attributes.
• Discrete Choice models show various complete concepts and vary the pricing in a competitive set.
• Most Choice Based modeling studies we design are a "hybrid", varying both price and attributes.
Conjoint Analysis: CBC vs. DCM
CBC
• Uses multiple factors (6-10) to describe products
• Respondents are shown a limited number of options per card (4-6)
• Usually comes at an earlier stage in product development, for:
– Market potential
– Best feature combination
– Rough price level
DCM
• Mostly uses Brand/Price combinations to describe products
• Respondents are shown many options that represent most of the market
• Usually comes at a later stage in product development, to:
– Test various marketing inputs, such as package, POS
– Determine pricing scenarios and product lineup vs. the competition
Example of a CBC Task
Example of a DCM Task
[Screenshot: approx. 30 SKUs shown in each task]
The Different Forms of a CBC Task
• Dual-Response
• Single Selection – a "none" option is shown with the product alternatives
• Constant Sum Allocation – e.g., physicians asked to allocate 10 choices for their next 10 patients among the options shown
• Discrete Choice
– remember the DCM examples shown earlier?
• Menu Based Choice
– choose between alternatives that represent sub-products, with the end goal of creating a full product
– Automobile purchase example:
• choosing the base model of the car based on brand, make, and price
• an upgrades menu can be provided so the respondent selects the optional features he would like to include in the final purchase
CBC/DCM – Design Considerations
• The number of factors and levels determines the size of the CBC model.
• A larger CBC model (lots of factors, each with lots of levels) requires a larger sample and potentially more complicated choice tasks.
• Most CBC exercises have about 6 to 8 choice tasks.
• The number of options/alternatives in each task depends on the complexity of the decision process.
• Restrictions/prohibitions – where certain combinations of factors/levels cannot be shown to the respondent
– Usually taken care of in the statistical design
CBC/DCM - Challenges to COG
• Statistical design from Analytics Team
– Example
– If there are multiple statistical design files, stitching instructions on how the different components should come together to form the full choice task.
– Randomization instructions on the order in which choice tasks/alternatives/factors should be shown to each respondent.
– Additional programming instructions as to what other information should be retained for the task, aside from the respondent's choices.
– Most of the time, the order of the options is randomized within each choice task for each respondent, but there are EXCEPTIONS.
CBC/DCM - Challenges to COG
• Key Steps
1. Programmers create a "mock-up" board to finalize all visual and functional components based on the statistical design.
2. Programmers create the full CBC exercise in Sparq.
• Default Sparq CBC template
• Additional scripts to implement any deviations
– e.g., changing the border color for the cell components
• Implementation of randomization and stitching instructions
• Additional scripts prohibiting certain choice combinations
• May need a complete custom setup
– BEST PRACTICE: create a mock-up board and have all the functional and visual components of that first board verified and approved by the researcher (and ideally the client) before duplicating that board for subsequent tasks.
CBC/DCM - Challenges to COG
• Key Steps (Cont'd)
3. Project manager coordinates with the programmer and the QA team for testing of the CBC task and full questionnaire.
• Without randomization
• With randomization
• Respondents assigned to different blocks
• Options show up according to design
4. A test link is established and sent to the researcher/client.
• QA from an overall perspective
5. ATR data check
• More complicated if the design involves piping choices from the CBC into follow-up questions
6. Soft-launch data check
7. Full launch
8. Flagging of speeders and other disqualified respondents
9. Close of field, with the full data file sent to the data analyst for weighting and filtering
CBC/DCM - Modeling
• Partial data file for checking and advanced setup
• Data matched with the design using the BLOCK variable
• Sawtooth Software's CBC/HB product is used for estimation.
– The Hierarchical Bayes component allows us to provide individual-level estimates of respondents' preferences.
• Result of the modeling process
– A set of parameter estimates for each respondent that measures his or her preferences for each item/feature.
– These are often referred to as utilities.
CBC Output - Isotherms
• The output of a choice-based conjoint model is a set of attributes shown in a preference chart. The percentages represent the relative importance of each attribute, or factor, in the purchase decision; the levels within each factor are shown in order of preference.
[Chart: attribute importances (Size 23%, Price 39%, Main Ingredients 31%, Promotion 7%) with the levels tested: Size (8 oz., 10 oz., 12 oz.), Price ($0.70, $0.85, $0.99, $1.10), Main Ingredients (Granola, Coconut, Almonds, Caramel), Promotion (Free CDs, No promotion).]
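Importance percentages in such a chart are typically computed from the range of each attribute's part-worth utilities, expressed as a share of the total range. A minimal sketch with hypothetical part-worths (illustrative values only, not the study's actual estimates):

```python
# Hypothetical part-worth utilities for the granola-bar style example
partworths = {
    "Size":             {"8 oz.": -0.6, "10 oz.": 0.1, "12 oz.": 0.5},
    "Price":            {"$0.70": 0.9, "$0.85": 0.4, "$0.99": -0.3, "$1.10": -1.0},
    "Main Ingredients": {"Granola": 0.7, "Coconut": 0.2, "Almonds": -0.3, "Caramel": -0.6},
    "Promotion":        {"Free CDs": 0.2, "None": -0.2},
}

# Importance = an attribute's utility range as a share of the summed ranges
ranges = {a: max(u.values()) - min(u.values()) for a, u in partworths.items()}
total = sum(ranges.values())
for attr, r in sorted(ranges.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {100 * r / total:.0f}%")
```

With these made-up numbers, Price comes out most important because it spans the widest utility range.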
CBC Output - Simulator
• A Choice-Based Conjoint model also estimates the respondent choice probabilities for products in a choice set.
[Example simulator: Product X – Claims DCM, November 19, 2007. Claims tested by category – Efficacy: "Wide therapeutic window", "Gets patient to target quickly"; Safety: "Equivalent safety profile to Product Y", "Safe for patients with diabetes"; Ease of Use: "No monitoring", "Easy to titrate".]
Of your next 10 patients of each type, to how many would you Rx …? (Simulated choice share)

Patient type     Option 1: Product X   Option 1: Product Y   Product Z (existing claims)
Patient Type A   28%                   24%                   48%
Patient Type B   30%                   25%                   45%
Patient Type C   27%                   23%                   50%
All Patients     28%                   24%                   47%
DCM Output - Simulator
• The primary output of a DCM is a simulator, estimating the respondent choice probabilities for products in the set.
Thank You!