Adaptive Conjoint Analysis

Health Care Choice Modeling Training
Jane Tang
October 26, 2010
- All the flavors of conjoint analysis
- History
- What we do today, comparison of methods
- “Calibration”
- Market share vs. Choice Share
- Forecasting – when do we need it?
- Pricing for choice models
- Input from clients
- Key impact areas
History of Conjoint
• 1970s – Full Profile Conjoint
– Rating/Ranking based Conjoint (Paul Green)
– Dan McFadden introduced Choice theory in Transportation
• 1980s – ACA & CBC
– Rich Johnson invented Adaptive Conjoint Analysis – launching
Sawtooth Software
– Dick Wittink introduced Conjoint analysis (ACA) to patient-based
health care research
– Jordan Louviere introduced Choice Based Conjoint to Marketing
• 1990s – HB estimation
• 2000s – CBC Becomes Most Widely Used
• 2008/9 – Adaptive CBC (A/CBC) was introduced.
Overview of Conjoint Analysis:
• Conjoint analysis is a popular marketing research
technique that marketers use to determine what
features a new product should have and how it
should be priced.
• Conjoint analysis became popular because it was a
far less expensive (smaller sample size) and more
flexible way to address these issues than concept
testing.
– When there are just too many potential product combinations
for concept testing
– Need to understand the tradeoff respondents make
– Need to understand the competitive context
http://intranet/download/attachments/10027862/Discrete+Choice+Modeling+vs+Concept+Tests.pdf?version=1
Overview of Conjoint Analysis:
• Conjoint analysis involves showing respondents potential product combinations.
• Products can be factored into parts, called factors. The different options within each factor represent the factor levels.
• The basic premise of conjoint analysis is that a respondent makes purchase decisions based on the inherent value he places on the factor levels.
– He will trade off levels across different factors, e.g. trade his favourite colour for a lower price, etc. (a small worked example follows).
– However, the recent development of A/CBC has changed this: non-compensatory rules are now allowed.
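To make the compensatory premise concrete, here is a minimal Python sketch with made-up part-worths and factors (colour and price are hypothetical, not from this deck) showing how an additive value model lets a respondent trade his favourite colour for a lower price, and how an A/CBC-style non-compensatory rule differs:

```python
# Hypothetical part-worth utilities for one respondent (illustration only)
partworths = {
    "colour": {"red": 0.8, "blue": 0.2},
    "price":  {"$10": 0.9, "$15": 0.1},
}

def utility(product):
    # Compensatory rule: value simply adds up across the chosen factor levels
    return sum(partworths[factor][level] for factor, level in product.items())

a = {"colour": "red",  "price": "$15"}   # favourite colour, higher price
b = {"colour": "blue", "price": "$10"}   # less-liked colour, lower price
print(round(utility(a), 2), round(utility(b), 2))   # 0.9 vs 1.1: he trades colour for price

# A/CBC-style non-compensatory screen: reject anything priced above $10 outright,
# no matter how attractive the other factor levels are
acceptable = [p for p in (a, b) if p["price"] == "$10"]
print(acceptable)   # only product b survives the screen
```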
Overview of Conjoint Analysis:
• These three steps form the basics of conjoint
analysis:
– collecting trade-offs: a questionnaire with a statistical design showing various versions of the product, and respondents' input in terms of product preference.
– estimating buyer value systems: modeling by the analytics
team.
– making predictions: simulation based on the model
developed. Analytics team working with you for results best
suited to answer your client’s marketing question.
Different flavors of a conjoint: Rating based Conjoint
• We design conjoint cards that represent possible products
based on factor levels. Respondents are asked to rate each card in terms of purchase intent.
Please rate this product in terms of how likely you would be to use it for your … patients.
Use the scale where 1 means Very unlikely to use this product, and 10 means Very likely to use this product.
Example conjoint card (factor: level shown):
– PRIMARY ENDPOINT (compared to Warfarin), Incidence of stroke or systemic embolism at the end of treatment period (2 years): No warfarin data available
– PRIMARY ENDPOINT (compared to Aspirin), Incidence of stroke or systemic embolism at the end of treatment period (2 years): 1.56% vs 2.6% for aspirin; 40% Relative Rate Reduction; p<0.05, statistically significant (compared to aspirin)
– PRIMARY SAFETY ENDPOINT: OVERALL POPULATION (compared to Warfarin), Major Bleeding: No warfarin data available
– PRIMARY SAFETY ENDPOINT: WARFARIN NAIVE POPULATION, Major Bleeding
– PRIMARY SAFETY ENDPOINT: OVERALL POPULATION (compared to Aspirin), Major Bleeding: 2.1% vs 1.48% for aspirin; -42% Relative Rate Reduction; p<0.05, statistically significant (compared to aspirin)
– Dosing: Oral, twice-daily
– Half life: 10-15 hrs
– Renal Elimination: 25%
– Average CHADS2 score of participants: 2
• Alternatively, we can show the respondent a stack of cards and ask him to rank all the cards in terms of his preference.
Different flavors of a conjoint: Rating based Conjoint
• Analysis: based on regression. Linear (ratings), logistic (ranking). A minimal sketch of the individual-level estimation follows this slide.
• Individual-level estimates are possible, i.e., each respondent will have a model based on his own data: we collect lots of information from each individual.
• Problems:
– Ratings: scale usage issues, yea-sayers vs. nay-sayers.
– Ranking: only works for very small problems
• Output:
– Preference for the various product options on the same
rating scale
• simulated preference rating
– Relative preference for the various levels within each factor
• Isotherm
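A minimal sketch of this individual-level estimation for a ratings-based design, using ordinary least squares on dummy-coded factor levels; the factors, design and ratings below are hypothetical:

```python
import numpy as np

# One respondent's 1-10 purchase-intent ratings of 8 hypothetical conjoint cards.
# Columns of X: intercept, dose = once-daily (vs. twice-daily reference),
# efficacy = high (vs. moderate reference), price = $20 (vs. $40 reference).
X = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
])
y = np.array([9, 5, 7, 6, 8, 3, 7, 8])

# Ordinary least squares: betas[1:] are this respondent's part-worths
# relative to each factor's reference level
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
labels = ["intercept", "once-daily", "high efficacy", "$20 price"]
print(dict(zip(labels, betas.round(2))))
```

Repeating this card-by-card regression for every respondent yields the individual-level models referred to above.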
Different flavors of a conjoint: Adaptive Conjoint Analysis
• Adaptive Conjoint Analysis (ACA): Sawtooth
proprietary technology. Only works within the
Sawtooth SSI Web interviewing interface.
• Most popular conjoint technique in the 1990s. Still enjoys popularity in certain research areas.
• The respondent task is adaptive. That is, rather than
a fixed statistical design, the respondent’s later
conjoint tasks are determined by his preference
selection made earlier.
• Claims to be able to handle a large number of factors:
– by focusing respondents on a few factors that are
considered most important through direct
solicitation.
Different flavors of a conjoint: Adaptive Conjoint Analysis
• Output:
– Similar to rating based, except we can simulate
respondent’s share of preference for the product by
assigning each respondent to his most preferred
product.
– The model is produced for each individual separately.
It is possible at the end of the interview to then build an
ideal product for each respondent.
• E.g. Tailoring patient preference to treatment options.
Different flavors of a conjoint: Self-Explicated Conjoint
• The poor-man’s conjoint
• Uses direct questioning to get at the respondent's factor importance and preferences for the different levels within the factors.
• allocate 100 points across all the factors
• Rate each level within each factor in terms of preference.
• Not recommended.
What we do today: Choice Based Conjoint
• Choice Based Conjoint: we design conjoint cards that represent
possible products based on factor levels. Products are grouped
into options within a card, and respondents are asked to choose
within the group.
• Over the last decade, academics and practitioners have favored choice-based over ratings-based methods:
– Stronger mathematical theory (McFadden: MNL theory; a minimal sketch of the MNL share rule follows the example choice task below)
– Stronger psychological underpinnings
– Argued to be more accurate (comparison to market data)
Example choice task:
Option 1: One month free supply of medication. Activation is required in order to obtain the savings.
Option 2: Receive savings over 3 months: pay no more than $22.50 in your first month, $20 in your second month, and $17.50 in your third month. Includes a one month free supply of medication. Activation is optional; a small additional financial discount per redemption will be provided with activation.
Option 3: Receive 60% off your co-pay for 6 months. Includes a one month free supply of medication. Activation is optional; additional disease education and/or health and wellness information will be provided with activation.
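A minimal sketch of the MNL (logit) share rule that underlies CBC estimation and simulation; the option utilities below are made up for illustration:

```python
import numpy as np

# Total utilities of the three options on one choice task (hypothetical values)
U = np.array([0.4, 1.2, 0.7])

# MNL rule: the probability of choosing an option is exp(U) normalised
# over all options shown on the task
shares = np.exp(U) / np.exp(U).sum()
print(shares.round(3))   # [0.219 0.486 0.295] -> option 2 is most likely to be chosen
```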
What we do today: Discrete Choice Model
• DCM is really just one type of CBC, where the focus is less on
optimizing the product offer, more on the market competitive
context.
CBC:
• Uses multiple factors (6-10) to describe products
• Respondents are shown a limited number of options per card (4-6)
• Usually comes at the earlier stage in product development, for:
– Market potential
– Best feature combination
– Rough price level
DCM:
• Mostly uses Brand/Price combinations to describe products
• Respondents are shown many options that represent most of the market
• Usually comes at a later stage in product development, to:
– Test various marketing inputs, such as package, POS
– Determine pricing scenarios, product lineup vs. competition
CBC Choice Tasks
DCM Choice Tasks
What we do today: CBC & DCM
• Output: the basic output is still similar to that from ACA/rating-based conjoint
– CBC:
• Factor importance/Level preference – Isotherm (a minimal importance calculation sketch follows this slide)
• Simulation: simulator, product optimization
• Individual level estimation allows you to further segment
the respondents.
– Potentially developing a different optimized product for each segment. Caution: there are no simple typing tools for these.
– DCM:
• Usually no isotherm except for impact of packaging
change, sale/promotions
• Simulator: line optimization, pricing optimization
• Unlike ACA, no individual level recommendation.
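A minimal sketch of how factor importance is commonly derived from estimated part-worths (the range of a factor's level utilities as a share of the total range across factors); the part-worths below are hypothetical:

```python
# Hypothetical part-worths for one respondent (or for the total sample)
partworths = {
    "efficacy": {"high": 1.2, "moderate": 0.3, "low": -1.5},
    "dosing":   {"once daily": 0.5, "twice daily": -0.5},
    "price":    {"$20": 0.8, "$40": -0.8},
}

# Importance = range of a factor's utilities / sum of ranges across factors
ranges = {f: max(levels.values()) - min(levels.values())
          for f, levels in partworths.items()}
total = sum(ranges.values())
importance = {f: round(100 * r / total, 1) for f, r in ranges.items()}
print(importance)   # {'efficacy': 50.9, 'dosing': 18.9, 'price': 30.2}
```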
Variations on CBC
• MaxDiff/Best-Worst Scaling
– One factor CBC
– Often used for the stated importance question
– Output: isotherm – relative preference of the items (a simple counting sketch follows the output chart below)
• Anchored MaxDiff
– Add a direct question at the end
– Turn relative preference to “absolute”/anchored preference
• Adaptive MaxDiff – when there are too many items.
MaxDiff – Relative Preference Output, Anchored (Total sample, unweighted)
[Chart: relative preference scores for Attributes A through P]
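As a rough illustration of where such an output comes from, here is a simple best-minus-worst counting approximation; in practice the relative preferences are estimated with HB logit, and the responses below are made up:

```python
from collections import Counter

# (best item, worst item) chosen on each MaxDiff task, pooled across respondents
responses = [("F", "L"), ("K", "N"), ("F", "H"), ("C", "L"), ("K", "L")]

best = Counter(b for b, _ in responses)
worst = Counter(w for _, w in responses)
items = set(best) | set(worst)

# Counting score: times chosen best minus times chosen worst
scores = {item: best[item] - worst[item] for item in items}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```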
Variations on CBC
• Best-Worst Conjoint
– Standard CBC
– Ask respondents to choose both the most and least
preferred option to get more data out of each
respondent
• Respondent can do this additional task very easily and
quickly since they have already evaluated all the options
– The additional information improves the model
significantly.
• To be tested: could potentially mean a smaller sample / fewer tasks.
– Output: same as before
What we would like to do: Adaptive CBC
• We have only done one of these studies – an internal R on R on a technology product. It has never been tried in health care.
• Allows for non-compensatory decisions
– What process do the respondents go through to make
decisions? How likely will non-compensatory rules apply?
– More likely in patient based research
• Issues:
– Longer interview length, 50%-100% longer
– Sawtooth Proprietary software: respondents are routed out
of Sparq for this portion.
Market Share vs. Choice share
• Choice shares are NOT market shares
– 100% awareness,
– 100% availability
– “Overstatement” on the new products
– “Price is no object”
• In our experience, we generally under-estimate price
elasticity
– Other issues ….
Comparison should only be made to the “BASE
CASE” – not to current market share
“Calibration”
• When client insists on comparison to market
information:
– We calibrate the “Base Case” to market information:
external effects adjustment
– We apply the same adjustment to all the simulated scenarios
– Effectively we are doing the same comparison – only that we have now moved the “Base Case”. (A small worked example follows this slide.)
– However, even the calibrated choice shares are still NOT the
market shares.
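A minimal sketch of this calibration step, assuming a simple multiplicative external-effects adjustment; all shares below are hypothetical:

```python
# Simulated Base Case shares and the market information the client supplied
simulated_base = {"Product A": 0.40, "Product B": 0.35, "Product C": 0.25}
market_info    = {"Product A": 0.20, "Product B": 0.50, "Product C": 0.30}

# One external-effects factor per product, derived from the Base Case only
adjustment = {p: market_info[p] / simulated_base[p] for p in simulated_base}

def calibrate(scenario_shares):
    # Apply the same Base Case adjustment to any simulated scenario, then re-normalise
    raw = {p: s * adjustment[p] for p, s in scenario_shares.items()}
    total = sum(raw.values())
    return {p: round(v / total, 3) for p, v in raw.items()}

new_scenario = {"Product A": 0.45, "Product B": 0.30, "Product C": 0.25}
print(calibrate(new_scenario))   # calibrated, but still NOT market shares
```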
Forecast
• Market Shares are NOT one-time measures. They
reflect decisions consumers make over time.
– Trial: first purchase - Would you buy it?
– Repeat: subsequent purchases – Would you buy it
again?
• Calibrated choice shares, adjusted for media spend
and marketing plans, can be used to assess “Trial”.
– We have no information on “repeat”.
DCM alone will not give you “forecast”
• Bring in the “forecast” expert.
• Dr. Lin
Pricing for CBC – Analytics cost
• A standard CBC is about 60 hrs in the PPE,
– $9K internal cost
• Analytics will bill you the actual hours only
– If it will be more than 60 hrs, you will get notified.
– Rosanna bills at a lower rate than Jane, but might take
more time, so the cost will be about same in the end.
– $15K external to the client
• Your SBU keeps the difference
CBC Pricing - what do we need from the client?
• Sample specs:
– Sample size
• Who’s in the sample? How interested are they in this product?
• Model specs:
– Factors and levels
• # of factors, how many levels in each
• Restrictions: none/some/lots
– How the factors go together.
• Can we show everybody everything?
– Or do we have to worry about scenarios?
• Task specs:
– What question is asked of respondents:
• How many product options can we show? How many fixed
competitive options?
• What type of answers are we asking for?
– Choose one vs. allocation
What is reasonable?
• If we can show multiple product options per task to the
respondents,
– A sample size of 300 physicians should support a model with 6
factors, each with 5 levels, with no restrictions in the model
– We need n=1,500 patients for the same model
– assuming 50% are interested in the product
• If we can show one product option per task to the
respondents,
– A sample size of 300 physicians would only support a model
with 4 factors, each with 3 levels
– We need n=1,500 patients for the same model
– assuming 50% are interested in the product
Ask Jane, see the sample planning paper, or
http://intranet/pages/viewpage.action?pageId=10027862 (a rough rule-of-thumb check is sketched below)
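One way to sanity-check numbers like these is a commonly cited rule of thumb for CBC sample sizes, n × t × a / c ≥ 500 (respondents × tasks per respondent × options per task, divided by the largest number of levels in any factor); the task counts below are assumptions, not figures from this deck:

```python
def rule_of_thumb(n, tasks, options_per_task, max_levels):
    # Approximate number of times each level of the largest factor is seen
    exposures = n * tasks * options_per_task / max_levels
    return exposures, exposures >= 500

# e.g. 300 physicians, of whom ~50% are interested (n=150), 10 tasks,
# 4 product options per task, largest factor has 5 levels
print(rule_of_thumb(n=150, tasks=10, options_per_task=4, max_levels=5))
# -> (1200.0, True)
```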
What impacts pricing?
• Factors that impact Analytics cost:
– Complexity of the model:
• restrictions
• scenarios
• choices in stages / selection from menus
• large categorical factors: 8+ levels (except in MaxDiff)
• “unusual” requirements: purchases / virtual shopping
– Output requirements other than isotherm and simulator:
• “Calibration”
• Optimization scenarios
• Premium calculations / Willingness-to-pay
• Standard error estimates on choice shares / factor importance
• Segmentations
What impacts pricing?
• Factors that impact Sample/Ops cost:
– Size of the model:
• a larger model requires more sample
• longer task length, more incentive
– More complex design may require more
programming and PM cost as well.
• Options enabled/disabled based on previous
choice on the same page
• Adaptive factor levels
– Virtual shopping
http://intranet/display/research/Analytics
[email protected]