Session 78 PD, Credibility and the Impact on Assumption Setting
Moderator:
William M. Sayre, FSA, MAAA
Presenters:
Robert W. Foster, Jr., FSA, MAAA
Marianne C. Purushotham, FSA, MAAA
Session 78 PD
Credibility and the Impact on
Assumption Setting
Robert Foster FSA, MAAA
Marianne Purushotham FSA, MAAA
William Sayre FSA, MAAA
The revised Actuarial Standard of Practice No. 25
Credibility Procedures
- replaces -
Credibility Procedures Applicable to Accident and Health, Group Term Life, and Property/Casualty Coverages
3
Why the change?
The Life Committee recommended that the scope be expanded to life insurance and annuities, responding to a request for review by the ASB.
The General Committee recommended that the ASOP be expanded to all actuarial practice areas, and a multi‐discipline task force of the GC developed the exposure drafts.
When the change?
Adopted at the ASB meeting in December 2013, with effective date for services on or after May 1, 2014.
4
Process overview
[Flow diagram: subject data → evaluate subject data → pick procedure → blend subject data with relevant data → improved (blended) data]
5
AGENDA
• Follows the ASOP with some added comments
• The ASOP
– Scope
– Definitions
– Guidance
– Communications
– Appendix
• Added comments
– Additional guidance
– Timeliness of the ASOP
– Summary
6
Old Scope
When the ASOP applies
Coverages, procedures, exemptions
• Coverages:
– accident and health;
– group term life;
– property/casualty and other non-life coverages;
– financial security systems, such as self-insurance
• Procedures:
– ratemaking,
– prospective experience rating, and
– whenever else credibility procedures are used
• Exemptions: does not apply to
– individual life insurance and annuities, and
– pension plans.
7
New Scope
When the ASOP Applies
Situations, no exemption but no compulsion
• When selecting and when applying credibility procedures
• Situations
– when required by law, regulation, or binding authority to assess credibility;
– when the actuary chooses or states compliance with the ASOP;
– when blending subject experience with other experience;
– when the actuary represents that data is credible
• ASOP 35 governs if there is a conflict for pensions
8
Definitions
Credibility
A measure of the predictive value that the actuary attaches to a
particular set of data
• “predictive” in the statistical sense
9
Definitions
Credibility procedure
• the evaluation of subject experience for potential use in setting
assumptions without reference to other data; or
• the identification of relevant experience and
• the selection and implementation of a method for blending the
relevant experience with the subject experience
10
Definitions
Subject experience
A specific set of data drawn from the experience under
consideration used for the purpose of predicting the parameter
under study
11
Definitions
Relevant experience
Sets of data that include data other than the subject experience,
that, in the actuary’s judgment, are predictive of the parameter
under study
12
Guidance
Purposes of credibility procedures
• to evaluate subject experience for potential use in setting
assumptions without reference to other data; and
• to improve the estimate of parameters under study
13
Guidance
Selecting the procedures - criteria
• whether the procedure is expected to produce reasonable results;
• whether the procedure is appropriate for the intended use and
purpose; and
• whether the procedure is practical to implement when taking into
consideration both the cost and benefit of employing a procedure.
14
Guidance
Selecting relevant experience
• relevant experience should have characteristics similar to the
subject experience
• there is no presumption that relevant experience exists
15
Guidance
Professional Judgment
• The use of credibility procedures is not always a precise
mathematical process
• For example, the actuary may, in some situations, assign full,
partial, or zero credibility to the subject experience without using
a rigorous mathematical model
16
Guidance
Homogeneity
• Grouping or stratification may increase the usefulness of the data
• Applies to subject experience and to relevant experience
17
Communications and Disclosures
ASOP 25 refers to ASOP 41
When disclosures are needed
• If methods or assumptions were prescribed
• If the actuary states reliance on other sources
• If there are material deviations from the ASOP
18
Appendix
• Some background and introductory material
• Additional information can be obtained from AAA, SOA, CAS
19
Additional Comments
• Additional guidance
• Timeliness
• Summary
20
Additional Guidance
• As mentioned: AAA, SOA, CAS
• Google “Credibility Theory” and you’ll get
Statistical Credibility Theory
Donald F. Behan
Presented to the Southeastern Actuarial Conference
June 18, 2009
21
Behan points to…
Actuarial Standards Board, Actuarial Standard of Practice No. 25,
Credibility Procedures Applicable to Accident and Health, Group Term
Life, and Property/Casualty Coverages, October, 1996.
American Academy of Actuaries, Credibility Practice Note, July, 2008.
Bühlmann, Hans and Alois Gisler, A Course in Credibility Theory,
Springer-Verlag, New York, 2005.
Klugman, Stuart A., Bayesian Statistics in Actuarial Science with
Emphasis on Credibility, Kluwer Academic Publishers, Boston, 1991.
22
Timeliness of the revised ASOP
• PBR
• ORSA
• IFRS
• US GAAP
• Solvency II
• Embedded values
23
IFRS, for example
Fulfilment cash flows –
the expected value, or statistical mean
24
Summary
• “Credible” is a term of art
• ASOP 25 has no exemptions, but no compulsion
• Many situations require or lead to use of credibility procedures
• Become an expert or befriend an expert
• There is no avoiding use of judgment
25
A Practical Application of
Credibility Theory
Marianne Purushotham
LIMRA
8/26/14
Credibility Theory Basics
Definition
• Mathematical method for adjusting experience-based estimates
Credibility Frameworks
• Sampling Theory
• Bayesian
27
Developing Credibility-Weighted Assumptions
Select base estimate (“relevant experience”)
Develop own experience (“subject experience”)
Select a credibility method
Develop credibility weighted base assumption
Modify base assumption for actuarial judgment
28
Standard Credibility Formula
Credibility weighted assumption =
Z x (subject experience) + (1-Z) x (relevant experience)
where Z = credibility factor developed by the method
selected (Limited Fluctuation, Bayesian methods, etc.)
29
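For readers who want to see the arithmetic, here is a minimal Python sketch of the blending formula above. It is not from the presentation; the numeric inputs are illustrative (they roughly match the Company A policy-count figures shown later: subject 131.0%, relevant 120.0%, Buhlmann Z of 99.5%).

```python
# Minimal sketch (not from the presentation) of the standard blending formula:
#   credibility weighted assumption = Z * subject + (1 - Z) * relevant

def credibility_weighted(subject: float, relevant: float, z: float) -> float:
    """Blend subject and relevant experience using credibility factor Z in [0, 1]."""
    if not 0.0 <= z <= 1.0:
        raise ValueError("Z must be between 0 and 1")
    return z * subject + (1.0 - z) * relevant

# A/E ratios expressed as decimals (1.31 = 131%); values are illustrative.
print(round(credibility_weighted(subject=1.31, relevant=1.20, z=0.995), 3))  # 1.309
```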
Credibility Theory:
Application to UL Lapse Experience
Base estimate (“relevant experience”) = LIMRA/SOA Individual Life Insurance Lapse Study (2002-2004)
Develop own experience (“subject experience”) = Sample of Individual Company Experience for 8 companies (2004-2005)
Select a credibility method = Limited Fluctuation and Buhlmann Empirical Bayesian
Develop credibility weighted base assumption
Modify base assumption for actuarial judgment
30
Example: Lapse Experience
Industry A/E Lapse Rates by Product (“Relevant Experience”)
[Bar chart: A/E lapse ratios by policy count and by amount for UL, Term, Whole Life, and VUL; scale 0%-140%]
Relevant Experience = 2004‐2005 LIMRA/SOA Industry Lapse Experience (industry)
Expected Basis = 2002‐2004 LIMRA/SOA Study Industry Lapse Rates
31
Example: Universal Life Lapse Experience
By Company A/E Lapse Rates (“Subject” Experience)
[Bar chart: overall A/E lapse ratios by policy count and by amount for each of the 8 companies; scale 0%-250%]
Subject Experience = 2004‐2005 LIMRA/SOA Industry Lapse Experience (by individual company)
Expected Basis = 2002‐2004 LIMRA/SOA Study Industry Lapse Rates
32
Example: Universal Life Lapse Experience
Credibility Methods Selected
Limited Fluctuation Method
• Sampling theory method
• Assumptions required
– Confidence interval
– Standard level (100%, industry weighted average)
• Data required
– own experience (subject)
– standard assumption (relevant)
Buhlmann Empirical Bayesian Method
• Bayesian method
• Assumptions required
– Prior distribution of the form: Z x (true A/E) + W
• Data required
– own experience (subject)
– other companies’ experience
– standard assumption (relevant)
33
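As a rough illustration of the Limited Fluctuation side of this comparison, the sketch below uses the common "square-root rule": full credibility requires roughly (z/r)^2 claims at confidence level P and fluctuation range r, and partial credibility is the square root of the ratio of actual to required claims. The slide does not spell out the presenters' exact parameterization, so treat this as a generic sketch rather than their calculation.

```python
# Generic limited-fluctuation credibility factor (square-root rule); an assumption,
# since the slide does not show the exact formula used in the study.
from statistics import NormalDist

def limited_fluctuation_z(n_claims: float, p: float = 0.90, r: float = 0.05) -> float:
    """Z for n_claims observed claims, confidence level p, and fluctuation range r."""
    z_score = NormalDist().inv_cdf((1 + p) / 2)   # ~1.645 for p = 0.90
    n_full = (z_score / r) ** 2                   # claims needed for full credibility
    return min(1.0, (n_claims / n_full) ** 0.5)

print(round(limited_fluctuation_z(271), 2))    # 0.5  -- a quarter of the ~1,082-claim standard
print(round(limited_fluctuation_z(3007), 2))   # 1.0  -- capped at full credibility
```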
Example: Universal Life Lapse Experience
Credibility Weighted Base Lapse Assumptions
Policies (by count)

Co   Relevant Experience   Subject Experience   Limited Fluctuation Z   Buhlmann Z
A    120.0%                131.0%               100.0%                  99.5%
B    120.0%                96.0%                100.0%                  95.7%
C    120.0%                117.2%               100.0%                  98.5%
D    120.0%                92.7%                100.0%                  97.6%
E    120.0%                96.4%                100.0%                  97.4%
F    120.0%                106.8%               5.1%                    5.2%
G    120.0%                123.5%               100.0%                  99.7%
H    120.0%                87.8%                100.0%                  95.5%

Amount

Co   Relevant Experience   Subject Experience   Limited Fluctuation Z   Buhlmann Z
A    129.4%                135.8%               100.0%                  99.7%
B    129.4%                60.4%                57.6%                   91.9%
C    129.4%                142.3%               84.3%                   98.2%
D    129.4%                89.2%                96.7%                   97.9%
E    129.4%                98.3%                69.9%                   96.4%
F    129.4%                208.0%               2.2%                    5.1%
G    129.4%                142.1%               95.0%                   98.6%
H    129.4%                83.8%                57.7%                   94.0%
34
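The Buhlmann Z values in the table come from the LIMRA/SOA study itself; the sketch below shows only the textbook empirical-Bayes version of Buhlmann credibility (Z = n / (n + EPV/VHM), with the variance components estimated from the companies' own data). It is an illustration under that assumption, not a reconstruction of the study's model, and the input data are hypothetical.

```python
# Textbook Buhlmann empirical-Bayes credibility; an illustrative sketch, not the
# model used in the LIMRA/SOA study. Assumes each company has the same number of
# observation periods.
from statistics import mean, variance

def buhlmann_credibility(company_data):
    """Return (Z, {company: credibility-weighted estimate}) from per-period A/E ratios."""
    r = len(next(iter(company_data.values())))                        # periods per company
    company_means = {c: mean(xs) for c, xs in company_data.items()}
    grand_mean = mean(company_means.values())

    epv = mean(variance(xs) for xs in company_data.values())          # expected process variance
    vhm = max(variance(company_means.values()) - epv / r, 0.0)        # variance of hypothetical means

    z = 0.0 if vhm == 0 else r / (r + epv / vhm)
    return z, {c: z * m + (1 - z) * grand_mean for c, m in company_means.items()}

# Hypothetical per-period A/E ratios for three companies.
z, estimates = buhlmann_credibility({
    "A": [1.28, 1.34, 1.31],
    "B": [0.93, 0.99, 0.96],
    "C": [1.15, 1.20, 1.17],
})
print(round(z, 2), {c: round(v, 3) for c, v in estimates.items()})
```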
ASOP 25 Considerations for
UL Lapse Experience Example
• Data Selection (subject and relevant)
– More recent experience versus longer experience period (may be different for lapse vs mortality)
– Impact of subject experience being included in relevant experience
– Homogeneity of the data – segments which may not be representative
– Assumption that “true” lapse rates are a constant multiple of a standard table
• Credibility Procedure Selection
– Practicality to implement vs. added benefit of complexity
– Appropriateness of varying levels of Z produced
35
So what do we weight against?
A case study
Credibility and the Impact on Assumption Setting
Session 78
Rob Foster, FSA
VP Pricing
Context - Credibility in Fully Underwritten Life Pricing
Reinsurance pricing is an obvious application for credibility
• Lots of quote requests
• Clients can submit varying amounts of experience
• Reinsurers “obviously” have lots of mortality experience
• Need to use that information to set mortality assumptions
Pricing Strategy Implications
• Competitive bidding
• Credibility rating
– higher price for clients with high mortality
– lower price for clients with low mortality
So, use CAREFULLY!
37
Case Study – Impact of Credibility on Reinsurance Pricing
Deals priced 1999-2002
• We asked client for…
– Mortality experience on same/similar business
– Underwriting requirements, preferred criteria, exception rules, etc.
• Relevant experience
– UMAP – Underwriting assessment tool
– Score mapped to percentage of 1975-80 table as the expected level of mortality
– Embodies expert opinion of the mortality result from:
   Client underwriting rules
   Client expertise
   Distribution channel
   Underwriting manual used
– Results are presented on 1975-80 basis for consistency
38
Case Study – UMAP vs Client Reported Experience
Companies are different!
Even companies with similar underwriting* have different mortality experience
*As demonstrated by the consistent UMAP score
39
Case Study – UMAP vs Client Reported Experience
Companies are different!
Companies with significantly different underwriting can have similar mortality experience
40
Companies are Different
2008-09 Individual Life Experience Report
• Followed practice started with 2005-07 report of presenting company information by quintiles
• Companies ranked by overall mortality and assigned to quintile
• One standard deviation is ~0.3%, on 219,608 claims

Table 7 – Company Experience Grouped into Experience Rank Quintiles
Select Period, Issue Ages 18+, Observation Periods 2008-09
Expected Basis = 2008 VBT Primary Tables

Experience Rank Quintile    1       2       3       4       5        All
A/E Ratio                   78.0%   83.8%   89.6%   97.9%   122.6%   89.7%

Ratio of A/E’s, Quin5 / Quin1 = 157%
41
Credibility Theory Practices Report - 2009
Mortality Study Results
Company   NS Mortality A/E Ratio by Amount   Number of Deaths
A         106.0%                             1,430
B         118.5%                             1,038
C         63.5%                              668
D         89.2%                              228
E         61.4%                              13,409
F         71.6%                              1,988
G         36.8%                              3
H         81.2%                              9,978
I         82.8%                              3,609
J         97.9%                              1,349
Overall   77.0%                              33,700

• “Actuaries … commonly find that the highest company A/E ratio is about twice the lowest company A/E ratio”
• Wide range of results:
– B - 118%
– E - 61%
– Overall 77%
• In SOA mortality studies, individual company results are not presented to protect confidentiality
• This study only used a portion of each company’s experience

Mortality results for UL policies
Excerpt from 2001 VBT source data
42
Conclusion #1
Anecdotal Evidence So Far:
• Companies are different
• Industry tables are averages of widely-varying company experience
43
“The Funnel Effect” – David Wylde, The Messenger, Dec 2013-Jan 2014 Issue
• Similar underwriting guidelines can produce different mortality results, even after controlling for age, gender, class
• Studied 20 companies’ experience from different time periods using information from a large industry mortality study
• “Funnel Effect” – a company’s mortality result is partially determined by the population funneled to it
– Distribution channel
– Kind of marketing
– Regional concentrations
• Affects the ‘average’ mortality of a company – may be higher or lower than others
• Indications that the Funnel Effect does not wear off over time
– Companies with higher early duration mortality will tend to have higher later duration mortality
44
“The Funnel Effect” – Test #1, Ranking of A/E Ratios for Seven of Twenty
Companies
[Chart: ranking of A/E ratios by issue years / durations observed in study]
45
“The Funnel Effect” – Test #2, Plotting A/E Ratios for All 20 Companies
• Wylde: Similar underwriting guidelines are producing different mortality results, even after controlling for age, gender, class, etc.
• 17 of 20 companies are near or above the 45° line
• The differences persist…
46
Does It Matter Which Table You Use?
Question was examined in 2010 – Session 66 SOA Annual Meeting
• Methodology
– Used select period data from 2005-2007 SOA study
– Regress experience against 1975-80, 2001, 2008 VBTs
– Slope of the regression line:
   Up = table is too flat
   Flat = table is just right
   Down = table is too steep compared to data
– Also measured volatility around the regression
Illustration by David Wylde
47
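The exact regression specification is not spelled out on the slide. One plausible reading is that duration-specific A/E ratios are regressed against duration, so an upward slope means actual mortality rises faster than the table (table too flat) and a downward slope means the table is too steep. A small sketch under that assumption, with made-up A/E ratios:

```python
# Sketch of the slope test under the assumption that A/E ratios are regressed
# against policy duration; the ratios below are made up for illustration.
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    sxx = sum((x - x_bar) ** 2 for x in xs)
    return sxy / sxx

durations = [1, 2, 3, 4, 5]
ae_ratios = [0.92, 0.95, 1.01, 1.04, 1.08]        # hypothetical A/E by duration

b = ols_slope(durations, ae_ratios)               # ~0.041 here
if b > 0:
    print("A/E rises with duration -> table looks too flat")
elif b < 0:
    print("A/E falls with duration -> table looks too steep")
else:
    print("A/E roughly level -> table shape matches the data")
```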
Does It Matter Which Table You Use?
• Sharon Brody did extensive testing, and presented those results at the conference
• Conclusions of the study
– Male <70 – all tables fit reasonably well
– Male >70 – all tables have significant variability
– Female <70 – all tables fit reasonably well
– Female >70 – fit better than males, though not great
– Recommended 2008 VBT – aggregate fit, A/Es closer to 100%
• Bottom Line: It Doesn’t Matter (at least for the first 15 years)
48
Conclusion #2
Anecdotal Evidence So Far:
• Companies are different
• Differences don’t wear off; no movement towards a single “ultimate” level
• Industry tables are averages of widely-varying company experience
49
Case Study Part 2 – Mortality Results 2004-2011
• Compare actual mortality results against five different sets of credibility-weighted expected
– Reference is UMAP-adjusted 1975-80 mortality table
– Experience is the client-reported experience
– Simple summation of the difference in A/E ratios, claims

Mort Basis    Full Cred    Sum of A-E, Percentages    Sum of A-E, $ millions
Experience    0            17%                        21
P=90, r=10    271          27%                        27
P=90, r=5     1082         53%                        47
P=90, r=3     3007         68%                        58
Reference     NA           90%                        75
50
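The "Full Cred" column appears to be the number of claims required for full credibility under the limited-fluctuation rule at confidence P = 90% and range r; the usual formula n_full = (1.645 / r)^2 reproduces all three values. A quick check (the formula choice is an assumption, though the numbers match exactly):

```python
# Reproducing the "Full Cred" claim counts with the standard limited-fluctuation
# full-credibility formula n_full = (z / r)^2, z = 1.645 for a two-sided 90% interval.
for r in (0.10, 0.05, 0.03):
    print(f"r = {r:.0%}: {round((1.645 / r) ** 2):,} claims")   # 271, 1,082, 3,007
```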
Case Study Part 2 – Mortality Results 2004-2011
51
Conclusion #3
Anecdotal Evidence So Far:
• Companies are different
• Differences don’t wear off; no movement towards a single “ultimate” level
• Industry tables are averages of widely-varying company experience
• Historical experience performed better than other Limited Fluctuation methods in predicting future claim levels
Tentative Conclusion: Credibility weighting may not be
appropriate in this context
52
ASOP 25
• When do we need to use credibility? (Black text is from the ASOP)
…
Be careful of statements like “We assigned full credibility to the
client experience.”
53
Cautions
• Not a statistically rigorous development
– Indicates an area for future study
• Did not look at Bayesian credibility
– Extremely data intensive
– There is an example in the Credibility Theory Practices Report
– Given how poorly the expert opinion fared in this case study, Bayesian credibility probably would not have helped
54
Questions?
55