Presentation

Current Practices are
Threatening Past Performance
as an Effective Tool
Breakout Session #: D01
Gary Poleskey, Colonel, USAF (Ret)
Vice President, Dayton Aerospace, Inc.
CPCM, Fellow
Date: Tuesday, July 26
Time: 11:15am-12:30pm
1
About Dayton Aerospace
• Small veteran-owned business established in 1984
• Provide management and technical consulting services
  – Specialize in hard-to-do tasks requiring experienced acquisition and logistics people
• Highly Experienced – average over 30 years
  – AFMC Center Commanders (previously product, logistics, and test)
  – PEOs, System Program Directors, Product Support Managers, and key program managers
  – Lead functional experts – program, center, and command level
• Balanced Perspective
  – Broad experience with both Industry and Government organizations
Experience that matters… solutions that count!
We provide government and industry teams with reach back to former senior level personnel who have “been there, done that.”
2
Why are we having this discussion?
Avoid Past Mistakes
“Those who fail to learn from history are
doomed to repeat it”
– Winston Churchill
3
Why are we having this discussion?
Avoid Past Mistakes
“Those who cannot remember the past are
condemned to repeat it.”
– George Santayana (1905)
4
Past Performance Tools Under Siege
Overview
• Poleskey Disclaimer
• Why & How Was Past Performance
Policy Changed in 1988?
• Baseline Past Performance Evaluation
Process
• Troubling Past Performance Policy
Changes
• Concerns & Consequences
• Alternative Recommendation
• What Did You Learn Today?
5
Poleskey Disclaimer
I’m not pining away for the “Good Old Days”!
• My Background
  – I was a member of Air Force Tiger Team in 1987
    • Examine the treatment of past performance in source selection and recommend changes
  – Helped write the revised USAF Past Performance policy
  – PRAG Chair on first major Source Selection using the “new” past performance assessment process
  – Been intimately involved over the nearly 30 years since the policy’s creation
• BUT……….
  – Strong believer in continuous improvement
  – Strong believer in flexible policy
  – Dedicated to telling you four things you did not know before this session started!!
6
Past Performance Tools Under Siege
Overview
• Poleskey Disclaimer
• Why & How Was Past Performance
Policy Changed in 1988?
• Baseline Past Performance Evaluation
Process
• Troubling Past Performance Policy
Changes
• Concerns & Consequences
• Alternative Recommendation
• What Did You Learn Today?
7
Why & How Past Performance Policy was
Changed – 1988 Tasking & Findings
• Air Force Systems Command Commander’s frustration
  – “I know things about these Companies and Programs that are never presented to either the SSAC or to the SSA. Why is that???”
• Study Team’s findings:
  – Process was very “vertically focused” – identify a past contract that was exactly like the planned new one
  – Relevancy was very product focused
    • Aircrew Training System = Aircrew Training System
    • Army Training System ≠ Aircrew Training System
    • Development contracts ≠ Production contracts
  – Past Performance Evaluators tended to be very junior members of the Government team
  – Very difficult to obtain access to knowledgeable people
  – Thus, only big positives or big negatives were ever raised
8
Why & How Past Performance Policy was
Changed – 1988 Major Study Changes – Part 1
• Goal: Raise stature and perspective of past performance evaluation
• Established new risk factor
  – Technical Rating
  – Proposal Risk
  – Added: Past Performance Risk
• Past Performance Risk assessed against source selection criteria
  – Evaluation became a horizontal skills assessment vs. vertical product assessment
  – Gather higher fidelity information on how well offerors had demonstrated the skills and expertise to be successful in the future
• Created Performance Risk Analysis Group (PRAG) to assign performance risk rating
  – Staffed with more experienced people to evaluate information
  – Select evaluators with knowledge and experience with the technology and product or service involved in the source selection
  – Provide integrated and consistent picture to the decision makers
9
Why & How Past Performance Policy was
Changed – 1988 Major Study Changes – Part 2
• Goal: Address Data Source Problem
• Team was not anxious to introduce a new Past Performance database
  – Many had failed under their own weight
  – But: the need to collect contemporaneous information was great
• Established new Contract Score Card System – CPAR
  – Contractor Performance Assessment Report
  – Nine principal scoring areas resulted from brainstorming important program “issue areas”
• CPAR 3-signature structure designed to ensure accuracy
  – Program Manager
  – Contractor
  – PEO or PM’s Boss
10
Why & How Past Performance Policy was
Changed – 1988 Major Study Changes
– Critical Point
• AFSC Commander’s “Ah Ha” moment
with PRAG and CPAR
– Marginal and poor current performance
would place winning new business at risk
• This Linkage is also the reason these
changes have been deployed throughout
the Federal Government for over 25 years
& endured – until now?
11
Past Performance Tools Under Siege
Overview
• Poleskey Disclaimer
• Why & How Was Past Performance
Policy Changed in 1988?
• Baseline Past Performance Evaluation
Process
• Troubling Past Performance Policy
Changes
• Concerns & Consequences
• Alternative Recommendation
• What Did You Learn Today?
12
Past Performance Evaluation Process
Sequence of Events
Determine Relevancy → Categorize & Evaluate Data → Make Preliminary Assessment → Identify Concerns to Offerors → Offerors Provide Feedback → Assign Final Past Performance Confidence
13
Baseline Past Performance Evaluation Process
Section M Examples
• (USAF) Aircraft Avionics Modification RFP ($50 Mil)
  – Recency: Five years (Active or completed within period)
  – Relevancy: Past Performance evaluation will be conducted using Section M sub-factors
    • Systems Engineering
    • FAA Airworthiness Assessment
    • Military (MIL-HDBK-516) Airworthiness Assessment
    • Aircraft Integration
    • Training Device Integration
• (Army) Excalibur 1b
  – Recency: Three years (Active or completed within period)
  – Relevancy: The Government will consider the relevancy of the data as it relates to the present solicitation (Clear reference to Section M).
    • Compliance with Performance Specifications
    • Producibility (Including transition to production)
    • Management Oversight
    • Systems Engineering
14
Past Performance Evaluation Process
Contract Relevancy Matrix
[Matrix: rows are the offeror’s prime and teammate contracts (Contract 1 … Contract N); columns are Sub-Factors 1–6. An “X” marks each sub-factor to which a given contract’s effort is relevant. In the example, the column totals across the six sub-factors are 3, 6, 4, 3, 4, and 5 relevant contracts. An illustrative sketch of this tabulation follows the slide.]
15
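A minimal sketch (in Python, not part of the original briefing) of how the relevancy matrix above can be tabulated: each prime or teammate contract maps to the sub-factors the evaluators judged it relevant to, and the per-sub-factor totals fall out of a count. The contract names and relevancy marks below are hypothetical; only the structure mirrors the chart.

# Hypothetical tabulation of a contract relevancy matrix (contracts vs. sub-factors).
# Contract names and relevancy marks are illustrative, not taken from the briefing.

SUB_FACTORS = [1, 2, 3, 4, 5, 6]

# Each entry maps a prime or teammate contract to the set of sub-factors
# the evaluators judged it relevant to (an "X" in the chart).
RELEVANCY_MATRIX = {
    "Prime Contract 1":    {1, 2, 4},
    "Prime Contract 2":    {2, 3, 6},
    "Teammate Contract 3": {2, 5, 6},
    "Teammate Contract N": {1, 2, 3, 4, 5, 6},
}

def sub_factor_totals(matrix, sub_factors=SUB_FACTORS):
    """Count how many past contracts were marked relevant to each sub-factor."""
    return {sf: sum(sf in marked for marked in matrix.values()) for sf in sub_factors}

if __name__ == "__main__":
    for sf, total in sub_factor_totals(RELEVANCY_MATRIX).items():
        print(f"Sub-Factor {sf}: {total} relevant contract(s)")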
Past Performance Evaluation Process
Inside the Evaluator’s Mind
Illustrative example – Software Development sub-factor
• Quality inputs (five total):
  – CPAR Blk #14A (2): Contract 4 = Yellow, Contract 8 = Green
  – Questionnaires: Contract 11 = Green, Contract 19 = Yellow, Contract 25 = Blue
• Relevancy judgment (1 = Low → 5 = High), based on complexity, lines of code, program stage, and re-use:
  – Contract 4 = 5, Contract 8 = 3, Contract 11 = 2, Contract 19 = 4, Contract 25 = 1
• Resulting Past Performance Confidence Assessment: Limited Confidence
(An illustrative sketch of one way to combine these inputs follows the slide.)
16
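The confidence call itself is evaluator judgment, but the mechanics of weighting quality inputs by relevancy can be sketched. The heuristic below is illustrative only; the weighting scheme and thresholds are assumptions, not policy. Its inputs mirror the five contracts in the example above, and under this particular heuristic they also come out at Limited Confidence.

# Illustrative heuristic only: weight each quality input by its contract's
# relevancy score (1 = low, 5 = high) and bucket the weighted average into a
# confidence rating. Actual PRAG ratings are judgmental, not formula-driven.

QUALITY_SCORE = {"Blue": 5, "Purple": 4, "Green": 3, "Yellow": 2, "Red": 1}

# (quality color, relevancy 1-5) for each past contract in the
# Software Development example: Contracts 4, 8, 11, 19, 25.
INPUTS = [("Yellow", 5), ("Green", 3), ("Green", 2), ("Yellow", 4), ("Blue", 1)]

def confidence(inputs):
    """Relevancy-weighted average of quality, mapped to a confidence rating."""
    weighted = sum(QUALITY_SCORE[color] * relevancy for color, relevancy in inputs)
    total_weight = sum(relevancy for _, relevancy in inputs)
    average = weighted / total_weight
    # A Neutral/Unknown rating applies when there is no relevant record to
    # assess; that case is not modeled here.
    if average >= 4.0:
        return "Substantial Confidence"
    if average >= 3.0:
        return "Satisfactory Confidence"
    if average >= 2.0:
        return "Limited Confidence"
    return "No Confidence"

print(confidence(INPUTS))  # -> Limited Confidence (average ≈ 2.5) under this heuristic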
Past Performance Evaluation Process
Scoring Roll-up
[Same relevancy matrix as the previous chart, now with a confidence rating assigned for each sub-factor: Substantial – Substantial – Substantial – Neutral – Satisfactory – Satisfactory across Sub-Factors 1–6, rolling up to an overall rating of Substantial Confidence. A short sketch of the roll-up follows the slide.]
17
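A short sketch of the roll-up step, assuming the per-sub-factor confidence ratings shown above. Tallying the sub-factor ratings is easy to automate; the overall rating, however, remains the evaluation team's judgment rather than a computed value.

from collections import Counter

# Per-sub-factor confidence ratings from the roll-up example above.
SUB_FACTOR_RATINGS = {
    1: "Substantial", 2: "Substantial", 3: "Substantial",
    4: "Neutral", 5: "Satisfactory", 6: "Satisfactory",
}

def tally(ratings):
    """Summarize how often each confidence rating was assigned across sub-factors."""
    return Counter(ratings.values()).most_common()

print(tally(SUB_FACTOR_RATINGS))
# [('Substantial', 3), ('Satisfactory', 2), ('Neutral', 1)]
# The team, not a formula, assigns the overall rating (here: Substantial Confidence).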
Past Performance Tools Under Siege
Overview
• Poleskey Disclaimer
• Why & How Was Past Performance Policy
Changed in 1988?
• Baseline Past Performance Evaluation
Process
• Troubling Past Performance Policy Changes
1. Relevancy Assessment Criteria
2. Relevancy Assessment Scoring
3. Confidence Ratings
• Concerns & Consequences
• Alternative Recommendation
• What Did You Learn Today?
18
Troubling Policy Changes
1. Relevancy Assessment Criteria
USAF 2008 and Prior
  – Description: “The Past Performance Evaluation will be accomplished by …focusing on and targeting performance which is relevant to Mission Capability sub-factors and the Cost factor.”
  – Changes: Instructions substantially unchanged since the 1988 Study was implemented
DoD 2011
  – Description:
    • “The criteria to establish what is …relevant shall be unique to each source selection and stated in the solicitation”
    • “…consideration should be given to those aspects of an offeror’s contract history that would give the greatest ability to measure whether the offeror will satisfy the current procurement.”
    • “Common aspects of relevancy include similarity of service/support, complexity, dollar value, contract type, and degree of subcontracting (or) teaming.”
  – Changes:
    • No mention made of source selection criteria Factors or Sub-Factors
    • Only two of the examples are actually common “aspects” that could relate one procurement to another procurement
    • Others describe the contract
DoD 2016
  – Description: Essentially the same language as DoD 2011
  – Changes: No Change
19
Troubling Policy Changes
1. Relevancy Assessment Criteria – My Perspective
• USAF 2008 & Prior
  – Focus on Mission Capability Factors and Sub-Factors was done exactly because they ARE the criteria that provide “the greatest ability to measure” future success
• DoD 2011 & 2016
  – Revised language provides little guidance to Source Selection teams on how to select criteria in the absence of a reference to Mission Capability Factors and Sub-Factors
  – Only two of the example “Common aspects of relevancy” are actually aspects, while the other “aspects” drive teams to think vertically – How does the past contract relate to the new one?
  – As in 1987, vertical thinking drives product-to-product comparison rather than skills and capability comparisons
  – Even though the use of Sub-Factors is still acceptable, there is no policy language to encourage teams to think that way
20
Troubling Policy Changes
1. Relevancy Assessment Criteria – Aircraft Avionics RFP
Confusing Relevancy Assessment Criteria Language – Example
• Technical Sub-Factors:
  – Systems Engineering
  – FAA Airworthiness Assessment
  – Military (MIL-HDBK-516) Airworthiness Assessment
  – Aircraft Integration
  – Training Device Integration
• Relevancy Assessment
  – How closely do past products or services relate to the Sub-Factors
  – Government will only consider specific efforts (present and past contracts) that involve Avionics and Training Device modifications
21
Troubling Policy Changes
2. Relevancy Assessment Scoring
USAF 2005
  – Rating scale: Very Relevant – Relevant – Semi Relevant – Not Relevant (USAF Past Performance Guide)
  – Changes: Definitions substantially unchanged since 1988
USAF 2008
  – Rating scale: Very Relevant – Relevant – Somewhat Relevant – Not Relevant (USAF Past Performance Guide)
  – Changes: Definition wording streamlined & “SR” redefined
DoD 2011
  – Rating scale:
    • Alternative 1: Very Relevant – Relevant – Somewhat Relevant – Not Relevant
    • Alternative 2: Relevant – Not Relevant
    • LPTA: Relevant – Not Relevant
  – Changes:
    • Teams given choice between two scoring schemes
    • “NR” includes “little”
    • LPTA relevancy scoring actually not specified
DoD 2016
  – Rating scale:
    • Alternative 1: Very Relevant – Relevant – Somewhat Relevant – Not Relevant
    • Alternative 2: Acceptable – Unacceptable
    • LPTA: Acceptable – Unacceptable
  – Changes:
    • Alt 1 – no change from 2011
    • Alt 2 – Uses confidence term for relevancy scoring
    • LPTA relevancy scoring still not specified
22
Troubling Policy Changes
2. Relevancy Assessment Scoring – My Perspective
• DoD 2011
  – Trade-off Source Selections:
    • Requires the buying team to decide if past performance will “require less discrimination” in order to choose between Alternatives 1 and 2
    • Requires a judgment that cannot normally be made during the solicitation development phase
    • However, the buying team will know that Alternative 2 is easier & faster
      – Teams will opt for the path of least resistance
  – LPTA:
    • Assessing past contracts as either Relevant or Not Relevant is reasonable (Does not apply to Performance-Price Tradeoff)
• DoD 2016
  – Same issues as the DoD 2011 language
  – Requires the team to figure out what acceptable or unacceptable relevancy might be for Alternative 2 and LPTA
23
Troubling Policy Changes
3. Confidence Ratings
USAF 2005
  – Rating scale: High Confidence – Significant Confidence – Satisfactory Confidence – Unknown Confidence – Little Confidence – No Confidence (USAF Past Performance Guide)
  – Changes: Definitions substantially unchanged since 1999, when the Confidence Rating was introduced
USAF 2008
  – Rating scale: Substantial Confidence – Satisfactory Confidence – Limited Confidence – No Confidence – Unknown Confidence (USAF Past Performance Guide)
  – Changes: Eliminates distinction between outstanding and good performance
DoD 2011
  – Rating scale: Substantial Confidence – Satisfactory Confidence – Limited Confidence – No Confidence – Unknown Confidence; LPTA: Acceptable or Unacceptable
  – Changes: No significant change from USAF definitions; LPTA is a departure from Confidence Scoring & equates Unknown Confidence with Acceptable
DoD 2016
  – Rating scale:
    • Alternative 1: Substantial Confidence – Satisfactory Confidence – Neutral Confidence – Limited Confidence – No Confidence
    • Alternative 2: Satisfactory Confidence – Neutral Confidence – Limited Confidence – No Confidence
    • LPTA: Acceptable or Unacceptable
  – Changes:
    • Alt 1 – no change to definitions
    • Unknown Confidence renamed and moved (see 2005)
    • Alt 2 – Eliminates distinction between acceptable and good performance
    • LPTA – no change from 2011
24
Troubling Policy Changes
3. Confidence Ratings – My Perspective
• USAF 2008
  – Losing the ability to distinguish between “Blue” and “Purple” performance hurt industry and evaluators
• DoD 2011
  – LPTA: Equating “Unknown” with “Acceptable” will bother some SSAs
• DoD 2016
  – Trade-off Source Selections
    • Requires buying team to decide if past performance will “require less discrimination” in order to choose between Alternatives 1 and 2
    • That judgment cannot normally be made during RFP development
    • Alternative 2 loses the ability to distinguish between “Blue” and “Green” performance – really hurts industry and evaluators & looks like LPTA scoring
  – Using “Past Performance” scoring for “Experience” is misguided
    • Just “doing it” does not equal Confidence
    • Much better fit as part of a Technical or Proposal Risk rating
25
Past Performance Tools Under Siege
Overview
• Poleskey Disclaimer
• Why & How Was Past Performance
Policy Changed in 1988?
• Baseline Past Performance Evaluation
Process
• Troubling Past Performance Policy
Changes
• Concerns & Consequences
• Alternative Recommendation
• What Did You Learn Today?
26
Concerns & Consequences
Advances of 1988 Study Are In Danger of Reversal
1988 Past Performance Study Policy Changes → Impact of DoD 2016 Policy Changes?
1. Focus on skills and expertise → Focus will be on product characteristics, e.g., product similarity, complexity, dollar value
2. Evaluation became a horizontal skills assessment vs. vertical product assessment → Product-to-product comparison will drive a vertical orientation
3. Use more experienced past performance evaluators → There will be little to no need for senior, experienced evaluators
4. Establish visible link between current performance and future business → A “pass vs. fail” past performance source selection environment will greatly reduce the seriousness of CPAR risk for industry
5. Add Past Performance Sub-Factor → No impact
27
Concerns & Consequences
Future Quote from Source Selection Authority – 2017
“I know things about these companies and
programs that are never presented to either
the SSAC or to the SSA. Why is that???”
28
Past Performance Tools Under Siege
Overview
• Poleskey Disclaimer
• Why & How Was Past Performance
Policy Changed in 1988?
• Baseline Past Performance Evaluation
Process
• Troubling Past Performance Policy
Changes
• Concerns & Consequences
• Alternative Recommendation
• What Did You Learn Today?
29
Alternative Recommendation
Streamline Vice Reversing The Past
• Apply DoD 2016 Policy for all Evaluation Factors and Sub-Factors (emphasis added) to Past Performance
– “Factors and sub-factors represent those specific
characteristics that are tied to significant RFP
requirements and objectives having an impact on the
source selection decision and which are expected to be
discriminators or are required by statute/regulation.
They are the uniform baseline against which each
offeror’s proposal is evaluated, allowing the
Government to make a best value determination.” (2.3.1)
– “When developing source selection criteria, consider
hybrid approaches, applying subjective and objective
criteria as appropriate to evaluate elements of the
proposal.”(1.3)
– “Source selections can be simplified when only those
requirements that are critical to the user are
subjectively evaluated by the SST and the rest of the
requirements are evaluated on an
acceptable/unacceptable basis”(1.3.1.2)
30
Alternative Recommendation
Hybrid Structure For Past Performance Confidence Ratings
Example #1 – Aircraft Avionics Modification ($50 Mil)
• Technical Sub-Factors:
  1. Systems Engineering
  2. FAA Airworthiness Assessment
  3. Military (MIL-HDBK-516) Airworthiness Assessment
  4. Aircraft Integration
  5. Training Device Integration
• Hybrid Approach (an illustrative sketch follows this slide)
  – Relevancy Criteria: Sub-Factor level
  – Relevancy Scoring:
    • S/F 1, S/F 2, S/F 3 = (Alt 2) Acceptable – Unacceptable
    • S/F 4 & S/F 5 = (Alt 1) Four-level scoring (VR – R – SR – NR)
  – Confidence Scoring:
    • Combine S/F-1, S/F-2, & S/F-3 = (Alt 2) (Sat – Neutral – Limited – No)
    • S/F-4 = (Alt 1) (Sub – Sat – Neutral – Limited – No)
    • S/F-5 = (Alt 1) (Sub – Sat – Neutral – Limited – No)
  – S/F scoring requires a waiver – In my view SSAs should be given this option
31
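One way to capture the hybrid structure of Example #1 is as a small configuration table mapping each sub-factor group to the relevancy and confidence scales chosen for it. The sketch below is illustrative only: the scale labels follow the slide, while the data structure and names are assumptions. The same pattern covers Examples #2 through #4; only the grouping of sub-factors changes.

# Illustrative encoding of the Example #1 hybrid scoring plan.
# Scale labels follow the briefing; the structure and names are assumptions.

ALT1_RELEVANCY  = ["Very Relevant", "Relevant", "Somewhat Relevant", "Not Relevant"]
ALT2_RELEVANCY  = ["Acceptable", "Unacceptable"]
ALT1_CONFIDENCE = ["Substantial", "Satisfactory", "Neutral", "Limited", "No"]
ALT2_CONFIDENCE = ["Satisfactory", "Neutral", "Limited", "No"]

# Sub-factor group -> (relevancy scale, confidence scale)
HYBRID_PLAN = {
    ("S/F 1", "S/F 2", "S/F 3"): (ALT2_RELEVANCY, ALT2_CONFIDENCE),  # combined group
    ("S/F 4",):                  (ALT1_RELEVANCY, ALT1_CONFIDENCE),
    ("S/F 5",):                  (ALT1_RELEVANCY, ALT1_CONFIDENCE),
}

for group, (relevancy_scale, confidence_scale) in HYBRID_PLAN.items():
    print(" & ".join(group),
          "| relevancy:", " - ".join(relevancy_scale),
          "| confidence:", " - ".join(confidence_scale))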
Alternative Recommendation
Hybrid Structure For Past Performance Confidence Ratings
Example #2 – Excalibur 1b ($500+ Mil)
• Technical Sub-Factors:
  1. Compliance with Performance Specifications
  2. Producibility (Including transition to production)
  3. Management Oversight
  4. Systems Engineering
• Hybrid Approach
  – Relevancy Criteria: Sub-Factor level
  – Relevancy Scoring:
    • S/F 3 & S/F 4 = (Alt 2) Acceptable – Unacceptable
    • S/F 1 & S/F 2 = (Alt 1) Four-level scoring (VR – R – SR – NR)
  – Confidence Scoring:
    • Combine S/F 3 & S/F 4 = (Alt 2) (Sat – Neutral – Limited – No)
    • S/F-1 = (Alt 1) (Sub – Sat – Neutral – Limited – No)
    • S/F-2 = (Alt 1) (Sub – Sat – Neutral – Limited – No)
  – S/F scoring requires a waiver – In my view SSAs should be given this option
32
What Did You Learn Today?
1. Winston Churchill was not the first person to warn
us of the dire consequences of failing to learn from
history
  – Warning applies to Past Performance policy today
2. Why CPARs and Past Performance Evaluation Teams exist
3. Implementation of the current policy poses a threat to effective Past Performance scoring in source selection as well as a risk to the utility of the CPAR
4. There is an alternative to evaluating Past Performance in source selection drawn directly from what the current policy recommends for all other Factors and Sub-Factors
  – Requires less manpower than traditional scoring
  – Is much, much more effective than “Alternative 2” scoring
  – Requires a slight change to the policy to allow Past Performance scoring at the Sub-Factor level
  – Strengthens link between past performance track record and the ability to win new business
33
Contact Information
Gary Poleskey, Colonel, USAF (Ret)
Vice President
Dayton Aerospace, Inc.
[email protected]
937.426.4300
4141 Colonel Glenn Hwy, Suite 252
Dayton, Ohio 45431
34
Back Up Slides
35
Past Performance Evaluation Process
Total Team Evaluation (Hypothetical Example)
Sample Source Selection Sub-Factors: 1 = Ops Utility, 2 = Software Development, 3 = Training Effect., 4 = Integ. Log. Sup., plus the Cost Factor

Team Member                                  1     2     3     4    Cost
Prime: ABLE Div. A (Airframe)                X     X     X     X     X
Sub: ABLE Div. B (Offensive Avionics)        X     X    ---   ---    X
Sub-1 (Aircrew Training System)             ---    X     X     X     X
Sub-2 (CLS)                                 ---   ---   ---    X     X
Unnamed Subs (Radar, Other Avionics)        N/A   N/A   N/A   N/A   N/A
36
Alternative Recommendation
Hybrid Structure For Past Performance Confidence Ratings
Example #3 – Small Diameter Bomb II ($500+ Mil)
• Technical Sub-Factors:
  – S/F-1: Adherence to cost & schedule
  – S/F-2: Capability to deliver system required by RFP
  – S/F-3: Systems Engineering
  – S/F-4: Management Effectiveness
• Hybrid Approach
  – Relevancy Criteria: Sub-Factor level
  – Relevancy Scoring: (Alt 1) Four-level scoring (VR – R – SR – NR)
  – Confidence Scoring:
    • Combine S/F-1, S/F-3, & S/F-4 = (Alt 2) (Sat – Neutral – Limited – No)
    • S/F-2 = (Alt 1) (Sub – Sat – Neutral – Limited – No)
  – S/F scoring requires a waiver – In my view SSAs should be given this option
37
Alternative Recommendation
Hybrid Structure For Past Performance Confidence Ratings
Example #4 – Missile Guidance System ($30 Mil)
• Technical Sub-Factors:
  1. Guidance System Design
  2. Software Design And Re-use
  3. Subcontract Management
  4. Management Effectiveness
  5. Systems Engineering
• Hybrid Approach
  – Relevancy Criteria: Sub-Factor level
  – Relevancy Scoring:
    • S/F 1, S/F 4, S/F 5 = (Alt 2) Acceptable – Unacceptable
    • S/F 2 & S/F 3 = (Alt 1) Four-level scoring (VR – R – SR – NR)
  – Confidence Scoring:
    • Combine S/F-1, S/F-4, & S/F-5 = (Alt 2) (Sat – Neutral – Limited – No)
    • S/F-2 = (Alt 1) (Sub – Sat – Neutral – Limited – No)
    • S/F-3 = (Alt 1) (Sub – Sat – Neutral – Limited – No)
  – S/F scoring requires a waiver – In my view SSAs should be given this option
38