MANAGEMENT TOWARDS SUCCESS—DEFENSE BUSINESS SYSTEM
ACQUISITION PROBABILITY OF SUCCESS MODEL
by
Sean Tzeng
A Dissertation
Submitted to the
Graduate Faculty
of
George Mason University
in Partial Fulfillment of
The Requirements for the Degree
of
Doctor of Philosophy
Systems Engineering and Operations Research
Committee:
Dr. Kuo-Chu Chang, Dissertation Director
Dr. Kathryn Blackmond Laskey, Committee Member
Dr. Larrie Ferreiro, Committee Member
Dr. Shih-Chun Chang, Committee Member
Dr. Ariela Sofer, Department Chair
Dr. Kenneth S. Ball, Dean, Volgenau School of Engineering
Date:
Spring Semester 2015
George Mason University
Fairfax, VA
Management Towards Success—Defense Business System Acquisition Probability
of Success Model
A Dissertation submitted in partial fulfillment of the requirements for the degree of
Doctor of Philosophy at George Mason University
by
Sean Tzeng
Master of Science
George Washington University, 2007
Bachelor of Science
Virginia Polytechnic Institute and State University, 2005
Director: Kuo-Chu Chang, Professor
Department of Systems Engineering and Operations Research
Spring Semester 2015
George Mason University
Fairfax, VA
This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
DEDICATION
This dissertation is dedicated to my family: my wonderful wife Ju, and kids, Aaron and
Kailee.
ACKNOWLEDGEMENTS
I would like to acknowledge everyone I have worked with during my career in the federal
government, as well as all of my fellow classmates and professors throughout my
academic career. Each and every one of you has influenced my experience and knowledge in some fashion, enabling me to complete this journey in this unique way.
Special thanks to my dissertation advisor, Dr. KC Chang, and the dissertation committee
members Dr. Kathryn Blackmond Laskey, Dr. Larrie Ferreiro, Dr. Shih-Chun Chang, and
formerly, the late Dr. Andrew Sage. Lastly, I would like to thank my wife and kids, who
had to make sacrifices to make this research possible.
TABLE OF CONTENTS
Page
List of Tables .................................................................................................................... vii
List of Figures .................................................................................................................. viii
List of Equations ................................................................................................................ xi
List of Abbreviations ........................................................................................................ xii
Abstract ............................................................................................................................ xiv
1 Introduction .................................................................................................................... 1
  1.1 Problem Statement .................................................................................................... 3
  1.2 Motivation and Background ...................................................................................... 4
  1.3 Research Hypothesis and Goals ................................................................................. 5
  1.4 Dissertation Contributions ......................................................................................... 6
  1.5 Research Design ........................................................................................................ 6
2 Literature Review and Discussion ................................................................................... 9
  2.1 Defense Business System Acquisition ....................................................................... 9
  2.2 Systems Engineering ................................................................................................ 12
  2.3 Project and Program Management ........................................................................... 24
  2.4 Complex Adaptive Systems ..................................................................................... 36
  2.5 Probability of Program Success ............................................................................... 42
  2.6 Evidential Reasoning and Probability Systems Overview ....................................... 49
  2.7 Bayesian Network and Knowledge Representation ................................................. 55
3 Bayesian Network Prototype ......................................................................................... 64
  3.1 Prototype Model Framework ................................................................................... 64
  3.2 Probability Specification .......................................................................................... 66
  3.3 Description of Nodes and States .............................................................................. 69
  3.4 Prototype Model Evaluation .................................................................................... 78
  3.5 Bayesian Network Prototype Model Conclusion ..................................................... 84
4 Evidence Analysis and Organization ............................................................................. 86
  4.1 Evidence Analysis .................................................................................................... 86
  4.2 Evidence Organization ............................................................................................. 91
  4.3 Evidential Reasoning ................................................................................................ 96
5 Model Structure ............................................................................................................ 102
  5.1 Model Foundations ................................................................................................. 102
  5.2 Model Nodes .......................................................................................................... 105
  5.3 Model Arcs ............................................................................................................. 121
  5.4 Complete Model ..................................................................................................... 128
6 Network Structure and Probability Specification ........................................................ 132
  6.1 Knowledge Elicitation ............................................................................................ 132
  6.2 Expert Data Conversion ......................................................................................... 136
  6.3 Evidence Nodes ...................................................................................................... 138
  6.4 Knowledge Checkpoint Nodes ............................................................................... 141
  6.5 Knowledge Area Nodes ......................................................................................... 144
7 Model Analysis ............................................................................................................ 169
  7.1 Assessment Guideline ............................................................................................ 170
  7.2 Case 1—Sensitivity Test 1.1 at Milestone A .......................................................... 173
  7.3 Case 2—Sensitivity Test 2.1 at Milestone A .......................................................... 184
  7.4 Cases 3–10 .............................................................................................................. 192
  7.5 Case 11—RDT&E Program to Milestone B ........................................................... 208
  7.6 Case 12—Program B Major Release Program to Milestone C ............................... 218
  7.7 Model Analysis Summary ...................................................................................... 229
8 Conclusion .................................................................................................................... 232
9 Future Research ............................................................................................................ 236
10 Appendix A—DAPS Evidence Taxonomy ................................................................ 238
11 Appendix B—DAPS Network Structure and Probability Specification Data Collection ......................................................................................................................... 245
  11.1 Subject Matter Expert Interview Data Collection Sheet ....................................... 245
  11.2 Data and Statistics ................................................................................................ 259
References ....................................................................................................................... 264
LIST OF TABLES
Table
Page
Table 1, SME Summary Statistics and Description ........................................................... 135
Table 2, Evidence Node Data Summary Statistics ......................................................... 139
Table 3, Knowledge Checkpoint Ranking Summary Statistics ...................................... 142
Table 4, KC Arc Value Calculations .............................................................................. 143
Table 5, Time KA Ranking Summary ............................................................................ 147
Table 6, Cost KA Ranking Summary ............................................................................. 148
Table 7, Quality KA Ranking Summary......................................................................... 150
Table 8, Scope KA Ranking Summary ........................................................................... 152
Table 9, Procurement KA Ranking Summary ................................................................ 153
Table 10, SE KA Ranking Summary .............................................................................. 155
Table 11, GM KA Ranking Summary ............................................................................ 157
Table 12, Time KA Varc Table ...................................................................................... 159
Table 13, Cost KA Varc Table ....................................................................................... 159
Table 14, Quality KA Varc Table ................................................................................... 160
Table 15, Scope KA Varc Table ..................................................................................... 160
Table 16, Procurement KA Varc Table .......................................................................... 161
Table 17, SE KA Varc Table .......................................................................................... 161
Table 18, GM KA Varc Table ........................................................................................ 162
Table 19, Success Factor Table ...................................................................................... 172
Table 20, Case 1 DAPS Model Output ........................................................................... 176
Table 21, Case 2 DAPS Model Output ........................................................................... 188
Table 22, Cases 1-10 Summary ...................................................................................... 207
Table 23, Case 11 Results Summary .............................................................................. 216
Table 24, Case 11 DAPS Output Profile ........................................................................ 217
Table 25, Case 12 Results Summary .............................................................................. 227
Table 26, Case 12 DAPS Output Profile ........................................................................ 228
LIST OF FIGURES
Figure
Page
Figure 1 Realignment of Major DoD Process for Business Systems (Adapted from DAU
2012) ................................................................................................................................. 10
Figure 2 Business Capability Lifecycle Acquisition Model (Adapted from DAU 2012) 11
Figure 3 Three Activities of Systems Engineering Management (Adapted from DAU
2001) ................................................................................................................................. 14
Figure 4 Life Cycle Integration (Adapted from DAU 2001) ............................................ 15
Figure 5 Systems Engineering Process circa 2001 (Adapted from DAU 2001)............... 16
Figure 6 2003 Systems Engineering Process Model (Adapted from DAU 2012) ............ 17
Figure 7 2009 Systems Engineering Process Model (Adapted from DAU 2012) ............ 18
Figure 8 Technical Process Interfaces (Adapted from DAU 2012) .................................. 19
Figure 9 Technical Management Process Interactions (Adapted from DAU 2012) ......... 22
Figure 10 PMBOK DoD Extension—Primary Linkages Between PMI PMBOK Guide
Knowledge Areas (Adapted from DAU 2003) ................................................................. 31
Figure 11 PMBOK DoD Extension—The Program Management Environment (Adapted
from DAU 2003) ............................................................................................................... 32
Figure 12 PMBOK DoD Extension—The Major DoD Decision Systems (Adapted from
DAU 2003)........................................................................................................................ 33
Figure 13 GAO Knowledge Points (Adapted from GAO 2005) ...................................... 34
Figure 14 Relations between Complexity Concepts and the Information Age Enterprise
(Adapted from Atkinson & Moffat, 2005) ........................................................................ 41
Figure 15 Industrial Age (top) versus Information Age (bottom) Management (Adapted
from Atkinson & Moffat, 2005)........................................................................................ 41
Figure 16 Naval PoPS Structure (Adapted from Department of the Navy 2012) ............ 45
Figure 17 Naval PoPS v2 Factor Scores (Adapted from Department of the Navy 2012) 46
Figure 18 PoPS Criteria Example (Adapted from Department of the Navy 2012) .......... 47
Figure 19 Simple Bayesian Network Example (Adapted from Anaj, 2006) .................... 57
Figure 20 Knowledge Representation in the OODA Loop (Original adapted from Moran,
Patrick E., 2008) ............................................................................................................... 60
Figure 21 BN Naval PoPS Prototype Netica Model ......................................................... 66
Figure 22 BN Prototype Program Success Node CPT...................................................... 70
Figure 23 BN Prototype Program Requirements Factor Node CPT ................................. 71
Figure 24 BN Prototype Program Resources Factor Node CPT....................................... 71
Figure 25 BN Prototype Program Planning and Execution Factor Node CPT ................. 72
Figure 26 BN Prototype External Influencer Factor Node CPT ....................................... 73
Figure 27 BN Prototype Parameter Status Factor Node CPT ........................................... 73
Figure 28 BN Prototype Scope Evolution Node CPT ...................................................... 74
Figure 29 BN Prototype CONOPS Node CPT ................................................................. 74
Figure 30 BN Prototype Program Management Node CPT ............................................. 75
Figure 31 BN Prototype Systems Engineering Node CPT ............................................... 75
Figure 32 BN Prototype Budget and Planning Node CPT ............................................... 76
Figure 33 BN Prototype Manning Node CPT................................................................... 77
Figure 34 BN Prototype Fit in Vision Node CPT ............................................................. 77
Figure 35 BN Prototype Program Advocacy Node CPT .................................................. 78
Figure 36 BN Prototype Interdependencies CPT.............................................................. 78
Figure 37 BN Prototype Case 1—Sensitivity Test 1 Output ............................................ 80
Figure 38 Naval PoPS Case 1—Sensitivity 1 Output ....................................................... 81
Figure 39 BN Prototype Case 2—Sensitivity Test 2 Output ............................................ 83
Figure 40 Naval PoPS Case 2—Sensitivity Test 2 Output ............................................... 84
Figure 41 DAPS Categories of Recurrent Forms of Evidence ......................................... 88
Figure 42 Sample of Evidence Taxonomy by Knowledge Area ...................................... 93
Figure 43 DAPS Knowledge Inference Structure ........................................................... 103
Figure 44 Business Capability Lifecycle Model - 15 Knowledge Checkpoints ............. 107
Figure 45 Knowledge Area to Evidence (KA2E) Arc Example ..................................... 122
Figure 46 Knowledge Area to Knowledge Area (KA2KA) Graph Structure ................. 125
Figure 47 Knowledge Area to Knowledge Checkpoint (KA2KC) Arcs Example ......... 126
Figure 48 Knowledge Area @ KC1 to Knowledge Area @KC2 (KA2KAi+1) Arc
Example .......................................................................................................................... 128
Figure 49 DAPS Complete Model .................................................................................. 131
Figure 50 Evidence Node Probability Profile ................................................................. 140
Figure 51 Evidence Node CPT (Typical) ....................................................................... 140
Figure 52 Knowledge Checkpoint Ranking Observations.............................................. 141
Figure 53 Knowledge Checkpoint Ranking Radar Chart ............................................... 143
Figure 54 Knowledge Checkpoint Node CPT ................................................................ 144
Figure 55 Time KA Ranking Observations .................................................................... 146
Figure 56 Cost KA Ranking Observations ..................................................................... 148
Figure 57 Quality KA Ranking Observations ................................................................. 149
Figure 58 Scope KA Ranking Observations ................................................................... 151
Figure 59 Procurement KA Ranking Observations ........................................................ 153
Figure 60 SE KA Ranking Observations ........................................................................ 155
Figure 61 GM KA Ranking Observations ...................................................................... 157
Figure 62 Time KA Node CPT at MDD......................................................................... 163
Figure 63 Cost KA Node CPT at MDD .......................................................................... 164
Figure 64 Quality KA Node CPT at MDD ..................................................................... 164
Figure 65 Scope KA Node CPT at MDD ....................................................................... 164
Figure 66 Procurement KA Node CPT at MDD ............................................................. 164
Figure 67 SE KA Node CPT at MDD ............................................................................ 165
Figure 68 GM KA Node CPT at MDD........................................................................... 165
Figure 69 Time KA Node CPT at other KCs .................................................................. 166
Figure 70 Cost KA Node CPT at other KCs ................................................................... 166
Figure 71 Quality KA Node CPT at other KCs .............................................................. 167
Figure 72 Scope KA Node CPT at other KCs ................................................................ 167
Figure 73 Procurement KA Node CPT at other KCs ...................................................... 167
Figure 74 SE KA Node CPT at other KCs ..................................................................... 168
Figure 75 GM KA Node CPT at other KCs .................................................................... 168
Figure 76 Case 1 Sensitivity Test 1.1 ............................................................................. 175
Figure 77 Case 1 P(Success) at Knowledge Checkpoints .............................................. 178
Figure 78 Case 1 DAPS Output at SRR.......................................................................... 179
Figure 79 Case 1 DAPS Output at SFR .......................................................................... 180
Figure 80 Case 1 DAPS Output at PreED ...................................................................... 180
Figure 81 Case 1 DAPS Output at MSB......................................................................... 181
Figure 82 Case 1 DAPS Success Factor Profile ............................................................. 182
Figure 83 Case 1 (What-if Analysis) at Milestone A ..................................................... 183
Figure 84 Case 1 DAPS Success Factor Profile with What-if Scenario ......................... 184
Figure 85 Case 2 Sensitivity Test 2.1 ............................................................................. 187
Figure 86 Case 2 DAPS Success Factor Profile ............................................................. 189
Figure 87 Case 2 (What-if analysis) at Milestone A....................................................... 190
Figure 88 Case 2 DAPS Success Factor Profile with What-if Scenario ......................... 191
Figure 89 Case 3 at IOC.................................................................................................. 193
Figure 90 Case 4 at FOC ................................................................................................. 195
Figure 91 Case 5 at SFR ................................................................................................. 197
Figure 92 Case 6 (High Risk) at CDR ............................................................................ 199
Figure 93 Case 7 (Moderate Risk) at CDR ..................................................................... 200
Figure 94 Case 8 (Low Risk) at CDR ............................................................................. 201
Figure 95 Case 9 at ASR ................................................................................................. 204
Figure 96 Case 10 at PRR ............................................................................................... 206
Figure 97 Case 11 DAPS Output at MDD ...................................................................... 211
Figure 98 Case 11 DAPS Output at SFR ........................................................................ 212
Figure 99 Case 11 DAPS Output at Pre-ED ................................................................... 214
Figure 100 Case 11 DAPS Output at MS B.................................................................... 215
Figure 101 Case 12 DAPS Output at MDD .................................................................... 221
Figure 102 Case 12 DAPS Output at Milestone B ......................................................... 222
Figure 103 Case 12 DAPS CDR Output ......................................................................... 224
Figure 104 Case 12 DAPS Output at TRR .................................................................... 226
LIST OF EQUATIONS
Equation
Page
Equation 1, Arc Value (Varc) ............................................................................................. 67
Equation 2, P(Child Node=True|Parent Nodes)................................................................ 68
Equation 3, Arc Weight .................................................................................................. 134
Equation 4, P(Child Node=True|KAs) ............................................................................ 136
Equation 5, Success Factor ............................................................................................. 171
LIST OF ABBREVIATIONS
Alternative System Review............................................................................................ ASR
Business Capability Lifecycle........................................................................................ BCL
Causal-Influence Relationship ........................................................................................ CIR
Constructive Systems Engineering Cost Model ................................................ COSYSMO
Cost Performance Index .................................................................................................. CPI
Critical Design Review ................................................................................................. CDR
Defense Acquisition Guide ........................................................................................... DAG
Defense Acquisition University .................................................................................... DAU
Defense Business System Probability of Success ....................................................... DAPS
Defense Business System .............................................................................................. DBS
Department of Defense .................................................................................................. DoD
Enterprise Resource Planning ........................................................................................ ERP
Earned Value Management System ........................................................................... EVMS
Full Operating Capability .............................................................................................. FOC
General Management ...................................................................................................... GM
Information Technology Procurement Request ............................................................ ITPR
Initial Operating Capability/Full Deployment Decision ........................................IOC/FDD
Initial Technical Review ................................................................................................. ITR
Knowledge Area .............................................................................................................. KA
Knowledge Area to Evidence ..................................................................................... KA2E
Knowledge Area to Knowledge Area ...................................................................... KA2KA
Knowledge Area to Knowledge Checkpoint ........................................................... KA2KC
Knowledge Area to Succeeding Knowledge Area (Dynamic) ............................KA2KAi+1
Knowledge Checkpoint .................................................................................................... KC
Materiel Development Decision .................................................................................. MDD
Milestone A................................................................................................................... MSA
Milestone B ................................................................................................................... MSB
Milestone C ................................................................................................................... MSC
Objective Quality Evidence .......................................................................................... OQE
Pre-Engineering Development Review......................................................................Pre-ED
Preliminary Design Review ........................................................................................... PDR
Probability of Program Success .................................................................................... PoPS
Production Readiness Review........................................................................................ PRR
Project Management Body of Knowledge .............................................................. PMBOK
Schedule Performance Index ........................................................................................... SPI
System Functional Review ............................................................................................ SFR
System Requirements Review ....................................................................................... SRR
Systems Engineering ......................................................................................................... SE
Test Readiness Review .................................................................................................. TRR
ABSTRACT
MANAGEMENT TOWARDS SUCCESS—DEFENSE BUSINESS SYSTEM
ACQUISITION PROBABILITY OF SUCCESS MODEL
Sean Tzeng, Ph.D.
George Mason University, 2015
Dissertation Director: Dr. Kuo-Chu Chang
The great amounts of data and the large number of artifacts generated during the
execution of defense acquisition programs serve as evidence of program progress
and support decision making. However, acquisition decision makers have limited means
to determine what all the evidence items collectively indicate and how they can be
used to support decision making in a way that ensures program success. The
Defense Business System Acquisition Probability of Success (DAPS) model is an
evidence-based analytical tool developed to help decision makers analyze and
understand the implications of the abundance of evidence produced during a
Defense Business System (DBS) acquisition. Based on observations and
inferences from evidence, the DAPS can assess program performance in specific
subject matter areas (Knowledge Areas) and ascertain the overall likelihood for
program success through technical reviews and milestone reviews (Knowledge
Checkpoints). DAPS supports acquisition decision making and is an initial step
forward in improving human understanding and ability to innovate and engineer
systems through evidential reasoning.
1 INTRODUCTION
Information Technology (IT) system development and management came to the forefront of the U.S. federal government in 1996, when the Clinger-Cohen Act was signed into law and mandated oversight and management of IT across the federal government. What ensued was a series of Enterprise Resource Planning (ERP)
Defense Business System (DBS) acquisition programs that became too big, too complex,
and took too long:
"Sen. Claire McCaskill (D-Mo.)—If I add up all 11 of the ERP programs
cumulatively, we're $6 billion over budget and 31 years behind schedule. That's a
problem.” (federalnewsradio.com: April 20, 2012)
“GAO March 2012 Report—10 ERPs DoD identified as critical to business
operations transformation, six experienced schedule delays ranging from two to 12 years,
and five faced cost increases totaling nearly $6.9 billion.” (Insidedefense.com: April 18,
2012)
Developing an IT system to meet organizational needs is not a simple task. It can be very extensive, take a long time to realize, and prove more costly and difficult than originally imagined. This is especially true for large IT projects (over $15 million). In a 2012 study of 5,400 IT projects, University of Oxford researchers reported that large IT projects run, on average, 45% over budget and 7% over time, and are delivered with 56% less value (Bloch 2012). The situation seems even worse for
Department of Defense (DoD) Defense Business System (DBS) acquisition programs,
where the majority of programs would meet the University of Oxford researchers’
threshold for large IT projects. A Government Accountability Office (GAO 2012) report
indicates that, of ten Enterprise Resource Planning (ERP) programs the Department of
Defense (DoD) identified as critical to business operations transformation, nine of them
were experiencing schedule delays of up to 6 years, and seven of the programs were
facing estimated cost increases up to or even over $2 billion. This is occurring even
though acquisition laws, regulations, policies, guidance, and independent assessments, as
well as technical reviews and milestone reviews, guide DBS acquisition.
The execution of DBS programs generates great amounts of data and a large
number of artifacts, such as the Integrated Master Schedule (IMS), Earned Value
Management System (EVMS) Metrics, Business Case, and Systems Engineering Plan
(SEP), as well as Risk Reports and various independent assessments. Decision makers
commonly use these types of data/artifacts at technical reviews and milestone reviews as
evidence of program progress to support their decision making. However, this kind of
development and use of evidence to support decision making has not translated to
desirable investment outcomes. This issue is analogous to what other professional
disciplines have been experiencing, such as the intelligence, criminal justice, engineering,
and medical professions. In today’s information age, availability of information and
evidence is no longer the most challenging issue. Often, data/evidence is abundant, but
the availability of analytical tools limits the ability to deduce what all the evidence means
collectively and how it supports the hypothesis being examined. Good decision making
requires both sufficient information and evidence as well as proper representation of and
inference from that evidence. Currently, DBS acquisition decision makers have limited
means to aid them in holistically and logically processing what all available evidence
collectively indicates about a program and then using it in a structured manner to support
their decisions.
The Defense Business System Acquisition Probability of Success (DAPS) model
is an evidence-based analytical tool developed to help collectively draw inference from
the abundance of available evidence produced during the course of DBS acquisition.
Based on observations and inferences, the DAPS model is able to assess program
performance in specific subject matter Knowledge Areas and to ascertain the overall
likelihood for program success. DAPS is a way to support acquisition decision making
and an initial step forward in improving humanity’s ability to innovate and engineer
systems through evidential reasoning.
1.1 Problem Statement
The large amount of data/artifacts generated through the execution of DBS
programs is commonly used by decision makers at technical and milestone reviews as
evidence of program progress to support their decisions. However, the development and
use of this evidence has not translated to desirable investment outcomes. In today's information age, the availability of information and evidence is no longer the most challenging issue. Often, data/evidence is abundant, but the availability of analytical tools limits the ability to conclude what all the evidence means collectively and how it
supports the hypothesis being examined. Good decision making requires not only
information and evidence, but also proper representation of and inference from the
evidence to support it. Currently, DBS acquisition decision makers have limited means to
aid them in holistically and logically processing what all the available evidence indicates
about a program and then using that evidence in a structured manner to support decision
making.
1.2 Motivation and Background
Acquisition programs in the DoD experience a great deal of complexity, difficulty, and inefficiency. The underperforming ERP programs discussed in the
Introduction are representative of these challenges. Acquisition professionals, including
systems engineers and project/program managers, as well as acquisition decision
authorities, constantly live with the headache of managing the scope, cost, schedule, and
system quality of a program while trying to meet statutory and regulatory acquisition
requirements.
The DAPS model will help acquisition professionals process the abundance of
acquisition data/evidence in a repeatable, reliable, and structured manner, then use this
insight to make better-informed decisions. The DAPS model draws on DBS acquisition
evidence to inform acquisition professionals whether the program is on track toward
success and aid decision authorities in making the difficult decision whether to continue
the program.
1.3 Research Hypothesis and Goals
To summarize the research focus, the Research Hypothesis for the DAPS research
is provided below:
A DBS Program’s Probability of Success can be reliably and repeatedly measured
and predicted, in a structured manner, through the collective inference of available
evidence (data, reports, plans, and artifacts produced during acquisition execution)
using Bayesian Networks, and can be used to support acquisition decision making.
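To make the kind of collective evidential inference described in the hypothesis concrete, the sketch below builds a toy Bayesian Network with the open-source pgmpy library (the class is named BayesianModel in older pgmpy releases). The node names and every probability value are illustrative placeholders, not the elicited DAPS structure or conditional probability tables developed in later chapters.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two hypothetical evidence items feed a single Knowledge Area, which in turn
# drives Program Success at one Knowledge Checkpoint.
# State 0 = favorable, state 1 = unfavorable.
model = BayesianNetwork([
    ("ScheduleEvidence", "CostKA"),
    ("EVMSEvidence", "CostKA"),
    ("CostKA", "ProgramSuccess"),
])

# Prior beliefs for the evidence nodes and conditional probability tables
# for the downstream nodes (all values are notional).
cpd_sched = TabularCPD("ScheduleEvidence", 2, [[0.7], [0.3]])
cpd_evms = TabularCPD("EVMSEvidence", 2, [[0.6], [0.4]])
cpd_ka = TabularCPD(
    "CostKA", 2,
    [[0.95, 0.60, 0.70, 0.10],   # P(CostKA = favorable | parent combination)
     [0.05, 0.40, 0.30, 0.90]],  # P(CostKA = unfavorable | parent combination)
    evidence=["ScheduleEvidence", "EVMSEvidence"],
    evidence_card=[2, 2],
)
cpd_success = TabularCPD(
    "ProgramSuccess", 2,
    [[0.90, 0.20],
     [0.10, 0.80]],
    evidence=["CostKA"],
    evidence_card=[2],
)
model.add_cpds(cpd_sched, cpd_evms, cpd_ka, cpd_success)
assert model.check_model()

# Observing favorable states of both evidence items updates the belief in
# program success through the intermediate Knowledge Area node.
infer = VariableElimination(model)
posterior = infer.query(
    variables=["ProgramSuccess"],
    evidence={"ScheduleEvidence": 0, "EVMSEvidence": 0},
)
print(posterior)
```

The full DAPS model applies the same mechanism at a much larger scale, propagating many evidence observations through Knowledge Area and Knowledge Checkpoint nodes across the acquisition lifecycle.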
The DAPS research goals include the following:
1. Develop a probabilistic reasoning system using Bayesian Networks to collectively
draw inferences from evidence available from DBS acquisition
2. Model complex interrelationships within DBS acquisition to simulate real-world
interactions affecting program success
3. Model dynamic relationships of DBS acquisition to enable the prediction of future
program success/failure
4. Incorporate risk management elements into the model
5. Incorporate evidence-based decision making into the model
6. Tailor the model to DBS acquisition programs while keeping it adaptable to IT and system projects in general
1.4 Dissertation Contributions
The contributions of the DAPS research are outlined below:
1. Developed a quantitative system for the fusion of DBS acquisition
data/knowledge to aid decision makers in holistically and logically processing all
available data/evidence by:
a. Measuring performance of key areas of a program (Knowledge Areas)
b. Measuring success at reviews/milestones (Knowledge Checkpoints)
c. Predicting future program success based on dynamic modeling
2. Conducted Knowledge Engineering of DBS/Defense acquisition from an
evidence-based perspective:
a. Analyzed and constructed taxonomy of DBS acquisition evidence
b. Elicited expert knowledge for the evidential relationships/influences on
program success, including intermediate complex interrelationships
c. Implemented the Knowledge Engineering into the DAPS model
3. Conducted a total of 14 case analyses of hypothetical and real-world cases,
demonstrating the potential to aid evidence-based decision making, as well as the
potential for further research regarding program evidence, program success, and
additional applications and expansions of the DAPS model.
1.5 Research Design
The research was conducted in 4 phases, as listed below:
Phase 0—Conceptualization—Dissertation Proposal
Phase 1—Model Formulation and Design
Phase 2—Knowledge Elicitation and Implementation
Phase 3—Model Validation and Documentation
1.5.1 Phase 0—Conceptualization—Dissertation Proposal
The dissertation proposal presented the motivation, hypothesis, background
literature review and discussion, design of the research project, and the preliminary
Bayesian Networks prototype based on the Probability of Program Success (PoPS)
framework. With the successful review and approval of this proposal, the dissertation
research proceeded to the next phase of the research to start the detailed model design.
1.5.2 Phase 1—Model Formulation and Design
Phase 1 of the research started by analyzing and organizing DBS acquisition
evidence items. Based on the evidence organization, the model structure was designed
and built out iteratively, starting from a simple model at a static point containing a small
amount of evidence. The model was then gradually expanded to a comprehensive
dynamic model encompassing the full acquisition lifecycle from Materiel Development Decision (MDD) to Full Operating Capability (FOC).
1.5.3 Phase 2—Knowledge Elicitation and Implementation
Phase 2 of the dissertation research consisted of the knowledge elicitation and the
implementation of the acquired knowledge into the DAPS model. The knowledge
elicitation part of the phase started with a smaller set of 5 Subject Matter Expert (SME)
interviews. This provided an opportunity to work out any issues with the model and interview material before interviewing a wider audience. Each interview was
conducted individually and consisted of a presentation of the research and the preliminary
model, completion of the survey for network structure and probability specification, and
open questions about how to improve the survey or issues with the model.
The initial 5 SME interviews and model implementation did not result in changes
to the DAPS model structure itself. However, the data collection sheets were streamlined to eliminate low-value data that would not be used for this dissertation. The data
collection methods will be discussed further in Chapter 6. Expanded data collection
eventually reached 17 SMEs, a sufficient sample size for this dissertation. The analysis
and implementation of the expert data into the final DAPS model is presented in Chapter
6. The SME interviews also provided initial confirmation that the model framework aligns with expert experience as well as with current research in the relevant subject matter domains.
1.5.4 Phase 3—Model Validation and Documentation
The final phase of the research was to validate the model and document the model
results through case analyses. In total, 14 case analyses were conducted in this
dissertation. Two were conducted for the prototype model in Chapter 3. Twelve more
were conducted for the final DAPS model in Chapter 7, including 10 hypothetical cases
and 2 real-world cases.
2 LITERATURE REVIEW AND DISCUSSION
The background research necessary for the DBS Acquisition Probability of Success (DAPS) dissertation was expansive. A selection of the background most relevant to DAPS is provided and discussed in this chapter, covering the following topics:
• Defense Business System Acquisition
• Systems Engineering
• Project and Program Management
• Complex Adaptive Systems
• Probability of Program Success (DoD)
• Evidential Reasoning and Probability Systems Overview
• Bayesian Network and Knowledge Representation
2.1 Defense Business System Acquisition
DoD Defense Business Systems (DBS) acquisition transitioned to the Business
Capability Lifecycle (BCL) Acquisition Model in 2011 (DAU 2013). The move was
meant to streamline the stringent and complex DoD Acquisition Process to accommodate IT systems' need to develop and deploy capabilities to users quickly, before they become outdated. Although the Interim DoD Instruction 5000.02 of November 2013 allows greater tailoring and adaptation of the acquisition model, the original BCL requirements for DBS acquisition oversight remain in place under the interim instruction (DoD 2013). Figure 1 below depicts the streamlined
acquisition systems.
Figure 1 Realignment of Major DoD Process for Business Systems (Adapted from DAU 2012)
The BCL system reviews and manages a program’s investment worthiness and
acquisition execution. BCL combined the Investment Review Board/Defense Business
System Management Committee (IRB/DBSMC) and the Defense Acquisition System
(DAS). It also eliminated the Joint Capabilities Integration and Development System (JCIDS)
process for DBS acquisition.
The Planning, Programming, Budgeting, Execution (PPBE) system manages the
funding of the DoD programs by considering emergent requirements, risks, and
budgetary constraints. The PPBE system remains unchanged in the updated DBS
Acquisition process. Although not shown here, other process systems that have
significant effects on the acquisition programs include the contracting policies and
regulations, such as the Federal Acquisition Regulations (FAR) and Defense Federal
Acquisition Regulations (DFAR), and the Information Assurance (IA) certification and
accreditation process system. Additionally, DBS acquisitions are also governed by
several other IT management directives of the federal government, including the IT
Procurement Request (ITPR) process, data center consolidation (DCC), portfolio
management, and enterprise licensing.
Figure 2 below provides a high-level phasing view of the BCL acquisition
process.
Figure 2 Business Capability Lifecycle Acquisition Model (Adapted from DAU 2012)
Defense Acquisition Guide descriptions of the three BCL phases are provided below
(DAU 2012):
Business Capability Definition Phase—The rigorous up-front analysis that results in the Problem Statement section of the Business Case.
Investment Management Phase—Activities that result in the expansion of the Problem
Statement into the Business Case and the development of the Program Charter.
Execution—The last four phases of BCL that result in a fully designed, developed, tested,
deployed, and sustained increment of capability (materiel and non-materiel solution) that
satisfies the specific outcomes defined in the Business Case.
2.2 Systems Engineering
An overview of the systems engineering discipline is discussed in this section,
with specific focus on defense applications.
2.2.1 Systems Engineering Definitions
What is Systems Engineering? A few selected definitions are provided below.
“Simply stated, a system is an integrated composite of people, products, and
processes that provide a capability to satisfy a stated need or objective.” (DAU 2001)
Defense Acquisition University’s (DAU's) definition of systems engineering is as
follows:
“Systems engineering consists of two significant disciplines: the technical
knowledge domain in which the systems engineer operates, and systems engineering
management.” (DAU 2001)
Two of the most renowned researchers in the field of systems engineering, Sage
and Rouse, define systems engineering as follows:
“[The] purpose of Systems Engineering is information and knowledge
organization and management to assist clients who desire to develop policies for
management, direction, control, and regulation activities relative to forecasting,
planning, development, production, and operation of total systems to maintain overall
quality, integrity, and integration as related to performance, trustworthiness, reliability,
availability, and maintainability.” (Sage and Rouse 2009)
Systems Engineering Management, as discussed in the Defense Acquisition University's definition of systems engineering, contains three major activities: development phasing, the systems engineering process, and life cycle integration (DAU 2001). Figure 3 provides the Defense Acquisition University's overview of Systems
Engineering Management's three major activities and their overlaps. The three activities
are further discussed below:
Development phasing—Activities that control the design process and provide
that coordinate design efforts. The phasing model, the BCL Model for defense business IT
system acquisition as of 2012, was discussed in Section 2.1.
Systems engineering process—Process that provides a structure for solving design
problems and developing, tracking, verifying, and validating requirements through the
design effort. General Frameworks are presented in Section 2.2.2, Systems Engineering
Process.
Life cycle integration—Activity that involves customers in the design process and
ensures that the system developed is viable throughout its life. The eight primary lifecycle functions, as presented in Systems Engineering Fundamentals, are shown below in Figure 4. Life cycle integration is also commonly referred to as Lifecycle Logistics or Integrated Product Support. This activity is not discussed further.
Figure 3 Three Activities of Systems Engineering Management (Adapted from DAU 2001)
Figure 4 Life Cycle Integration (Adapted from DAU 2001)
2.2.2 Systems Engineering Process
The Systems Engineering Process model as described by DAU (DAU 2001) is a
simpler legacy process model that is applicable to most if not all development phasing
frameworks. It is provided below in Figure 5. Defense Acquisition Guide Chapter 4
Systems Engineering provided an updated version of the Systems Engineering Process
model, shown in Figure 6 below (dated 2003). A 2009 version is also shown in Figure 7
below.
Figure 5 Systems Engineering Process circa 2001 (Adapted from DAU 2001)
The most significant change to the DoD Systems Engineering Process occurred
from the 2001 version to 2003. This change was made to better emphasize the testing,
verification, and validation activities within the systems engineering process, famously
depicted in a “V” shape for verification and validation.
The 2009 version renamed the first three top-down design processes: Stakeholder Requirements Definition, Requirements Analysis, and Architecture Design. There were no other changes from 2003 to 2009. The Technical
Management Process portion of the 2003 and 2009 Systems Engineering Model is an
update of the Systems Analysis and Control Process of the legacy process shown in
Figure 5.
Figure 6 2003 Systems Engineering Process Model (Adapted from DAU 2012)
Figure 7 2009 Systems Engineering Process Model (Adapted from DAU 2012)
2.2.2.1 Technical Processes
As shown in Figure 7, there are eight Technical Processes in the DoD Systems
Engineering Process. The Technical Process interfaces are shown in Figure 8.
1. Stakeholder Requirements Definition Process—Supports the stakeholders/users in defining the business requirements, performance parameters, cost-schedule-technical constraints, and Concept of Operations (CONOPS).
2. Requirements Analysis Process—Analyzes user requirements to derive the system's functional baseline to facilitate architecture design.
3. Architecture Design Process—Designs the physical system architecture.
4. Implementation Process—Realizes the system design, including the alternatives
to buy a commercial system, build a new system, or re-use/modernize an existing
system.
5. Integration Process—Integrates system elements into one system product.
6. Verification Process—Tests the system to verify that the specified requirements
are met.
7. Validation Process—Validates that the right system has been built, beyond the
verification of the requirements. Validates that the system is useful to the user.
8. Transition Process—Transitions to the next phase of the acquisition life cycle.
This could be the next phase of integration, or for an end-item system, this would
be the installation and deployment process for operational use.
Figure 8 Technical Process Interfaces (Adapted from DAU 2012)
2.2.2.2 Technical Management Process (Systems Analysis and Control)
The Technical Management Process (Systems Analysis and Control circa 2001) is
an important activity in the Systems Engineering Process. It provides documentation,
quality control, and decision support to the process. The eight elements of the Technical Management Process are:
1. Decision Analysis—Discipline that provides decision makers with formal analysis of the available alternatives and fact-based recommendations. Examples include
Business Case Analysis (BCA), Analysis of Alternatives (AoA), design
alternatives, trade-off studies, system analysis, and supportability analysis.
2. Technical Planning—Activities that provide critical input to program planning.
The DoD mandated tool for technical planning is the Systems Engineering Plan,
which contains information on the Integrated Master Schedule (IMS), Work
Breakdown Structure (WBS), Technical Review planning, as well as technical
resources planning.
3. Technical Assessment—Activities that measure program technical progress and
assess both program plan and requirements (DAG 2012). Activities include but
are not limited to Technical Reviews, Technical Performance Measurement assessments, Earned Value Management System (EVMS) reviews, and Program Reviews (a brief worked example of the EVMS indices follows this list).
4. Requirements Management—Activities consisting of the development and
upkeep of the traceability of requirements throughout the lifecycle, from user
requirements to design specifications to requirements verification and validation,
as well as requirements changes.
5. Risk Management—Activities that encompass identification, analysis, mitigation
planning, mitigation plan implementation, and tracking of Risk Items (DAG
2012).
6. Configuration Management—Activities that establish and control system
attributes and technical baselines for the systems development lifecycle. These
activities control the consistency of the system by making sure system
configurations are correctly documented, approved, and updated.
7. Technical Data Management—Activities to manage the technical data that are
developed throughout the systems development lifecycle. This includes all
technical and systems engineering artifacts and the maintenance of traceability
among all artifacts.
8. Interface Management—Activities to manage and control internal and external
interfaces and ensure certification and accreditation compliance.
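Because Earned Value Management System metrics recur throughout DBS acquisition as evidence of cost and schedule progress, a brief worked example of the two standard indices is included here, using illustrative figures rather than data from any actual program. With Earned Value (EV) = $8.0M, Planned Value (PV) = $10.0M, and Actual Cost (AC) = $9.0M, the Cost Performance Index is CPI = EV/AC = 8.0/9.0 ≈ 0.89 and the Schedule Performance Index is SPI = EV/PV = 8.0/10.0 = 0.80; values below 1.0 indicate that the work performed has cost more than budgeted and is running behind the planned schedule, respectively. Indicators of this kind are what the Technical Assessment element feeds into technical and program reviews, and they serve as the kind of evidence the DAPS model draws on.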
The technical management processes operate continuously in concert with one
another to support and control the application of the technical processes (DAU 2012).
Figure 9 below depicts the interactions of the Technical Management Processes with the
Technical Process.
Figure 9 Technical Management Process Interactions (Adapted from DAU 2012)
2.2.3 Cost and Schedule Forecasting—Constructive Systems Engineering Cost
Model (COSYSMO)
Cost and schedule forecasting are crucial tools in systems engineering. They are
the basis for program budgeting, planning, and scheduling and are constraints for
program execution. The Constructive Systems Engineering Cost Model (COSYSMO), a
derivation of the Constructive Cost Model (COCOMO), is a cost-modeling tool
specifically developed with systems engineering in mind for the government's complex acquisition/development programs (Valeri 2005). COSYSMO consists of four size drivers and 14 cost drivers, listed below:
Size Drivers:
1. Number of system requirements
2. Number of major interfaces
3. Number of critical algorithms
4. Number of operational scenarios
Cost Drivers:
1. Requirements understanding
2. Architecture understanding
3. Level of service requirements
4. Migration complexity
5. Technology risk
6. Documentation to match life cycle needs
7. Number and diversity of installation/platforms
8. Number of recursive levels in the design
9. Stakeholder team cohesion
10. Personnel team capability
11. Personnel experience/continuity
12. Process capability
13. Multisite coordination
14. Tool support
These COSYSMO size and cost drivers were some of the key factors considered
in developing the DAPS Knowledge Areas and evidence taxonomy.
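The size and cost drivers above enter the COSYSMO estimate through the COCOMO-family functional form: effort is proportional to a weighted sum of the size drivers, raised to a diseconomy-of-scale exponent, and scaled by the product of the cost-driver effort multipliers. The sketch below illustrates that form only; the weights, multipliers, calibration constant, and exponent are placeholders rather than the calibrated COSYSMO parameters (see Valeri 2005 for the calibrated model).

```python
# Illustrative sketch of the COSYSMO functional form:
#   effort = A * (weighted size)**E * product(effort multipliers)
# All numeric values below are placeholders, not calibrated COSYSMO parameters.

def cosysmo_effort(size_counts, size_weights, effort_multipliers, A=1.0, E=1.06):
    """Estimate systems engineering effort (arbitrary units).

    size_counts        -- counts of the four size drivers (requirements,
                          interfaces, critical algorithms, operational scenarios)
    size_weights       -- relative weight assigned to each size driver
    effort_multipliers -- one multiplier per rated cost driver
    A, E               -- calibration constant and diseconomy-of-scale exponent
    """
    weighted_size = sum(count * weight
                        for count, weight in zip(size_counts, size_weights))
    multiplier_product = 1.0
    for em in effort_multipliers:
        multiplier_product *= em
    return A * weighted_size ** E * multiplier_product

# Hypothetical program: 300 requirements, 20 interfaces, 5 critical algorithms,
# and 10 operational scenarios, with a few notional cost-driver ratings.
effort = cosysmo_effort(
    size_counts=[300, 20, 5, 10],
    size_weights=[1.0, 3.0, 4.0, 2.5],        # notional size-driver weights
    effort_multipliers=[1.2, 1.0, 0.9, 1.1],  # notional cost-driver ratings
)
print(f"Estimated effort: {effort:.0f} (arbitrary units)")
```

Because a larger size or a less favorable cost-driver rating increases the estimate multiplicatively, these drivers were natural candidates to consider when defining the DAPS Knowledge Areas and evidence taxonomy.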
2.3 Project and Program Management
The Project Management Institute (PMI) defines project management in the
Project Management Body of Knowledge (PMBOK) as:
“The application of knowledge, skills, tools, and techniques to project activities to meet
the project requirements.” (PMI 2008)
PMBOK also describes projects as temporary in nature, with beginning and end
dates. PMBOK Guide describes a “program” as a group of projects managed in a
coordinated way to obtain benefits not available from managing them individually (PMI
2008). Nevertheless, many of the same principles and practices apply to both (DAU
2003). To avoid unnecessary confusion, the terms "project" and "program" are treated as interchangeable for the purposes of this dissertation. The term "program" is used primarily, unless the original source uses the term "project," as in some of the sections immediately below.
2.3.1 Defining Project Success
Cook-Davis states that project success is not the same as project management
success (Cook-Davis 2002). Practitioners believe the two are related; however, there is currently no established scientific evidence (Demir 2008), due to difficulties in defining what project success is. There are simply too many different subjective views of project
success.
Cook-Davis further describes project success in a three-level pyramid format
(Cook-Davis 2002). At the top level of Cook-Davis's project success pyramid is the
“Project Management Success” level. It asks the question, “Was the project done right?”
In other words, are the requirements to the project satisfied? Is the project successful in
meeting the triple constraints of cost, schedule, and performance?
Cook-Davis has “Project Success” at the second level, asking “Was the right
project done?” That is, outside of meeting the requirements set by the sponsors, users,
and stakeholders, did the sponsor scope the project correctly and did the sponsor choose
the right project to fund?
At the bottom of the pyramid, Cook-Davis specifies “Consistent Project Success.”
It asks, “Were the right projects done right, time after time?” In other words, the bottom
of the pyramid is referring to the governance of the organization in choosing the right
project and doing it the right way to meet the project requirements. For DoD acquisition,
this would be synonymous with success at the Program Executive Office (PEO) level or
higher.
Cook-Davis’s project success definitions are similar to the systems engineering
verification and validation terminologies. The Project Management Success definition
verifies that the project requirements were satisfied and that the project was done right.
The Project Success definition validates that the right project requirements were specified
and that the right project was done. The Consistent Project Success definition is part of
the continuous process improvement for project initiation, planning, and execution,
ensuring that project success and project management success can be achieved time after
time.
The focus of this DAPS dissertation research is on supporting conclusions
regarding the questions asked at the first two levels of Cook-Davis’s Project Success
Pyramid. It supports determinations of whether or not “the right project was done” and if
“the project was done right.”
Pinto and Rouhiainen argue that the traditional triple-constraint criteria of cost, schedule, and performance do not work in the modern business world (Pinto and Rouhiainen 1998). They insist that customer satisfaction be added as a key criterion.
This is consistent with Cook-Davis’s definition of project success.
Procacino defines project success from the perspective of the software
developer/practitioner. His research indicates that developer motivation is the single biggest factor in software productivity and project success. Also important are planning,
requirements, and stakeholder involvement. Procacino developed a quantitative model for
early assessment of software development success from the developer’s perspective
(Procacino 2001). In his research, Procacino extracted five variables that had the lowest
coefficient of variation in a survey conducted with software developers. These five
variables are (Procacino 2001):
1) It is important to your perception of project success that there is a project plan.
2) It is important to your perception of project success that you have a sense of
achievement while working on a project.
3) It is important to your perception of project success that you do a good job (i.e.,
delivered quality) while working on a project.
4) It is important to your perception of project success that the project is well
planned.
5) It is important to your perception of project success that requirements are
accepted by the development team as realistic/achievable.
According to Evans, Abela, and Beltz, the first characteristic of dysfunctional
software projects is a failure to apply essential project management practices (Evans et al
2002). This analysis was derived from a study of 841 risk events in 280 software projects.
480 out of 841 risk events (57%) in software projects were due to not applying essential
project management practices. Jones reports that an analysis of 250 software projects
between 1995 and 2004 revealed six major areas affecting successful and failing projects
(Jones 2004):
1) Project planning
2) Project cost estimating
3) Project measurements
4) Project milestone tracking
5) Project change management
6) Project quality control
Demir described four approaches for measuring software management effectiveness: subjective evaluation, questionnaire-based measurement, metrics-based measurement, and model-based measurement (Demir 2008). Demir also proposed the development of a Project Management Model, PROMOL (Demir 2008-2).
Shenhar et al. assert that defining and assessing project success is a strategic
management concept that should help align project efforts with the short- and long-term
goals of the performing organization (Shenhar 2001). They argue that the new success
criteria involve four dimensions developed from two data sets of 127 projects. The four
dimensions are:
1) Project efficiency
2) Impact on the customer
3) Business Success
4) Preparing for the future
2.3.2 Project Management Body of Knowledge (PMBOK) and the DoD
Extension
The PMBOK Guide developed by the Project Management Institute is widely
recognized as one of the leading sources of project management knowledge and best
practices. This section describes the PMBOK DoD Extension, as applied to DoD
acquisition programs (DAU 2003).
The PMBOK Guide describes work as being accomplished by processes (PMI
2008). Processes overlap and interact throughout a project or its various phases.
Processes are described in terms of:
• Inputs (documents, plans, designs, etc.)
• Tools and Techniques (mechanisms applied to inputs)
• Outputs (documents, products, etc.)
PMBOK contains 42 processes organized into five basic groups and nine
Knowledge Areas.
The five process groups are:
1. Initiating
2. Planning
3. Executing
4. Monitoring and Controlling
5. Closing
The nine Knowledge Areas are:
1. Project Integration Management—referring to project planning and execution, configuration control and management, the Project Management Information Systems (PMIS), and Earned Value Management Systems (EVMS)
2. Project Scope Management—referring to the management of project scope and defining and verifying project requirements
3. Project Time Management—referring to project schedule management
4. Project Cost Management—referring to cost and funding management
5. Project Quality Management—referring to product quality that meets project requirements
6. Project Human Resource Management—referring to the management of resources and contractors for the project
7. Project Communications Management—referring to the management of communication with all stakeholders
8. Project Risk Management—referring to the risk management process, planning, and execution
9. Project Procurement Management—referring to the procurement of material, the process, and contracts
The five DoD Extensions to the PMBOK are:
1. Project Systems Engineering Management
2. Project Software Acquisition Management
3. Project Logistics Management
4. Project Test and Evaluation Management
5. Project Manufacturing Management
Figure 10 below shows the nine PMBOK Knowledge Areas overlaid with these
five traditional DoD Program Management areas. Figure 10 shows that, although the
DoD program management areas are organized differently, they generally cover the same
program management activities.
As shown in Figure 10, PMBOK DoD Extension has referenced Systems
Engineering in five of the PMBOK Knowledge Areas: Time, Cost, Risk, Integration, and
Quality. However, for a systems engineer in the DoD, as can be illustrated by the DoD
Systems Engineering Plan outline published by the Office of Deputy Assistant Secretary
of Defense (ODASD) Systems Engineering (ODASD 2011), systems engineering responsibilities would also include communications, human resources, and scope. Furthermore, although procurement is not a primary responsibility of systems engineers, they are key players in procurement actions through the development of the statement of work and the technical evaluation to determine best value for contract award. This dissertation's research touches all nine Knowledge Areas of PMBOK,
centered on Cost, Time, Scope, and Quality performance as direct measures for program
success.
Figure 10 PMBOK DoD Extension—Primary Linkages Between PMI PMBOK Guide Knowledge
Areas (Adapted from DAU 2003)
Figure 11 below displays a high-level view of the operational environment of the
DoD Program Management Office (PMO) and shows the relationships of the PMO to the
Congress and the executive branch of the government as well as the industry supplying
the system product and other organizational entities involved in acquisition.
Figure 11 PMBOK DoD Extension—The Program Management Environment (Adapted from DAU 2003)
Figure 12 below illustrates the three major DoD Decision Systems for defense
acquisition programs. The BCL process for the DBS, circa 2012, was intended to
streamline this process for business IT systems acquisition, as discussed in Section 2.1.
Figure 12 PMBOK DoD Extension—The Major DoD Decision Systems (Adapted from DAU 2003)
2.3.3 Knowledge-Based Acquisition/Management
In its 2005 report on NASA acquisition programs, GAO recommended, with NASA's concurrence, that NASA transition to a Knowledge-Based Acquisition framework to improve acquisition program performance. GAO has made the same recommendation to the DoD in other reports, including its 2011 report (GAO 2011).
DoD has since embedded Knowledge-Based Acquisition into the Defense Acquisition
Guide (DAU 2013). GAO describes Knowledge-Based Acquisition as follows:
“A knowledge-based approach to product development efforts enables developers to be
reasonably certain, at critical junctures or ‘knowledge points’ in the acquisition life cycle,
that their products are more likely to meet established cost, schedule, and performance
baselines and, therefore provides them with information needed to make sound
investment decisions.” (GAO 2005)
The 2005 GAO report called for three critical Knowledge Points, as depicted
below in Figure 13.
Figure 13 GAO Knowledge Points (Adapted from GAO 2005)
The 2013 version of the Defense Acquisition Guide (DAG) includes the GAO
recommendations for Knowledge-Based Acquisition:
“Knowledge-Based Acquisition is a management approach which requires adequate
knowledge at critical junctures (i.e., knowledge points) throughout the acquisition
process to make informed decisions.” (DAU 2013)
The Defense Acquisition Guide called for four Knowledge Points instead of the
three GAO recommended:
Program Initiation. Knowledge should indicate a match between the needed capability
and available resources before a program starts. In this sense, resources are defined
broadly, to include technology, time, and funding.
Post-Critical Design Review Assessment. Knowledge should indicate that the product
can be built in a way consistent with cost, schedule, and performance parameters. This
means design stability and the expectation of developing one or more workable
prototypes or engineering development models.
Production Commitment. Based on the demonstrated performance and reliability of
prototypes or engineering development models, knowledge prior to the production
commitment should indicate the product is producible and meets performance criteria.
Full-Rate Production Decision. Based on the results of testing initial production articles
and refining manufacturing processes and support activities, knowledge prior to
committing to full-rate production should indicate the product is operationally capable;
lethal and survivable; reliable; supportable; and producible within cost, schedule, and
quality targets. (Defense Acquisition Guide 2013)
The more knowledge is gained, the less risk or uncertainty there is about the
program. Sufficient knowledge reduces the risk associated with the acquisition program
and provides decision makers and program managers higher degrees of certainty to make
better decisions. This knowledge-based approach is synonymous with evidence-based
decision making. The concept of Knowledge-Based Acquisition is fully adopted in
this dissertation and built into the DAPS model. The Knowledge Points mentioned in the
DAG and the GAO reports are called Knowledge Checkpoints in DAPS. At each
Knowledge Checkpoint, the knowledge level in each DAPS Knowledge Area is assessed
based on the relevant evidence. Knowledge represents certainty, which may contribute to
the likelihood of program success, while risk represents uncertainty, which may increase
the likelihood of program failure. The DAPS Knowledge Areas and Knowledge
Checkpoints are further discussed in Chapter 5, Model Structure.
2.4 Complex Adaptive Systems
Defense acquisition programs operate in a complex environment to develop
engineering systems. They interact with several DoD “process systems,” as described in
Section 2.1 of this dissertation: the Business Capability Lifecycle (BCL) acquisition
process system for DBS; the Planning, Programming, Budgeting, and Execution (PPBE)
budgeting process system; the IT Procurement Request (ITPR) approval process system;
contracting policies and regulations; and the Information Assurance certification and
accreditation system. They also interact with external commercial environments and
external threats, as well as technology evolutions.
The study of complex adaptive systems provides valuable insights into the intricacies of DoD IT acquisitions. These insights were considered and used in the development of
the DAPS model to support decision making in a complex environment. An overview of
the complexity science related to this dissertation is discussed in this section.
2.4.1 Complex System and Complex Adaptive System Definitions
A Complex Adaptive System (CAS) consists of a large number of mutually
interacting and interwoven parts and agents (CASG 2014). Examples of CAS include the
stock market, the human body, galaxies, ant colonies, manufacturing businesses, human
cultures, politics, and social systems in general.
Sheard uses the terms “complex system” and “complex adaptive system”
synonymously, and defines complex system as it pertains to systems engineering as
follows:
"Complex systems are systems that do not have a centralizing authority and are not
designed from a known specification, but instead involve disparate stakeholders creating
systems that are functional for other purposes and are only brought together in the
complex system because the individual ‘agents’ of the system see such cooperation as
being beneficial to them."
1. Complex systems have many autonomous components, i.e., the basic building
blocks are the individual agents of the systems
2. Complex systems are self-organizing
3. Complex systems display emergent macro-level behaviors that emerge from the
actions and interactions of the individual agents. The structure and behaviors of a
complex system are not easily deduced or inferred solely from the structure and
behavior of its component parts. Rather, the interactions among the parts matter
dramatically and may dominate the complex system's structure and behavior.
4. Complex systems adapt to their environment as they evolve. (Sheard 2008)
Three important principles of CAS include emergence, self-organization, and
path dependence. Emergence describes the tendency of behaviors to “arise” out of more
fundamental entities; yet these emergent behaviors are “novel” or “irreducible” with
respect to their component parts (Stanford 2014). Self-organization, simply put, describes the tendency of a system's parts/agents to organize and connect without central direction.
This sometimes includes a change from a bad organization to a good one (Ashby 1962).
Path dependence explains how the set of decisions one faces for any given circumstance
is limited by the decisions one has made in the past, even though past circumstances may
no longer be relevant (Praeger 2007).
2.4.2 Related Complexity Literature
Many thinkers, scholars, and managers have noticed the transformations that have
developed in business since rapid advancements in information technology started taking
place. These people may or may not have been aware of the field of complex adaptive
systems studies. This section surveys selected works on management in CAS
environments.
Norman and Kuras argue that it is useful to examine the differences between
Traditional Systems Engineering (TSE) and Complex Systems Engineering (CSE). They
also indicate that Traditional Systems Engineering and Complex Systems Engineering
should be used concurrently, applying Traditional Systems Engineering to the autonomous individual agents in the complex system while applying Complex Systems Engineering to
the overall system realization (Norman and Kuras).
Engineering Complex Systems by Norman and Kuras reveals several heuristics
and ideas about Complex Systems Engineering:
1. Developmental Precepts—These constitute “rules of the game,” and by doing so
stimulate contextual discovery and interaction among autonomous agents.
2. Safety Regulations—In many natural evolutionary complex systems, generations
(populations with slightly different capabilities) overlap.… Older components
remain “on-line” (and in use) while “newer” components are brought “in-line,”
then “on-line” as surety against catastrophic complex system failure. This is
illustrative of managed redundancy as a safety regulation in complex systems.
3. Duality—This is the explicit recognition that development cannot be completely
separated from operation in the case of complex system.
4. Collaborative Environments
5. Partnerships
6. Developer Networks—These create opportunities for others.
7. Branding
8. “Co-opetition”—This is the balance between cooperation and competition.
9. Leveraging Other Investments
10. Opportunistic Approach
11. Permitting “Value-Added” Business Models (Norman and Kuras)
Eisner’s work highlights the importance of systems thinking: he emphasizes fundamentals, simplification, and, most important of all, thinking outside the box (Eisner
2005). The text focuses on the guidance needed to be a more creative and innovative
thinker: look at the bigger picture, remove constraints, question conventional wisdom,
and think about crossover (creating leverage) from the start.
Atkinson and Moffat argue that Agile Management is required to control a
dynamically agile system (Atkinson and Moffat, 2005). The authors used Ashby’s Law of
Requisite Variety (Ashby 1957) from cybernetics to support this theory, stating that the
larger the variety of actions available to a control system, the larger the variety of
perturbation it is able to compensate for.
Figure 14 below highlights the relationships between complexity concepts and the
information age enterprise. This figure helps put into perspective the challenges and complexities of acquisition programs discussed earlier in this dissertation. Figure 15 is a comparison between the management systems of the industrial age and those of the information age. The information-age organization hierarchy shows much more linkage among the nodes as well as multidirectional information flow among them. The industrial-age hierarchy, on the other hand, relies on strictly top-down information flow. Enterprises in the information age operating in a complex environment should be flatter and more empowered, shifting from following orders to self-organization and self-synchronization.
Figure 14 Relations between Complexity Concepts and the Information Age Enterprise (Adapted from Atkinson
& Moffat, 2005)
Figure 15 Industrial Age (top) versus Information Age (bottom) Management (Adapted from Atkinson
& Moffat, 2005)
Background research on CAS contributed heavily toward the decision to use the
“probability of program success” as the ultimate indicator for program execution
performance. The “probability of program success” indicator moves program
performance measurement beyond looking at the program’s ability to meet constraints.
Rather, it focuses attention on the program’s ability to be successful and on how far the
program should proceed in the acquisition life cycle.
Instead of looking at cost, schedule, and quality/performance measurements as
hard constraints, they could be seen as dynamic factors that change and require updates
throughout the program life cycle. This is consistent with the Knowledge-Based
Acquisition introduced in Section 2.3.3, which focuses on knowledge and certainty about
a program to make better decisions instead of meeting the traditional measurements used
to establish program baselines.
2.5 Probability of Program Success
The Probability of Program Success (PoPS) is a program assessment model
developed by the DoD to support weapons acquisition. It is closely related to DAPS
research and is discussed in this section.
2.5.1 Defense Acquisition University Probability of Success
The DAU first developed Probability of Success, or P(S), at the request of the
Assistant Secretary of the Army for Acquisition, Logistics, and Technology, Claude
Bolton, in 2002. The stated need was to convey a program’s health assessment with a single number (Higbee 2005). Higbee defined Program Success as:
“The delivery of agreed-upon warfighting capability within established resource (e.g.,
funding, schedule, facilities) constraints.” (Higbee 2005)
The DAU Probability of Success framework consists of three levels. The top level
is the Program Success indicator. The second level contains five factors divided into three
internal program factors, 1) Requirements, 2) Resources, and 3) Execution, and two
external factors, defined as 4) Fit in Vision and 5) Advocacy. The third level contains the
metrics under each of the internal and external factors.
P(S) uses a 100-point scale that is allocated to each top-level factor and then to
each sub-factor. The original weights for the top-level factors, based on a large program
in the technology development and prototyping phase, were as follows:
Requirements: 20 points
Resources: 20 points
Execution: 20 points
Fit in Vision: 15 points
Advocacy: 25 points
It should be noted that program advocacy, an external factor, was determined to
be the most important top-level factor in this original research. Also note that 40% of the
points were allocated to external factors.
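To illustrate how the 100-point allocation rolls up into a single number, the sketch below multiplies each factor's allocated points by a notional health rating between 0 and 1 and sums the results; the ratings used are hypothetical and are not taken from the P(S) documentation.

    # Roll-up of the DAU P(S) 100-point scale using the allocations listed above.
    # The factor ratings (0.0 to 1.0) are hypothetical.
    POINT_ALLOCATION = {
        "Requirements": 20,
        "Resources": 20,
        "Execution": 20,
        "Fit in Vision": 15,
        "Advocacy": 25,
    }

    def program_success_score(factor_ratings):
        # Each factor earns its allocation scaled by its rating.
        return sum(POINT_ALLOCATION[f] * r for f, r in factor_ratings.items())

    print(program_success_score({
        "Requirements": 0.9, "Resources": 0.7, "Execution": 0.8,
        "Fit in Vision": 1.0, "Advocacy": 0.6,
    }))  # 78.0 out of 100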
The factors and metrics of the DAU Probability of Success Model are provided
with a color rating commonly used within the DoD to indicate their respective status.
Green indicates On Track, No/Minor Issues. Yellow indicates On Track, Significant
Issues. Red indicates Off Track, Major Issues. Gray indicates Not Rated or Not
Applicable. The DAPS model’s rating scheme for observations of evidence is derived
from this commonly used DoD scheme. This will be further discussed in Chapters 4 and
5.
2.5.2 Naval Probability of Program Success
Since its inception, the DAU’s P(S) has been adapted by each military branch to
assess the health of defense acquisition programs. As of 2012, the latest version of
Probability of Program Success (PoPS) is the Naval Probability of Program Success
(Naval PoPS) Version 2.2. Figure 16 shows the Naval PoPS structure containing four
levels. The top level is the Program Health indicator. The second level contains four
factors: 1) Program Requirements, 2) Program Resources, 3) Program
Planning/Execution, and 4) External Influencers. The third level contains the metrics
under each of the four factors. The fourth and bottom level contains the criteria
assessments under each of the metrics.
Figure 16 Naval PoPS Structure (Adapted from Department of the Navy 2012)
The scoring weight distributions for each factor and metric for Naval PoPS v2 are
provided below in Figure 17. Scoring weights for Naval PoPS v2 evolve throughout the
acquisition lifecycle.
Figure 17 Naval PoPS v2 Factor Scores (Adapted from Department of the Navy 2012)
Each metric’s score is calculated by the evaluation of assessment criteria. A
complete list of criteria and scores is available in the Naval PoPS Guide (Department of
the Navy 2012). An example of the criteria assessment is provided below in Figure 18.
Each criterion is rated according to three colors—red, yellow, or green—representing the
state of the criterion. Red generally represents a criterion not meeting requirements.
Yellow generally represents a criterion only partially meeting the requirements. Green
generally represents a criterion meeting or exceeding the requirements.
Figure 18 PoPS Criteria Example (Adapted from Department of the Navy 2012)
2.5.3 PoPS Analysis
PoPS and Naval PoPS provide a logical framework to assess an acquisition
program. However, the PoPS framework has three major technical issues where
improvements could be made. Discussions are provided below.
2.5.3.1 No Complex Interrelationship Modeling
Currently, the scoring system in Naval PoPS is built bottom up from the criteria assessments, each of which is associated with only one metric and one factor. What if the
criteria actually affected multiple metrics and factors? What if one factor is intimately
aligned with another? One such example is the Program Resources Factor and the
Program Planning and Execution Factor. If the program does not have adequate funding
or personnel, it would be difficult to plan and execute. Another example is that the
Program Advocacy and Fit in Vision metrics fall under the External Influencers Factor,
and yet they are key causes for adequate funding and personnel allocation. This scoring
system does not have the mechanics to model these complex interrelationships.
2.5.3.2 No Temporal Relationships among the PoPS Entities throughout the
Lifecycle
Naval PoPS is meant to be a snapshot of the current progress of the program. The questions in the criteria assessments address current status. The framework does not factor in past scores or how the current scores might affect future ones. But, for example, how well the Initial Capability Document (ICD) is developed does have significant effects not just on the present program status but also on the future development of capability documents, future program planning and execution, and future program status.
2.5.3.3 Does Not Sufficiently Account for Available Performance Measures
Project performance measurements such as Earned Value Management System
(EVMS) or Program Evaluation and Review Technique (PERT) are common decision
support metrics and evidence for program progress and maturity. However, these
performance measures are not sufficiently factored into the PoPS model. They are only accounted for by a criteria-assessment question that asks whether they were applied.
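Since EVMS is cited here as a common source of program performance evidence, the sketch below computes the standard earned-value indices; these are the conventional EVM definitions rather than anything specified in PoPS or DAPS.

    # Standard Earned Value Management indices (conventional definitions,
    # shown only to illustrate the kind of performance evidence discussed).
    def evm_indices(ev, ac, pv):
        # ev: earned value, ac: actual cost, pv: planned value
        return {
            "CPI": ev / ac,   # Cost Performance Index; below 1.0 signals a cost overrun
            "SPI": ev / pv,   # Schedule Performance Index; below 1.0 signals schedule slip
            "CV": ev - ac,    # Cost Variance
            "SV": ev - pv,    # Schedule Variance
        }

    print(evm_indices(ev=80.0, ac=100.0, pv=90.0))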
2.5.3.4 Way Ahead
Although the qualitative framework of Naval PoPS is sound, the quantitative
system for inferential reasoning can be improved to construct complex interrelationships
and temporal/dynamic relationships and to better account for performance measures
already being collected. These three issues identified are also the three top research goals
of the DAPS research listed in Section 1.3.
2.6 Evidential Reasoning and Probability Systems Overview
Evidential reasoning is a key concept of the DAPS model. It is the method used to
analyze the evidence in DBS acquisition and build an inference network oriented around
the hypothesis of program success. This section provides an overview of evidential
reasoning, including a survey of the probability systems commonly used for evidential
reasoning.
2.6.1 Evidential Reasoning
Evidence may be defined as: “a ground for belief; testimony or fact tending to
prove or disprove any conclusion” (Schum 2001). The evidence items within the
framework of a program include the facts, data, and expert assessments that will tend to
support or refute the hypothesis of program success. To better understand the evidence
that emerges, it is beneficial to analyze exactly what kinds of evidence they are. Schum
(2001) has categorized evidence into fifteen categories, outlined below:
Horizontal (evidence) categories:
1. Tangible—Evidence capable of being perceived, especially by the sense of touch;
substantially real; material (Merriam-Webster 2014).
2. Testimonial (unequivocal)—Something that someone says, especially in a court
of law while formally promising to tell the truth (Merriam-Webster 2014), which
is unambiguous.
3. Testimonial (equivocal)—Something that someone says, especially in a court of
law while formally promising to tell the truth (Merriam-Webster 2014), which is
ambiguous.
4. Authoritative Records (accepted facts)—Evidence that is accepted as fact.
Vertical (relevance) categories:
1. Direct (direct relevance)—Evidence that is directly relevant to the hypothesis.
2. Circumstantial (direct relevance)—Evidence that is also directly relevant to the
hypothesis but that requires additional reasoning (inference) to get to the
hypothesis being sought, such as fingerprints at the scene of a crime.
3. Ancillary (indirect relevance)—Evidence about evidence (Schum 2001).
The study of evidence is bound together with studies of inductive reasoning and
probability. Today, we use the term induction to refer to reasoning that provides only some, but not complete, grounds for a conclusion (Schum 2001). Inductive reasoning of
evidence can be bi-directional, either from the evidence to the hypothesis or from the
hypothesis to the evidence. Inductive reasoning from both directions is very useful for
inference. From the evidence to hypothesis, it can be induced how risk, uncertainty, or
knowledge will affect a hypothesis of successful program completion. From hypothesis to
evidence, it can be induced how a hypothesis of good knowledge or lack thereof will
affect the quality of evidence. Bi-directional inductive reasoning is adapted in the DAPS
model.
Is probability based on frequency? Does it have to be? This is answered by
examining the differences between enumerative and non-enumerative probabilities.
Enumerative or aleatory probability is associated with chance or luck, applicable to
events that can be counted. Non-enumerative probability is associated with probability
assessments not based on counting and frequency.
The study of evidence falls under the non-enumerative category, since the
present-day probability assessment of evidence is not countable. Instead, evidence items
are subjectively assessed, often by people. The measurement of the force of evidence is
also another area of study and discussion. There are many ways evidence can be
measured and looked at, similar to how student grades can be measured: percentages,
letter grades, GPA, or narrative assessments. The next section examines four non-enumerative probability systems that can be used to measure the force of evidence and
their applicability to this dissertation’s thesis.
2.6.2 Survey of Non-Enumerative Probability Systems
Schum (2001) has conducted a comprehensive analysis of the major strengths of
four non-enumerative probability systems that could possibly be used to construct the
DAPS model: Bayesian Inference, Dempster-Shafer Belief Function, Baconian
Eliminative and Variative Inference, and Fuzzy Set. They are discussed below as pertains
to the DAPS research.
2.6.2.1 Bayesian Inference
Research on project control and quality control has started adapting Bayesian
inference methods to help improve project forecasting and to enable updates. Examples
of such research can be seen from the doctoral dissertation works by Kim (Kim 2007)
and Yoo (Yoo 2007). The performance of these forecasting models has been mixed, but they have generally performed better than traditional methods such as Earned Value metrics.
A Bayesian approach can construct an extensive inference network to examine the
likelihood of successful program completion and is the best probability system to handle
evidential subtleties and complexities. According to Schum (2001), it has several major
strengths that would benefit the DAPS model:
1. It is able to cope with inconclusiveness in evidence.
2. It is able to cope with dissonance in evidence.
3. It is able to cope with source credibility issues.
4. It is able to capture a wide variety of evidential subtleties or complexities.
5. It has applications in current inference network technologies.
6. It provides a familiar way to express uncertainties.
However, the Bayesian Inference approach does not allow withholding of belief to express imprecise conclusions based on assumptions that cover uncertainties in program evidence or portions of the program not yet completed. The Dempster-Shafer
system, on the other hand, readily allows this kind of non-commitment and imprecision.
2.6.2.2 Dempster-Shafer Belief Function
The Dempster-Shafer belief function is one of the two probability systems
described by Schum (2001) that can readily cope with ambiguity, imprecision, and
judgmental indecisions of evidence. Due to this desirable feature, several researchers
have investigated the use of Dempster-Shafer Belief functions for project/program
management applications, including Taroun and Yang (Taroun and Yang 2011).
Liu et al. (2002) summarize the advantages of the Dempster-Shafer theory as
follows:
1. It has the ability to model information in a flexible way without requiring a
probability to be assigned to each element in a set.
2. It provides a convenient and simple mechanism (Dempster’s combination rule)
for combining two or more pieces of evidence under certain conditions.
3. It can explicitly model ignorance.
4. It rejects the law of additivity for belief in disjoint propositions.
Liu et al. also list the disadvantages of the Dempster-Shafer theory as follows:
1. The theory assumes pieces of evidence are independent, which is not always a
reasonable assumption.
2. The computational complexity of reasoning within the Dempster-Shafer theory
could be one of the major points of criticism if the combination rule is not used
properly.
3. Dempster-Shafer theory only works on exclusive and exhaustive sets of
hypotheses.
The most prevalent issue regarding the use of belief functions is the independence
assumption of evidence. While the Bayesian system can use conditional probabilities to
factor the dependence of evidence, the Dempster-Shafer belief function cannot.
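As a minimal illustration of Dempster's combination rule mentioned above, the sketch below combines two hypothetical, independent bodies of evidence over a simple frame of discernment {S, F} (program success or failure); the mass assignments are invented for illustration only.

    # Dempster's rule of combination for two independent mass functions over the
    # frame {"S", "F"}. Mass may be assigned to {"S", "F"} itself to model ignorance.
    def combine(m1, m2):
        combined, conflict = {}, 0.0
        for a, pa in m1.items():
            for b, pb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + pa * pb
                else:
                    conflict += pa * pb               # mass falling on the empty set
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    m1 = {frozenset({"S"}): 0.6, frozenset({"S", "F"}): 0.4}                         # hypothetical
    m2 = {frozenset({"S"}): 0.5, frozenset({"F"}): 0.3, frozenset({"S", "F"}): 0.2}  # hypothetical
    print(combine(m1, m2))   # belief mass concentrates on {"S"} after combination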
2.6.2.3 Baconian Eliminative and Variative Inference
The Baconian system of probability is based on eliminative experimental tests to
narrow hypotheses down to the correct one. The Baconian system has ordinal properties.
The hypotheses can only be compared one with the other, but cannot be algebraically
combined in any way, although Schum does not believe this is a handicap (Schum 2001).
Developing a DAPS model based on Baconian probabilities is possible and
certainly could be an interesting system. Instead of using subjective ratings to evaluate
the evidence, inductive reasoning would be used to eliminate incompatible hypotheses to
result in a hypothesis that will not be wrong. Nonetheless, such analysis can be
cumbersome and time consuming. In addition, Baconian probability is not strong at coping with ambiguities and indecision, according to Schum (2001).
2.6.2.4 Fuzzy Sets
Fuzzy sets present a very attractive feature in using words to approximate the
force of evidence. According to Schum (2001), it is also strong in coping with
ambiguities and indecisions. Fuzzy sets also provide a way to combine approximate
reasoning to make them useful for further analysis. Furthermore, fuzzy control theory has great potential for formulating automated decision support in program management and systems engineering. Fuzzy sets, however, have the disadvantage of computational complexity. In addition, aggregation of the evidence may not be suitable for producing a reliable overall assessment, according to Tah and Car (Tah and Car 2001).
2.6.2.5 Survey Conclusion
Based on this survey and considering the application, a Bayesian approach was
chosen as the probability system for constructing the DAPS model. Bayesian Inference is
the most fitting system to cope with complexities and complicated interrelationships that
exist in DBS acquisition programs. Although the Dempster-Shafer Belief Function
presents an interesting advantage in the ability to withhold belief, it is much more
difficult to build a complex inference network as compared to the Bayesian Inference
approach. Furthermore, Dempster-Shafer’s independence assumption of evidence simply
would not work well for DAPS’s purposes since evidence items have high degrees of
dependency when they are used to infer a hypothesis of program success.
2.7 Bayesian Network and Knowledge Representation
The Bayesian Inference probability system described in Section 2.6 is further
discussed in this section.
2.7.1 Bayesian Inference
Bayes’ Rule is the foundation of Bayesian Inference. Bayes’ Rule calculates the
Posterior Probability, P(H|E), of a hypothesis conditional on evidence:
P(H|E) = P(E|H) P(H) / P(E)
P(H) is the Prior Probability of Hypothesis (H). P(E) is the probability of Evidence (E).
P(E|H) is the conditional probability of E given H. Bayes Rule uses new evidence E to
update beliefs about a hypothesis H, obtaining the Posterior Probability P(H|E). This rule
is the basis for Bayesian-based inference networks, known as Bayesian Networks. Hoff’s
textbook (Hoff 2009) provides additional information on Bayesian Inference and
statistical methods.
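A small numeric illustration of this update is sketched below; the probability values are hypothetical and are not drawn from the dissertation.

    # Hypothetical application of Bayes' Rule: update belief in a hypothesis H
    # (e.g., "the program will succeed") given one piece of evidence E.
    def posterior(p_h, p_e_given_h, p_e_given_not_h):
        p_e = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)  # total probability P(E)
        return p_e_given_h * p_h / p_e                           # Bayes' Rule

    # Prior P(H) = 0.5; the evidence is more likely when H is true.
    print(round(posterior(p_h=0.5, p_e_given_h=0.8, p_e_given_not_h=0.3), 3))  # 0.727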
2.7.2 Relevance, Dependence, and Causality
The Causal-Influence relationship (CIR) is an important concept in Bayesian
inference. It is defined as the relationship between an event (the cause) and a second
event (the effect), where the second event is understood as a consequence of the first
(Random House Reference 2005).
Pearl sees causality as fundamental to our ability to process information and sees
CIR as a much stronger and more stable relationship than non-causal relationships, such
as those based on only correlation or conditional probability.
“Causal claims are much bolder than those made by probability statements; not only do
they summarize relationships underlying the data, but they also predict relationships that
should hold when the distribution changes… a stable dependence between X and Y… that
cannot be attributed to some prior cause common to both… and is preserved when an
exogenous control is applied to X.” (Pearl 1988)
Koski and Noble, along with Pearl and several other authors, have indicated that
directed acyclic graphs, such as Bayesian Networks, are natural representations of
Causal-Influence Relationships (CIRs) (Koski and Noble 2012).
2.7.3 Graphical Probability Models—Bayesian Networks
A Bayesian Network is a directed acyclic graph with a set of local probability
distributions. The nodes in the graph represent random variables (hypotheses), arcs
represent conditional dependency relationships, and local distributions give conditional
probability distributions for the random variables given the values of their parent random
variables. A Bayesian Network defines a joint probability distribution on its random
variables. (Neapolitan 2004)
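Although the passage above does not spell it out, the standard factorization implied by this definition is that the joint distribution equals the product of each node's local conditional distribution given its parents:

P(X1, X2, …, Xn) = P(X1 | parents(X1)) × P(X2 | parents(X2)) × … × P(Xn | parents(Xn))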
A simple example is provided below in Figure 19.
Figure 19 Simple Bayesian Network Example (Adapted from Anaj, 2006. Retrieved from:
http://upload.wikimedia.org/wikipedia/commons/0/0e/SimpleBayesNet.svg)
For this simple example, the graph represents two possible events that could cause the grass to become wet: the sprinkler or the rain. The nodes represent the events and their possible states (T, F). Directed arrows show the conditional dependencies of the
nodes. Rain does not have any parent and has a prior probability of 0.2. The sprinkler
system has the ability to detect rain and is designed not to turn on when there is rain. The
sprinkler system, however, is not perfect. The sprinkler has probabilities that are
specified as conditional on whether it rains or not. The grass being wet is conditional on
the sprinkler and the rain.
The probability table below the “Grass Wet” node shows the four possible
outcomes of a Grass Wet event. “F-F” indicates that neither the sprinkler nor the rain caused the grass to be wet. In the three other possible outcomes, the grass becomes wet by rain, by the sprinkler, or by both. The probabilities shown in Figure 19 are
specified by the example. Based on the graph and the probability specifications, one can
use Bayesian Network methods to calculate the probability of rain or probability of the
sprinkler turning on, given that the grass is wet.
In this example, whether the grass is wet is the evidential observation being made
to infer the likelihood of it being caused by the rain or the sprinkler. A conditional
probability example to calculate the probability of rain, P(R=T), given that the grass is
wet P(G=T), is provided below:
P(R=T | G=T) = P(G=T, R=T) / P(G=T)
The same principle would be used to construct the DAPS model, assessing the
observable evidence to infer the likelihood of program success.
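A brief enumeration sketch of this calculation is given below. Only the rain prior of 0.2 appears in the text; the remaining conditional probabilities are illustrative placeholders standing in for the values shown in Figure 19.

    # Enumerate the three-node sprinkler network to compute P(R=T | G=T).
    # The rain prior (0.2) is from the text; the other entries are placeholders.
    P_R = {True: 0.2, False: 0.8}
    P_S_given_R = {True: 0.01, False: 0.40}                    # P(Sprinkler=T | Rain)
    P_G_given_SR = {(True, True): 0.99, (True, False): 0.90,   # P(GrassWet=T | Sprinkler, Rain)
                    (False, True): 0.80, (False, False): 0.00}

    num = 0.0   # accumulates P(G=T, R=T)
    den = 0.0   # accumulates P(G=T)
    for r in (True, False):
        for s in (True, False):
            p = P_R[r] * (P_S_given_R[r] if s else 1 - P_S_given_R[r]) * P_G_given_SR[(s, r)]
            den += p
            if r:
                num += p
    print(round(num / den, 3))   # posterior probability of rain given that the grass is wet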
To construct the Bayesian Networks in DAPS, the Netica software package
(Norsys 2010) was used, eliminating the need to custom develop scripts or code to build
the DAPS model. This greatly reduced the amount of work involved. Instead of performing complex mathematical calculations and writing and debugging code, the focus and effort of the research were put on the analysis, formulation, and implementation of the DAPS model.
2.7.4 Knowledge Representation
Knowledge Representation is defined by Smith as follows:
“Any mechanically embodied intelligent process will be comprised of structural
ingredients that a) we as external observers naturally take to represent a propositional
account of the knowledge that the overall process exhibits, and b) independent of such
external semantic attribution, play a formal but causal and essential role in engendering
the behavior that manifests that knowledge.” (Smith 1985)
A Knowledge Representation model is based on experts’ belief about the real
world. Such a model is useful to help combine broad expert opinions into a single view
and model the expert-based decision process. Knowledge Representation is an especially
useful tool to process complex/large sets of information necessary to make sound
decisions, which are difficult for the human brain to logically, holistically, and
structurally process in a repeatable and reliable manner. The DAPS model is a
Knowledge Representation model, constructed to support the DBS acquisition process.
Within the framework of the Observe, Orient, Decide, Act (OODA) decision
cycle, a decision entity would first make observations of the available
information/evidence relevant to the decision, including data/information collected from
sensors. Then, the decision entity would attempt to orient and process the
information/evidence to try to make sense of them. DAPS, through knowledge
representation methods, is the tool developed in this research to help decision entities
orient and process the available information/evidence. Once the information/evidence is
logically and holistically processed and oriented toward the hypothesis being examined,
then the decision entity would be able to make a sound decision based on the evidence.
Finally, based on the decision made, action can be initiated to execute it. A depiction of
the OODA decision cycle and DAPS knowledge representation is provided in Figure 20.
Figure 20 Knowledge Representation in the OODA Loop, (Original adapted from Moran, Patrick E.,
2008. Retrieved from http://upload.wikimedia.org/wikipedia/commons/3/3a/OODA.Boyd.svg)
The systems engineering process to develop the Knowledge Representation model
is referred to as knowledge engineering. The DAPS dissertation constitutes part of the
knowledge engineering life cycle process to develop a Knowledge Representation model
for DBS acquisition. The knowledge elicitation and implementation process to develop
the DAPS model is discussed in Chapter 6.
2.7.5 Related Bayesian Applications
In recent years, there have been several relevant works of research conducted
utilizing Bayesian inference and statistical methods for applications similar to DAPS. In
Section 2.6.2.1, two such works of research were mentioned. Kim investigated using
Bayesian statistical methods to forecast project progress and provide warnings for project
overruns (Kim 2007). Yoo utilized Bayesian methods to develop a decision-making
framework to evaluate and forecast project cost and completion date (Yoo 2007). Kim
and Yoo both utilized Bayesian methods in forecasting and updating cost and time of a
project. In addition, Khayani utilized Bayesian methods to develop a model for
calculating contingency levels to control project or portfolio cost over-runs (Khayani
2011).
Lewis utilized Bayesian methods to develop a quantitative approach to measure
overall system level technical performance as well as system-level technical risk (Lewis
2010). Lewis’s research on technical performance is related to the quality/performance
Knowledge Area in the DAPS research.
Bunting’s research utilized Multi-Entity Bayesian Networks (MEBN) to develop
a method to assess an IT project’s alignment with top-level enterprise initiatives (Bunting
2012). Bunting’s research is most closely aligned with the scope management Knowledge
Area in the DAPS research.
Lastly, Doskey has conducted related research that focuses on the systems
engineering effectiveness in government acquisition of complex information systems
utilizing Bayesian network methods (Doskey 2014). Doskey’s research analyzed the
systems engineering, management, and acquisition elements of an acquisition program
and assessed systems engineering performance at a specified point in time. Doskey’s
research shares many similarities with the DAPS research while also having many
differences. Similarities include the use of Bayesian Networks, using government IT
acquisition as the application area, and the use of experts to develop a model based on
expert specifications. The intent of the research and the issues examined by the two research efforts were similar. However, the models developed as a result of the respective research efforts are noticeably different.
Doskey’s resultant model, the Systems Engineering Relative Effective Index (SE
REI), assesses systems engineering and management “effectiveness” at a specified point
in time. Comparatively, the DAPS model developed in this research measures the
program’s probability of “success” in achieving the current acquisition checkpoint and
predicts the probability of success in achieving future acquisition checkpoints, tailoring
the temporal relationship to the DoD acquisition process. Thus, the top-level
measurement as well as the type of the model is different. SE REI can be described as a
descriptive model while DAPS is a predictive model with prescriptive potential, which
will be discussed in Chapter 7. The resultant Bayesian Network structures vastly differ
between SE REI and DAPS. While SE REI takes an assessment criteria approach similar
to Naval PoPS discussed in Section 2.5.2, DAPS takes an evidence-based approach that
assesses the state of objective quality evidence items (OQEs) from DBS acquisition.
Furthermore, SE REI does not contain complex interrelationships modeling or the
dynamic modeling present in the DAPS model.
3 BAYESIAN NETWORK PROTOTYPE
This chapter presents a prototype Bayesian Network model developed to test the
use of Bayesian Networks for estimating a program’s probability of success. The
prototype is focused on the research goal of complex interrelationship modeling, also
identified as one of the major issues of Naval PoPS discussed in Section 2.5.3. The
resultant prototype model is compared directly to the Naval PoPS output through two
case analyses. The chapter concludes with an assessment of the value of using Bayesian
Networks for this research.
3.1 Prototype Model Framework
The Bayesian Network (BN) Prototype is specifically developed to formulate a
potential solution for the first major issue of PoPS discussed in Section 2.5.3.1, the lack
of complex interrelationships. For direct comparison and analysis, the prototype is based
on the Naval PoPS framework.
The original Naval PoPS framework, as shown in Figure 16, has a 4-level
structure that includes the 1) top PoPS node, 2) the 4 Factor nodes, 3) the 22 metrics
nodes, and 4) criteria nodes for evidential observations, which represent another 29 nodes
just for Gate 1 assessment. For reasons of efficiency for the prototype construction, this is
de-scoped to a 3-level structure containing: 1) 1 top PoPS node, 2) 4 Factor nodes, and 3)
10 metrics nodes. Criteria nodes are excluded from this Bayesian Network Prototype. The
evidential observations for the BN prototype are made at the metrics level instead of the
criteria level. This effectively decreases the number of nodes from 56 to 15 and greatly
decreases the number of probabilities that need to be specified.
The Bayesian Network Prototype reduces the number of metrics nodes by
combining the 10 Program Planning/Execution metrics from Naval PoPS into two higher
level metrics nodes:
• TOC Estimate, Contract Planning and Execution, Acquisition Management, Industry/Company Assessment, and Government Program Office Performance were combined into a “Program Management” metric node. This is reasonable since these five metrics are elements of Program Management.
• Test and Evaluation, Technical Maturity, Sustainment, Software, and Technical Protection were combined into a “Systems Engineering” metric node. This is believed to be reasonable since they are technical elements in the lifecycle management of a system.
All the nodes in the Bayesian Network Prototype are constructed with two states.
The top PoPS node has {True, False} as states while all the other nodes have {Good
Health, Not} as states. The Bayesian Network Prototype structure utilizes the hierarchical
relationship of the Naval PoPS framework as shown in Figure 16 with logical CIRs added
to demonstrate the Bayesian Network’s ability to model complex relationships. All the
arcs are established using CIRs. The relationships are further discussed in Section 3.3,
Description of Nodes and States. Figure 21 below shows the model as implemented in the
Netica Bayesian network package (Norsys 2010).
With no evidence entered, the baseline network yields a Program Success probability of approximately 70.7%.
Figure 21 BN Naval PoPS Prototype Netica Model
3.2 Probability Specification
Each node in the Bayesian Network Prototype, shown in Figure 21, contains a
Conditional Probability Table (CPT) specifying the inferential force of its dependency
relationships in terms of probabilities. For this prototype, the Naval PoPS scoring weights
shown in Figure 17 were used as the basis of the CPT specifications. The following
criteria were established for the conversion of the Naval PoPS scoring weights to CPTs,
which is used again in Chapter 6 for the final DAPS model:
1. Ability to convert arc weights (elicited SME-specified weights or the Naval PoPS
weights) into the CPTs, representing the relative inferential forces of the parent
nodes to the child node.
a. If weights are equal, the probability influence shall be equal
(approximately); ratio of the inferential force would be 1 (or
approximately 1)
b. If the weights are unequal, the probability influences shall reflect the
scoring weights (approximately); ratio of the arcs’ inferential forces shall
reflect the arcs’ weight ratios
2. Ability to be applied repeatedly in the same manner for nodes with different sets
and numbers of arcs
3. The Probabilities must sum to 1.
Using the conversion rule criteria as the guide, the conversion rule to convert the
Naval PoPS weights to CPT is as follows:
• For each node with multiple parents in Figure 21, the Naval PoPS weights from Figure 17 are used as the arc weights. These weights are normalized and used as the arc values, Varc, for CPT calculations. Equation 1 below is used for the Varc calculations.

Equation 1, Arc Value (Varc)

Varc = Warc / Wtotal

o Warc is the arc weight of the factor or metric parent node, obtained from the respective Naval PoPS specified weight from Figure 17 for each factor or metric parent node
o Wtotal is the total sum of all parent arc weights
o Varc is the value of the arc influence from the parent node to the child node

• The influence of the parent node on the child node when the parent node is true is Varc. An additive rule is used to combine the Varcs to calculate the total influence of the parent nodes on the child node, adding the influences of the arcs when the parent is true. If the parent node is false, then the influence of the parent node on its child node would be zero. Equation 2 below represents the additive rule used.

Equation 2, P(Child Node = True | Parent Nodes)

P(Child Node = True | Parent Nodes) = Σ Varc,i, summed over all parent nodes i whose state is True

• Specifically for the four factor nodes in the prototype, if the metrics nodes under a factor node are all false, then the probability assigned for that condition is (Good Health = 1%, Not = 99%). This is done to be more consistent with Naval PoPS’s additive scoring framework. In the original Naval PoPS framework, if the metrics underneath the factor are all false, the factor would get no points toward the Program Success score.
• The CPTs for root nodes are set at 75% Good Health with no observation, reflecting a moderately high likelihood.
• The Systems Engineering node is the only node with a single parent node. The CPT specification for the Systems Engineering node is described in the next section.
By using the conversion rules formulated above, the CPTs for the prototype were
able to meet the conversion criteria established. The resultant Varcs used for the CPT
calculations are reflective of the Naval PoPS weights and converted to probabilities using
Equation 2. The probabilities would sum to 1 for each possible outcome at a node. These
conversion rules can be applied repeatedly in the same manner for all applicable nodes in
the Bayesian Network prototype.
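A compact sketch of the conversion rule is given below. The parent names and arc weights are hypothetical rather than the Figure 17 values, and the all-False case is pinned near zero to mirror the factor-node rule described above.

    # Convert parent arc weights into P(child = True | parent states) per the
    # Section 3.2 rules: normalize the weights (Equation 1) and sum the Varc of
    # parents that are True (Equation 2). Weights below are hypothetical.
    from itertools import product

    def cpt_child_true(arc_weights):
        total = sum(arc_weights.values())
        v_arc = {parent: w / total for parent, w in arc_weights.items()}   # Equation 1
        parents = list(arc_weights)
        cpt = {}
        for assignment in product([True, False], repeat=len(parents)):
            p_true = sum(v_arc[p] for p, state in zip(parents, assignment) if state)  # Equation 2
            if not any(assignment):
                p_true = 0.01         # all-False case pinned to 1% Good Health (factor-node rule)
            cpt[assignment] = p_true  # P(child = False | ...) is the complement
        return cpt

    weights = {"Budget and Planning": 30, "Manning": 20}   # hypothetical arc weights
    for combo, p in cpt_child_true(weights).items():
        print(combo, round(p, 2))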
The next section provides descriptions of the nodes of the Bayesian Network
prototype, their states, dependent relationships, and the respective CPTs built according
to the conversion rule described above.
3.3 Description of Nodes and States
1. Program Success—This is the top-level node that indicates the Probability of
Program Success. It is dependent on the four Factor nodes: 1) Program Requirements,
2) Program Planning/Execution, 3) Program Resources, and 4) External Influencers.
• States: {True, False}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 22
Figure 22 BN Prototype Program Success Node CPT
2. Program Requirements Factor—The Program Requirements Factor node is dependent
on the three metrics nodes under it as well as the Program Management node. The
rationale to include Program Management is that good program management would lead to good program requirements development.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 23
Figure 23 BN Prototype Program Requirements Factor Node CPT
3. Program Resources Factor—The Program Resources Factor node is dependent on the
two metrics nodes under Program Resources.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 24
Figure 24 BN Prototype Program Resources Factor Node CPT
4. Program Planning and Execution Factor—The Program Planning and Execution
Factor node is dependent on the two metrics nodes under Program Planning and
Execution as well as the Program Resources Factor. The rationale is that without
adequate program resources (funding and manning), a program cannot execute.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 25
Figure 25 BN Prototype Program Planning and Execution Factor Node CPT
5. External Influencers Factor—The External Influencers Factor node is dependent on
the three metrics nodes under External Influencers.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 26
Figure 26 BN Prototype External Influencer Factor Node CPT
6. Parameter Status—This is a metric node from Naval PoPS. No dependencies are
constructed for this node. The probability is set at 0.75 if there is no evidential
support.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 27
Figure 27 BN Prototype Parameter Status Factor Node CPT
7. Scope Evolution—This is a metric node from Naval PoPS. No dependencies are
constructed for this node.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 28
Figure 28 BN Prototype Scope Evolution Node CPT
8. CONOPS—This is a metric node from Naval PoPS. No dependencies are constructed
for this node.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 29
Figure 29 BN Prototype CONOPS Node CPT
9. Program Management—This is modeled as a metric node that consolidates several of
the Naval PoPS metric nodes, as discussed in Section 3.1. No dependencies are
constructed for this node.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 30
Figure 30 BN Prototype Program Management Node CPT
10. Systems Engineering—This is modeled as a metric node that consolidates several of
the Naval PoPS metric nodes, as discussed in Section 3.1. There is one dependency on the Program Requirements Factor, a relationship additional to Naval PoPS's hierarchical relationships. The probabilities specified reflect that good Program Requirements provide a sound basis for Systems Engineering execution during the lifecycle of the program, while bad Program Requirements have a catastrophic effect on the program. The probability specifications in Figure 31 below reflect a condition where, given that the Program Requirements Factor is in Good Health, Good Health in Systems Engineering is more likely (~70%), and given that the Program Requirements Factor is Not in Good Health, Systems Engineering is nearly certain (~95%) not to be in Good Health.
• States: {Good Health, Not}
• The CPT specified according to Section 3.2 and the explanation above is shown below in Figure 31
Figure 31 BN Prototype Systems Engineering Node CPT
11. Budget and Planning—This is a metric node from Naval PoPS. There are three
dependencies added that do not exist in the Naval PoPS hierarchical relationships.
Good Program Management can influence the Budgeting and Planning process. Budgeting and Planning is also affected by the program's "Fit in Vision" and "Program Advocacy" external influencers. Thus, if the program fits the wider DoD vision and has many advocates at senior levels, it is more likely to receive the funding requested.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 32
Figure 32 BN Prototype Budget and Planning Node CPT
12. Manning—This is a metric node from Naval PoPS. No dependencies are constructed
for this node.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 33
Figure 33 BN Prototype Manning Node CPT
13. Fit in Vision—This is a metric node from Naval PoPS. No dependencies are
constructed for this node.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 34
Figure 34 BN Prototype Fit in Vision Node CPT
14. Program Advocacy—This is a metric node from Naval PoPS. No dependencies are
constructed for this node.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 35
Figure 35 BN Prototype Program Advocacy Node CPT
15. Interdependencies—This is a metric node from Naval PoPS. No dependencies are
constructed for this node.
• States: {Good Health, Not}
• The CPT built according to the method described in Section 3.2 is shown below in Figure 36
Figure 36 BN Prototype Interdependencies CPT
3.4 Prototype Model Evaluation
Two cases, Sensitivity Test 1 and 2, are formulated in the sections below to
evaluate the performance of the Bayesian Network Prototype to estimate the probability
of program success, using the Naval PoPS outputs as direct comparisons. These two
cases are expanded and reused in Chapter 7 for testing of the final DAPS model.
3.4.1 Case 1—Sensitivity Test 1
Sensitivity Test 1 specifies the 10 metric node observations of the Bayesian
Network Prototype as follows:
1. Good Parameter Status
2. (Scope Evolution had 0 weight for Gate 1)
3. Not Good CONOPS
4. Good Program Management
5. Not Good Systems Engineering
6. Good Budget and Planning
7. Not Good Manning
8. Good Fit in Vision
9. Good Program Advocacy
10. Good Interdependencies
Figure 37 provides the Netica output for Case 1, which calculated a 71.8%
Probability of Program Success in the Bayesian Model. Out of the 10 metric node
observations, 7 are Good while 3 are Not. The Not observations in CONOPS and
Systems Engineering resulted in a low Program Requirements Factor rating of 60.7%.
Systems Engineering’s Not observation also contributed to the low Program Planning and
Execution Factor rating at 54%. Lastly, the Not observation in Manning affected the
Program Resources Factor rating, which then added to the Program Planning and
Execution Factor’s low rating.
Figure 37 BN Prototype Case 1—Sensitivity Test 1 Output
For Case 1, the model results seem reasonable. Bad CONOPS, bad Systems
Engineering, and a lack of staff should mean the program is not making proper progress,
even with adequate funding, good program management, and good external influencers.
Figure 38 below provides the Naval PoPS output based on Case 1 parameters. The
score of 81.8% is significantly higher than the value of 71.8% from the Bayesian
Network prototype.
Figure 38 Naval PoPS Case 1—Sensitivity 1 Output
The Naval PoPS output correctly identifies the troublesome areas as well as areas that need improvement and corrective action. A score of 81.8% seems high for the program described, but a yellow status does provide a good indicator that there are causes for concern. Of the 7 program deliverable metrics in the Program Requirements and Program Planning/Execution factors, 5 are in bad shape or nonexistent while only 2 are on target. In other words, the program has performed much worse than expected. It is arguable that the score of 71.8% from the Bayesian Network Prototype model better reflects the state of the hypothetical program.
The lower Bayesian Network Prototype score is generated using the same Naval PoPS weights converted to probabilities. The lower score is a result of the complex interrelationships established in the prototype model, which simulate the real-world synergetic effects when all elements are considered in an integrated manner. While Naval PoPS's additive scoring framework is not able to model these synergetic effects from complex interrelationships, Bayesian Network models provide a mechanism to do so.
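The contrast can be illustrated with a deliberately small numerical toy, not the prototype's actual CPTs: the weights, observations, and CPT values below are assumptions chosen only to show how a conditional table can penalize a joint condition more heavily than an additive weighted score does.

# Toy comparison, not the prototype's actual CPTs: two metrics feed one factor.
# Additive scoring (Naval PoPS style) vs. a CPT that compounds joint failure.

w = {"A": 0.5, "B": 0.5}                      # hypothetical weights
obs = {"A": 1.0, "B": 0.0}                    # A healthy, B not

additive_score = sum(w[m] * obs[m] for m in w)  # 0.5

# A Bayesian CPT can penalize the joint condition more than the weights alone imply.
p_factor_good = {  # P(Factor = Good | A, B), hypothetical values
    (1, 1): 0.95, (1, 0): 0.35, (0, 1): 0.35, (0, 0): 0.02,
}
bn_result = p_factor_good[(int(obs["A"]), int(obs["B"]))]

print(f"additive score: {additive_score:.2f}, BN factor belief: {bn_result:.2f}")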
3.4.2 Case 2—Sensitivity Test 2
Sensitivity Test 2 specifies the 10 metric node observations of the Bayesian
Network Prototype as follows:
1. Good Parameter Status
2. (Scope Evolution had 0 weight for Gate 1)
3. Good CONOPS
4. Good Program Management
5. Good Systems Engineering
6. Not Good Budget and Planning
7. Not Good Manning
8. Not Good Fit in Vision
9. Not Good Program Advocacy
10. Not Good Interdependencies
Case 2 represents a program whose resources have been completely wiped out and whose external influencers are against it. Figure 39 shows the Bayesian Network Prototype model output. Figure 40 shows the Naval PoPS output. Comparing the two models, the Bayesian Network Prototype provides a score of 46.5% while the Naval PoPS model provides a score of 69.2%. For a program that has no money, no resources, does not fit the enterprise vision, and has no external support, especially in the early stages of acquisition, it should not matter how well the program requirements have been gathered and how well the program has been executed. Insufficient funds, insufficient resources, and insufficient support are high-risk indicators for program success; the program should therefore be considered for shut-down. This again displays Naval PoPS's shortfall of not establishing complex interrelationship modeling among the various factors and metrics. While Naval PoPS is unable to represent the synergetic compounding effects from complex interrelationships, the Bayesian Network Prototype provides a mechanism to do so.
Figure 39 BN Prototype Case 2—Sensitivity Test 2 Output
Figure 40 Naval PoPS Case 2—Sensitivity Test 2 Output
3.5 Bayesian Network Prototype Model Conclusion
Section 2.5.3 analyzed Naval PoPS and identified three major issues. The first was the lack of complex interrelationship modeling. The second was the lack of temporal relationships for PoPS through the acquisition lifecycle; each PoPS assessment and each item of evidence (artifacts and documents, such as capability documents) is assumed independent through time. The third was the inability to incorporate other program performance measurement tools, such as defect rates and the Earned Value Management System (EVMS), into PoPS to support the program success assessment.
The prototype analysis presented in this chapter was focused on the first major
issue identified, developing the interdependencies among the Naval PoPS program
success factors and metrics. The probabilities specified were based on the Naval PoPS
Gate 1 score weights, as shown in Figure 17. From the two cases presented, it is shown
that the cumulative synergetic effects in measuring Probability of Program Success can
be improved by constructing interrelationships between DBS acquisition model elements
using Bayesian Networks.
The Naval PoPS framework is used in this chapter both to present the DoD's current practice for measuring probability of success and to demonstrate how the Bayesian approach can improve that practice. Although many elements of PoPS are adapted into the final DAPS model, its structure and organization are not. Instead, the DAPS model structure is developed from the bottom up through evidential analysis of DBS acquisition and incorporates the best practices and literature discussed in Chapter 2. The rest of this dissertation presents the organization, structure, and analysis of the final DAPS model while examining all three major issues of Naval PoPS and more. Twelve more case analyses are presented in Chapter 7 to support the discussion.
4 EVIDENCE ANALYSIS AND ORGANIZATION
The Defense Business System Acquisition Probability of Success (DAPS) model
is constructed to combine assessments of various evidence items that emerge over the
course of a program to estimate its probability of success. It utilizes Bayesian Networks
to measure the belief in the program success based on evidence, which is continuously
updated throughout the acquisition program lifecycle. This chapter identifies, analyzes,
and organizes the observable evidence items from programs that will tend to support or
refute the hypothesis of program success. This chapter also discusses the evidential
reasoning approach used in the DAPS model.
4.1 Evidence Analysis
Evidence in DBS acquisition programs consists of the available and observable acquisition artifacts, plans, assessments, information, and data that tend to support or refute the hypothesis of acquisition program success. Especially important evidence items are
Objective Quality Evidence items (OQEs) that pertain to program success or failure.
OQE is a term used in the US Navy community and is defined as a statement of fact,
either quantitative or qualitative, that pertains to the quality of a product or service based
on observations, measurements, or tests that can be verified.
In DAPS, only evidence that holds direct relevance to the program success hypothesis, including both Direct and Circumstantial evidence, is considered OQE. Indirect or Ancillary evidence is not considered OQE in DAPS and is not included in the evidence taxonomy used for this research. The OQEs of the DAPS model include information, documentation, and artifacts required by the DoD's DBS acquisition process, as well as select program management best practice artifacts that are verifiable.
This could include acquisition statutory and regulatory requirements, contract solicitation
artifacts, contract deliverables, systems engineering artifacts, and other official DoD
memorandums produced throughout the acquisition lifecycle.
Figure 41 below depicts the categories of evidence as described by Schum (2001),
filled in with select examples of evidence from DBS acquisitions. The green colored
blocks in Figure 41 provide the specific categories of evidence included in the DAPS
model.
Figure 41 DAPS Categories of Recurrent Forms of Evidence (forms of evidence—tangible, testimonial (unequivocal), testimonial (equivocal), missing tangibles or testimony, and authoritative records (accepted facts)—crossed with relevance: direct, circumstantial, and ancillary)
Tangible evidence items are real, material artifacts relevant to the program success hypothesis that are not accepted facts or considered authoritative records. Direct tangible evidence items shown in Figure 41 are DBS acquisition artifacts that are certified and agreed upon by stakeholders and that directly represent the performance of the program. Circumstantial tangible evidence items shown in Figure 41 are DBS acquisition artifacts, agreed upon and certified by stakeholders or experts, that require some reasoning and inference to interpret their representation of the program's performance.
Indirect tangible evidence items are not used in the DAPS research. Examples shown in Figure 41 include laws, regulations, policy, and guidance for acquisition such as the Federal Acquisition Regulation (FAR), the Defense Acquisition Guide (DAG), and the Clinger-Cohen Act for information management. These items are considered influencers on the directly relevant evidence for program success. However, since DAPS draws inferences from the directly relevant evidence in relation to program success, observation of the indirect tangible type of evidence is not necessary. From the graphical modeling perspective, the ancillary evidence becomes conditionally independent of program success once the directly relevant evidence is observed.
Testimonial (unequivocal) evidence items are assessments or analyses performed
by experts during an acquisition that are not ambiguous in their assertion of program
success prediction. These would include any Risk Reports, Technical Review Reports,
Independent Cost Estimates (ICE), Analysis of Alternatives (AoA), and Independent
Logistics Assessments (ILAs), among others. These evidence items could be direct or
circumstantial to the program success hypothesis, although the examples shown in Figure
41 are only for direct evidence. An example of Indirect Testimonial (unequivocal)
evidence could be the Performance History of a contractor, which is ancillary evidence to
observations about the contract signed and the contract performance.
Testimonial (equivocal) evidence items are assessments or analyses that are
ambiguous and open for interpretation. Since only OQEs are used for DAPS, no such
evidence will be used in the research.
Missing tangible or testimonial evidence is an important part of the evidential
reasoning in DAPS. DAPS evidence taxonomy includes the statutory and regulatory
artifacts for DBS acquisition as well as commonly available acquisition program artifacts.
Thus, if something is missing, it may represent a knowledge gap that affects program
success. There are two ways in which DAPS captures this evidence category. First, the
evidence could be expectedly missing, due to not being required by the decision authority
or not being applicable. This expectedly missing evidence node will simply go
unobserved and will not negatively affect the program success measurement. Second, the
evidence could be unexpectedly missing, due to oversight, delay, or other reasons. This
unexpectedly missing evidence node will be accordingly assessed as unacceptable and
will negatively affect the program success measurement.
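A minimal sketch of these two missing-evidence rules is shown below; the helper function and state names are illustrative, not part of the DAPS implementation.

# Minimal sketch of the two missing-evidence rules described above; the state
# names and the helper itself are illustrative, not part of the DAPS implementation.

def evidence_observation(assessment, required):
    """Map an evidence review result to a DAPS observation.

    assessment: the recorded rating, or None if the artifact is missing.
    required:   whether the decision authority expects the artifact.
    """
    if assessment is None:
        if not required:
            return None              # expectedly missing -> leave node unobserved
        return "Unacceptable"        # unexpectedly missing -> counts against success
    return assessment                # otherwise use the recorded rating as-is

print(evidence_observation(None, required=False))   # None (unobserved)
print(evidence_observation(None, required=True))    # Unacceptable
print(evidence_observation("Acceptable", True))     # Acceptable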
Authoritative records (accepted facts) shown in Figure 41 would include many of
the artifacts produced in DBS acquisition that are considered factual, such as the schedule
performance, cost expenditure, and system test results. These are authoritative records
that provide the actual results of program progress and maturity.
In summary, the DAPS model evidence items are OQEs that can be categorized as tangible, unequivocally testimonial, authoritative records (accepted facts), or missing (tangible or testimonial), and all are directly relevant (direct or circumstantial). The DAPS model does not include evidence items that are indirectly relevant (ancillary), nor testimonial evidence that is considered equivocal.
4.2 Evidence Organization
The DAPS evidence taxonomy is organized in two ways in this research. First is by
the subject matter Knowledge Area where the evidence provides information and
inferential force. Second is by the Knowledge Checkpoint where the evidence would be
observed to support decision making. Each item of evidence in each Knowledge Area and
each Knowledge Checkpoint is considered through separate observations and is modeled
as separate evidence nodes in DAPS. The complete evidence taxonomy is provided in
Appendix A.
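One way to picture this organization is as a flat list of (artifact, Knowledge Area, Knowledge Checkpoint) records, each of which becomes its own evidence node; the record layout and example entries below are hypothetical.

# Hypothetical record layout for one taxonomy entry; field names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceItem:
    artifact: str            # e.g., "EVMS output" or "Risk Report"
    knowledge_area: str      # one of the seven Knowledge Areas
    checkpoint: str          # one of the fifteen Knowledge Checkpoints

# One artifact can yield several evidence nodes, one per Knowledge Area/Checkpoint pair.
taxonomy = [
    EvidenceItem("EVMS output", "Cost Management", "CDR"),
    EvidenceItem("EVMS output", "Time Management", "CDR"),
    EvidenceItem("Risk Report", "Quality Management", "CDR"),
]
print(len({(e.artifact, e.knowledge_area, e.checkpoint) for e in taxonomy}))  # 3 nodes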
4.2.1 Evidence by Knowledge Area
The Knowledge Areas are used in DAPS both to organize evidence and as the intermediate building blocks for constructing the complex interrelationships within the model. The DAPS Knowledge Areas are derived from the PMBOK Knowledge Areas (PMI 2008), diverging from the purely program management discipline to include the systems engineering and technical process aspects of Defense acquisition. There are seven Knowledge Areas in DAPS. Detailed discussions of the DAPS Knowledge Areas are provided in Chapter 5, Model Structure.
The seven Knowledge Areas are divided into Measurable Knowledge Areas,
which contain direct evidence of program success, and Enabling Knowledge Areas,
which contain circumstantial evidence of program success. Evidence categorization
definitions and specific applications were discussed in Section 2.6 and Section 4.1.
Enabling Knowledge Areas are also referred to as Indirect Knowledge Areas in this
dissertation. These Knowledge Areas are indirect in the sense that inferences about program success require an extra step; they do not have direct CIRs to program success.
These Knowledge Areas do not contain ancillary evidence considered to have indirect
relevance to the program success hypothesis. Ancillary evidence is not included as part of
the DAPS evidence taxonomy. Rather, the Enabling (Indirect) Knowledge Areas contain
circumstantial evidence items, which are directly relevant to the program success
hypothesis. There are four Measurable (Direct) Knowledge Areas and three Enabling
(Indirect) Knowledge Areas:
Measurable (Direct) Knowledge Areas:
1. Time Management Knowledge Area
2. Cost Management Knowledge Area
3. Quality Management Knowledge Area
4. Scope Management Knowledge Area
Enabling (Indirect) Knowledge Areas
1. Procurement Management Knowledge Area
2. Systems Engineering Management Knowledge Area
3. General Management Knowledge Area
A table of sample DAPS evidence organized by the seven Knowledge Areas is shown in
Figure 42 below.
Figure 42 Sample of Evidence Taxonomy by Knowledge Area (columns for the Enabling (Indirect) Knowledge Areas—General Management, Systems Engineering, Procurement—and the Measurable (Direct) Knowledge Areas—Scope, Cost, Time, Quality—listing sample evidence items under each)
Each evidence item is modeled as a separate evidence node in DAPS. Artifacts that provide information about more than one Knowledge Area are treated as separate instances of evidence and are modeled as separate evidence nodes. Such
evidence includes output from the Earned Value Management System (EVMS), a data
source that provides evidence for program progress and certainty in two Knowledge
Areas, cost and time. A second example is the Risk Report, which often provides risk
assessment in specific Knowledge Areas.
4.2.2 Evidence by Knowledge Checkpoint
The Knowledge Checkpoints are the technical reviews and milestone reviews
within the DBS acquisition process. At each of the Knowledge Checkpoints, DoD has
defined the required or recommended artifacts/documentation to support decision
making. The DAPS evidence taxonomy is thus also organized by Knowledge
Checkpoint, a temporal way of organizing the DAPS evidence items.
DAPS contains fifteen Knowledge Checkpoints. They are discussed in detail in
Chapter 5, Model Structure. A brief overview and sample evidence items are provided
below:
1) Material Development Decision (MDD)—Problem Statement and Business
Process Re-engineering used to decide whether to develop the system being
proposed; decision memo for material development.
2) Initial Technical Review (ITR)—Cost Analysis Requirements Document (CARD)
and Analysis of Alternatives (AoA) Study Plan setting the scope and boundary of
cost analysis as well as Analysis of Alternatives (AoA).
3) Alternative System Review (ASR)—Analysis of Alternative (AoA); cost
estimates.
4) Milestone A (MSA)—Business Case; Program Charter; Systems Engineering
Plan and other technical plans.
5) System Requirements Review (SRR)—High-level requirements such as
Architecture artifacts, requirements specifications, as well as other progress
evidence such as prototype performance, risk reports, technical review report.
6) System Functional Review (SFR)—System functionality documentation such as
functional requirements specifications, architecture artifacts, and other technical
planning artifacts, as well as procurement artifacts such as the procurement
request.
7) Pre-Engineering Development Review (PreED)—Primarily the Request for
Proposal (RFP); Business Case; progress for Milestone B documents.
8) Milestone B (MSB)—Evidence to start system development and contract
execution, contract solicitation, and award progress; Systems Engineering Plan
(SEP), Business Case, Program Charter, and Acquisition Program Baseline (APB).
9) Preliminary Design Review (PDR)—Evidence of government and contractor
integration—agreement on the government baselines.
10) Critical Design Review (CDR)—Evidence of system design completion such as
systems design document (SDD), Interface Design Document (IDD), and
completion of architecture development; sub-system level development
completion and test reports.
11) Test Readiness Review (TRR)—Evidence of readiness to test such as completion
of contractor tests, completion of design documents, completion of test cases and
scripts, and test environment.
12) Milestone C (MSC)—Evidence of completion of system development—
verification of the system; continued validity of need for use based on the system
developed—Updated Business Case review.
13) Production Readiness Review (PRR)—Evidence showing the readiness to start
initial system go-live and deployment such as test reports, user training
completion, and network performance.
14) Initial Operating Capability/Full Deployment Decision (IOC)—Evidence showing
completion of initial deployment; readiness and need to start full deployment of
the system.
15) Full Operating Capability (FOC)—Evidence showing the system in a steady state
with completion of deployment; steady state of user counts as well as defects
being reported.
The associations of the evidence items to the fifteen Knowledge Checkpoints are outlined
in Appendix A.
4.3 Evidential Reasoning
Evidential reasoning regarding the hypothesis of program success is conducted
against the program baseline, which is often captured in the Acquisition Program
Baseline (APB). The risks and progress of a program are measured against the baseline.
This section discusses the DAPS approach to incorporating program risk and program
progress into the DAPS model, the approach for differentiating the inferential forces of
evidence, and the approach for the observations of evidence.
4.3.1 Program Risk
A risk can be defined as an uncertainty that may lead to future loss. A risk has
three components (DAU 2006):
• A future root cause (yet to happen), which, if eliminated or corrected, would prevent a potential consequence from occurring,
• A probability (or likelihood) of that future root cause occurring that is assessed at the present time, and
• The consequence (or effect) of that future occurrence.
Risk is the anticipated uncertainty, or the known unknowns in the plan or baseline
established for execution. This is considered against the knowledge certainty of a
program. Risk is an important factor in inferring the potential for program success. The
DAPS model takes account of program risk in three ways.
First, risk assessment is included as part of the observation of each evidence
(item) node in the DAPS model. The evidence nodes operate similarly to sensors of a
Command, Control, Communications, Computers, Intelligence, Surveillance, and
Reconnaissance (C4ISR) network, providing information back to the Knowledge Area
node the evidence node is organized under. The Knowledge Area node accumulates the risk observations from all the evidence nodes into a combined risk assessment of the Knowledge Area and provides input to the program success measurement at the Knowledge Checkpoint.
Second, although Risk Reports are typically recorded as a single artifact, in the
DAPS model they are separated into four individual evidence nodes for observation of
the four Direct Knowledge Areas: Cost, Time, Quality, and Scope. This is possible since
the Risk Report commonly provides risk assessment in each of these Knowledge Areas
separately, each acting as an individual sensor to the four Measurable Knowledge Areas.
Lastly, the Risk Management Plan that documents the program’s processes for
risk assessment, analysis, mitigation, and tracking is modeled as another evidence node
under the Systems Engineering (SE) Knowledge Area. This allows evaluation of the Risk Management
process as part of the evidential reasoning regarding program success.
4.3.2 Program Progress
The progress of the program can be categorized as authoritative records or
accepted facts in the assessment of evidence. Program progress is commonly measured as
Earned Value Metrics in terms of Cost Performance Index (CPI) and Schedule
Performance Index (SPI). SPI is the dollar value of the work performed divided by the dollar value of the work planned, and CPI is the dollar value of the work performed divided by the actual cost of that work. A value of CPI or SPI greater than 1 indicates the program is performing better than planned in schedule or cost, respectively; a value less than 1 indicates the opposite. In short, a value greater than 1 is good, while a value less than 1 is not good.
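The sketch below computes the two standard earned-value ratios; the dollar figures are made up for illustration.

# Standard earned-value ratios; values are made up for illustration.
def cpi(earned_value, actual_cost):
    """Cost Performance Index: budgeted value of work performed / actual cost."""
    return earned_value / actual_cost

def spi(earned_value, planned_value):
    """Schedule Performance Index: value of work performed / value of work planned."""
    return earned_value / planned_value

ev, ac, pv = 900_000, 1_000_000, 950_000
print(f"CPI = {cpi(ev, ac):.2f}, SPI = {spi(ev, pv):.2f}")  # both < 1: behind plan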
The progress of system product performance is the hardest to evaluate. It is normally evaluated by testing to verify the system product performance against specifications. However, because of cost constraints, testing is often done with selected cases rather than all of them. Technical or quality performance progress can be measured by the percentage of requirements met, not met, and not tested. Such an arrangement does not capture the fact that some requirements are harder to satisfy than others, but it is the only way an authoritative record can be obtained for performance that is consistent with the cost and schedule measurements.
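A simple tally of requirement verification status, as described above, might look like the following; the status labels and counts are illustrative.

# Simple tally of requirement verification status, as described above;
# the status labels are illustrative.
from collections import Counter

statuses = ["met", "met", "not met", "not tested", "met"]   # hypothetical test results
counts = Counter(statuses)
total = len(statuses)
for status in ("met", "not met", "not tested"):
    print(f"{status}: {100 * counts[status] / total:.0f}%")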
As a program progresses, the observations and inferences from evidence require
updates to incorporate newly available information. This update is accomplished in
DAPS by observing a new set of evidence nodes at the present Knowledge Checkpoint.
The idea is not to update the previous evidence nodes observed, but to use evidence
nodes at the present Knowledge Checkpoint to update the assessment of the Knowledge
Area. Within the DAPS model, this is achieved by constructing dynamic relationships
among Knowledge Area nodes throughout the DBS acquisition life cycle. For the same
Knowledge Area, the prior Knowledge Area node will have a CIR arc onto the next
Knowledge Area node. This is further discussed in Chapter 5.
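The sketch below illustrates the effect of such a dynamic CIR arc: belief in a Knowledge Area carried from one Knowledge Checkpoint to the next through an assumed transition probability. The transition values are placeholders, not the Chapter 6 specifications.

# Sketch of the dynamic CIR between successive Knowledge Area nodes; the
# transition probabilities are hypothetical, not the Chapter 6 specifications.

def next_ka_belief(p_good_prev, p_good_given_good, p_good_given_marginal):
    """Propagate belief in 'Good' from one Knowledge Checkpoint to the next."""
    return (p_good_prev * p_good_given_good
            + (1.0 - p_good_prev) * p_good_given_marginal)

p_good = 0.80                      # belief in the Cost KA at the prior checkpoint
p_good = next_ka_belief(p_good, p_good_given_good=0.90, p_good_given_marginal=0.30)
print(f"prior belief carried forward: {p_good:.2f}")
# New evidence observed at the present checkpoint then updates this belief further.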
4.3.3 Inferential Force of Evidence
The inferential force of evidence varies in DBS acquisition. In reality, it is
possible that each individual evidence item has a different inferential force on the
hypothesis of program success. However, for reasons of knowledge elicitation
efficiencies with SMEs, a more streamlined approach was taken. Instead of specifying a separate table for each of the 258 evidence nodes, a single typical Conditional Probability Table (CPT) was used for all evidence nodes.
The differentiation of the inferential forces of evidence is accomplished through
the network structure and the CIRs among Knowledge Areas and the Knowledge
Checkpoints, rather than at the individual evidence level. The CIRs between the evidence
and Knowledge Area are considered typical and equal for all such relationships. This is
based on the assumption that each Objective Quality Evidence (OQE) in the DAPS
evidence taxonomy is solid and provides quality evidence toward the program success
hypothesis, and that all of them have approximately the same inferential force with
respect to the Knowledge Areas they are grouped under. As an end result, all evidence
under a Knowledge Area would have equal inferential force. However, evidence under
different Knowledge Areas would have different inferential forces due to the network
structure of the Knowledge Area nodes and the Knowledge Checkpoint nodes. Chapters 5
and 6 further discuss DAPS’s model structure and probability specifications.
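In code form, the streamlined approach amounts to one shared table reused by every evidence node, as sketched below; the probability values are placeholders rather than the Chapter 6 specifications.

# One shared, placeholder CPT P(Evidence | Knowledge Area) reused for every evidence
# node, as described above; the numbers are illustrative, not the Chapter 6 values.

SHARED_EVIDENCE_CPT = {
    # KA state -> distribution over (Outstanding, Acceptable, Unacceptable)
    "Good":     (0.30, 0.60, 0.10),
    "Marginal": (0.05, 0.45, 0.50),
}

def evidence_distribution(ka_state):
    return SHARED_EVIDENCE_CPT[ka_state]

# Every evidence node under every Knowledge Area reuses the same table; differences
# in inferential force come from where the node sits in the network, not from its CPT.
print(evidence_distribution("Good"))
print(evidence_distribution("Marginal"))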
4.3.4 Observations of Evidence
DBS acquisition artifacts, program information, and deliverables are typically
reviewed, assessed, or approved by a designated authority. In some instances, the
available observations of evidence would be rated as Acceptable or Unacceptable for
approval. In other instances, there may be more comprehensive assessments and ratings.
Often in the Department of Defense, a three-level color rating system is used: green for
ready or low risk, yellow for caution or moderate risk, and red for not ready or high risk.
The DAPS model utilizes these reviews, assessments, and approval records as
observations of evidence for the model. The DAPS model does not require additional
assessments.
The DAPS model has constructed a comprehensive evidence taxonomy for DBS
acquisition in the form of observable evidence nodes. Depending on the situation and
state of the program, some evidence nodes could go unobserved due to being expectedly
missing. If evidence is unexpectedly missing, the observation of evidence would be rated
as Unacceptable or Red. Chapter 5 further discusses how the evidence items are used as
observation nodes in the model structure. Chapter 7 presents the case analyses that will
demonstrate the use of the model based on the observations of evidence.
5 MODEL STRUCTURE
The Defense Business System Acquisition Probability of Success (DAPS) model
is developed to support the Research Hypothesis stated in Section 1.3:
“A DBS Program’s Probability of Success can be reliably and repeatedly measured
and predicted, in a structured manner, through the collective inference of available
evidence (data, reports, plans, and artifacts produced during acquisition execution)
using Bayesian Networks, and can be used to support acquisition decision making.”
This chapter presents the structure of the DAPS model. It starts by introducing the underlying foundations of the DAPS model in Section 5.1. Section 5.2 describes the types of DAPS model nodes, and Section 5.3 describes the types of DAPS model arcs. Finally, Section 5.4 summarizes the DAPS model structure with an overview of the complete model.
5.1 Model Foundations
The DAPS model structure is based on the concept of Knowledge-Based
Acquisition described in Section 2.3.3. The concept centers around attaining a certainty
level at checkpoints prior to moving on to the next phase of the program, in order to make
informed decisions and reduce the risk of the program going forward. Under Knowledge-Based Acquisition, it is critically important for the program to provide evidence of knowledge at the Knowledge Checkpoints of Technical Reviews and Milestone Reviews.
The static model of DAPS is based on three types of nodes to infer the knowledge
certainty achieved, as depicted in Figure 43 below.
Figure 43 DAPS Knowledge Inference Structure
The Knowledge Checkpoint is the top-level node that accumulates all information about the acquisition program at that decision point and measures the Probability of Success. It supports decision makers in deciding whether to move on to the next phase of the acquisition program. Knowledge Checkpoints are modeled as leaf nodes: they have no child nodes and have Direct Knowledge Area Nodes as parent nodes. The arcs of
the Direct Knowledge Area Nodes to the Knowledge Checkpoint Node model the CIRs
of the knowledge certainty level attained in the Direct Knowledge Area that affects the
measure of success or failure at the Knowledge Checkpoint.
Knowledge Areas are the second-level nodes that measure the knowledge
certainty level attained for that particular subject matter area at the Knowledge
Checkpoint. They contain both the Direct and Indirect Knowledge Areas mentioned in
Section 4.2. Direct Knowledge Areas are Knowledge Areas that directly affect program
success in DAPS. Indirect Knowledge Areas are Knowledge Areas that affect program
success in DAPS indirectly. The DAPS Model uses this second level to model the
complex interrelationships and effects within DBS acquisition, and also to combine the
observations of various evidence in the subject matter areas. The arcs among the
Knowledge Area Nodes, shown in Figure 43, model the CIRs between different
Knowledge Areas. The arcs from Knowledge Area Nodes to Evidence Nodes model the
CIR of how the state of knowledge [Good, Marginal] in the Knowledge Area affects the
outcome observed via the Evidence [Outstanding, Acceptable, Unacceptable]. Not shown
in Figure 43 are dynamic CIRs that model the effect of knowledge in Knowledge Areas
through time, from one Knowledge Checkpoint to the next.
The third and bottom level is the Evidence Nodes—the observation nodes in the
DAPS model. Observations of the evidence are entered here in this level to build the
inference of program success. The only CIRs for this level are the arcs from Knowledge
Area Nodes to Evidence Nodes described in the above paragraph.
The CPT of a node in the Bayesian Network quantifies how likely the node is to
be in certain states given the values of its parent node(s). The marginal probability of the
node then summarizes the overall belief in a certain state, incorporating uncertainty about
the state of the parent(s).
For Knowledge Checkpoint Nodes, the overall likelihood of Program “Success”
or “Failure” is measured.
Knowledge Area Nodes represent knowledge certainty in the subject matter area.
The two node states are Good and Marginal. The probabilities for the Knowledge Area Nodes measure the likelihood that the program has attained Good knowledge in the subject matter Knowledge Area, as opposed to Marginal or worse.
For Evidence Nodes, the probabilities measure the likelihood of the evidence
items being Outstanding, Acceptable, or Unacceptable. Since these are the observation
nodes, one of the states is chosen to describe the real-world observation. These
observations provide information to the parent Knowledge Area Nodes, which update the
belief in the state of the subject matter (knowledge) areas.
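A minimal numeric sketch of this three-level inference is shown below, using Bayes' rule to update a Knowledge Area belief from a single evidence observation and then propagating that belief to a Knowledge Checkpoint. All probabilities are placeholders chosen for illustration, not the DAPS probability specifications.

# Minimal numeric sketch of the three-level inference: all probabilities below are
# placeholders chosen for illustration, not the DAPS probability specifications.

p_ka_good = 0.75                                    # prior belief: KA is Good
p_e_given = {"Good": {"Acceptable": 0.60, "Unacceptable": 0.10},
             "Marginal": {"Acceptable": 0.45, "Unacceptable": 0.50}}

obs = "Unacceptable"                                # observed evidence state
num = p_ka_good * p_e_given["Good"][obs]
den = num + (1 - p_ka_good) * p_e_given["Marginal"][obs]
p_ka_good_post = num / den                          # Bayes' rule at the KA node

# The updated KA belief then feeds the Knowledge Checkpoint through its CPT.
p_kc_success = (p_ka_good_post * 0.90               # P(Success | KA Good), assumed
                + (1 - p_ka_good_post) * 0.40)      # P(Success | KA Marginal), assumed
print(f"P(KA Good | evidence) = {p_ka_good_post:.2f}, "
      f"P(Checkpoint Success) = {p_kc_success:.2f}")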
The DAPS model adapts several of the specific adjectives and definitions used in
DoD acquisition source selection (DoD 2011) to describe DAPS node states. The
definitions of these states are further discussed in the following sections.
5.2 Model Nodes
As discussed in Section 2.7.3, nodes in the Bayesian Networks represent
hypotheses being examined. There are three types of nodes in the DAPS Model:
• Knowledge Checkpoint Nodes (KC)—total of 15 nodes
• Knowledge Area Nodes (KA)—total of 105 nodes
• Evidence Nodes (E)—total of 258 nodes based on 62 different kinds of evidence items
The model nodes are used in conjunction with the model arcs to represent the
DBS acquisition system to predict future program success based on the evidence
produced thus far. The three types of DAPS model nodes are further discussed in sections
below.
5.2.1 Knowledge Checkpoint Nodes
Knowledge Checkpoint (KC) Nodes are the program success indicators at certain
stages of the acquisition process. Fifteen Knowledge Checkpoint Nodes are used in the
DAPS Model, based on the Systems Engineering Technical Reviews and DBS
Acquisition Process Milestone Reviews (Defense Acquisition Guide). Figure 44 below
depicts the fifteen Knowledge Checkpoints for DAPS.
Figure 44 Business Capability Lifecycle Model - 15 Knowledge Checkpoints
DBS and defense acquisition processes in general use a linear event–sequenced
approach for oversight and review. Within the Knowledge-Based Acquisition
methodology, this is necessary to increase certainty about the program for the decision
makers prior to making a decision whether to proceed or not. All Knowledge Checkpoint
Nodes have two adjectival rating states:
• Success—Knowledge necessary for Program Success has been thoroughly achieved at this Knowledge Checkpoint and the program is ready to proceed to the next phase.
• Failure—Otherwise
Full Operating Capability (FOC) is the final KC and thus the ultimate indicator of Program Success. Program Success is defined in DAPS as meeting Program Cost, Program Time, and Program Quality goals derived from a clearly defined Program Scope. These are the Measurable Knowledge Areas of the DAPS model.
Descriptions of the 15 Knowledge Checkpoints in DBS acquisition are provided in the following sections.
5.2.1.1 Material Development Decision (MDD)
The Material Development Decision (MDD) is part of the Milestone Review
process. It is a Knowledge Checkpoint where the Investment Review Board (IRB) and
the Milestone Decision Authority (MDA) review the supporting elements of the proposed
program to determine whether to develop the system. Examples of the supporting
elements include the Problem Statement and Business Process Re-engineering (BPR)
artifacts.
As of 2012, the IRB in the Business Capability Lifecycle (BCL) process used for DBS acquisitions is the DBS Management Committee (DBSMC), led by the Deputy Chief Management Officer (DCMO), which conducts oversight of IT investments and authorizes funds execution.
The MDA, a common role within DoD acquisitions, is the decision maker
responsible for the acquisition process. The MDA can make decisions on everything
affecting the acquisition process, including acquisition strategy, contracting type,
review/oversight/reporting requirements, and the ultimate ability to continue the program
or shut it down.
5.2.1.2 Initial Technical Review (ITR)
The Initial Technical Review (ITR) is a technical review used in the systems
engineering process to initiate the scope and requirements for a cost estimation basis. One
of the key items of evidence for this review is the Cost Analysis Requirements Document
(CARD). At ITR, the programs also often set up the parameters for the Analysis of
Alternatives (AoA) study.
5.2.1.3 Alternative System Review (ASR)
The Alternative System Review (ASR) is a technical review used in the systems
engineering process to make a selection of possible alternatives.
5.2.1.4 Milestone A (MSA)
Milestone A (MSA) is the first official Milestone Review in the acquisition
process, checking program progress for the determination to proceed or shut down.
Milestone A requires both the Investment Review Board (IRB) review as well as the
Milestone Decision Authority (MDA) review. Milestone A’s intent is to certify the cost,
schedule, and scope/performance baselines of the program.
5.2.1.5 System Requirements Review (SRR)
The System Requirements Review (SRR) is a technical review in the systems
engineering process used to establish the overarching scope and the system performance
specification baseline, such as the Key Performance Indicators (KPI) and Key System
Attributes (KSA). This could be before or slightly after the start of prototype
development in the BCL process.
5.2.1.6 System Functional Review (SFR)
The System Functional Review (SFR) is a technical review in the systems
engineering process used to establish the functional baseline of the system, sometimes
based on the prototype in development. It determines functionally what the system would
do and what business processes the IT systems will automate or support. This review
establishes the system functionalities required as part of the Request For Proposal (RFP).
5.2.1.7 Pre-Engineering Development Review (PreED)
The Pre-Engineering Development Review (PreED) is a review held with the
MDA prior to the release of the RFP. The review is meant to finalize “how” the
acquisition will be executed, including decisions on contract type, Statement of Work
(SOW), system specifications, and source selection criteria.
5.2.1.8 Milestone B (MSB)
Milestone B (MSB) is part of the acquisition process performed by the Investment
Review Board (IRB) and the Milestone Decision Authority (MDA) prior to awarding a
contract and starting the execution of the contract/program. Milestone B would occur
sometime in between the release of the RFP and awarding of the contract. Milestone B is
a very important Knowledge Checkpoint that commits/obligates the government to
develop the system. The validity of the need to develop the system is captured in the
Business Case.
5.2.1.9 Preliminary Design Review (PDR)
The Preliminary Design Review (PDR) is a step in the systems engineering
process for the prime contractor and the government to form an integrated plan,
expectations, and baseline. From this point forward, it is no longer just the belief and
understandings from the government side contributing to the overall program knowledge
certainty, but also the contractors’. This would produce more varied opinions, and
sometimes dissonant evidence due to more chances for misunderstanding,
miscommunication, untruthfulness, and conflicting priorities.
5.2.1.10 Critical Design Review (CDR)
The Critical Design Review (CDR) is part of the systems engineering process
during the middle of the development process that ensures the complete agreement and
comprehensiveness of the system design prior to full commitment to realizing the design.
For IT systems, this is often when the sub-level systems are completed and ready to start
full integration, testing, and correction at the system level. Consequently, there should be
a high degree of belief that the system is designed to meet the required specifications. From
this point forward, the work is to make the entire system work as a whole, which is
perhaps the most difficult part of IT development.
5.2.1.11 Test Readiness Review (TRR)
The Test Readiness Review (TRR) is the part of the systems engineering process
that ensures the program has accomplished everything necessary prior to official
government acceptance testing of the system. This is an important step to ensure that
testing can be performed successfully. This is not a difficult step for IT systems
development if Knowledge-Based Acquisition is used with many Knowledge
Checkpoints and constant two-way communication between the government and
contractor. However, if that is not the case, TRR could prove to be very difficult.
5.2.1.12 Milestone C (MSC)
Milestone C (MSC) is the part of the acquisition process that certifies the completion of system development and serves as the decision point for the Investment Review Board (IRB) and Milestone Decision Authority (MDA) regarding production of the system. This is another essential Knowledge Checkpoint to determine whether the system should enter the acquisition phase for production and deployment. Much can influence the decision, including whether the system can perform as intended and whether the system is still needed. The problem or the need might have changed since the start of the acquisition program. The evidence certifying the continued need for the program is the Business Case at Milestone C, which is also a piece of Milestone B evidence.
5.2.1.13 Production Readiness Review (PRR)
The Production Readiness Review (PRR) is a part of the systems engineering
process that ensures that the program has completed all facets of testing, preparation, and
user training for the system to go live for limited deployment by a smaller group of users,
not the full user group.
5.2.1.14 Initial Operating Capability/Full Deployment Decision (IOC)
The Initial Operating Capability (IOC) and Full Deployment Decision (FDD) together form a Knowledge Checkpoint that is part of both the acquisition process and the systems engineering process. It certifies successful deployment of the system in a limited capacity and readiness to proceed to full deployment to all users.
5.2.1.15 Full Operating Capability (FOC)
Full Operating Capability (FOC) is a part of the acquisition process as well as the
systems engineering process that certifies successfully reaching a steady state in the use
of the system. FOC officially moves the acquisition program into the Operation and Sustainment phase.
5.2.2 Knowledge Area Nodes
The Knowledge Area (KA) Nodes are derived from the Project Management
Body of Knowledge’s (PMBOK’s) nine Knowledge Areas (PMI 2008) and the Defense
Acquisition Systems Engineering processes (Defense Acquisition Guide). These
Knowledge Areas are an important part of the DAPS model. They are used to model the
complex and dynamic interrelationships of the DBS Acquisition, organize the evidence of
acquisition, measure the performance in specific subject matter Knowledge Areas, and
provide information to Knowledge Checkpoint Nodes for the measurement of program
success. The seven Knowledge Areas are further defined into Measurable (Direct) and
Enabling (Indirect) Knowledge Areas to Program Success.
The Measurable (Direct) Knowledge Areas to Knowledge Checkpoint are:
1. Time Management—Schedule Plan, Schedule Progress, Schedule Performance,
Earned Value Schedule Metrics
2. Cost Management—Cost Estimate, Cost Expenditure, Cost Performance, Earned
Value Cost Metrics
3. Scope Management—Scope of program: Objectives, Goals, BPR, Requirements
and Specifications, Statement of Work
4. Quality Management—Product Performance, Defects, Product Verification,
Validation, Acceptance, Product Supportability (ILS elements), Data Deliverable
Direct Knowledge Areas in DAPS are considered directly measurable; the effects of the
Knowledge Area can be directly quantified and are considered quantifiable measures of
final program success outcome.
The Enabling (Indirect) Knowledge Areas to Knowledge Checkpoint are:
1. Procurement Management—Acquisition Planning and execution, contract
solicitation, contract terms, software licensing agreements
2. Systems Engineering Management—Defense Acquisition Guide (DAG)’s
Systems Engineering Technical Management Processes, including Project
Integration and Project Risk of the PMBOK Knowledge Areas, as well as DAG’s
Systems Engineering Technical Processes
3. General Management—Staffing and Human Resources management, Communication, environmental management, budgeting and funding, Program Management Plan, Program Charter
Indirect Knowledge Areas in DAPS are not considered directly measurable to program
success; the effects of the Knowledge Area are not easily quantifiable and are not
commonly used as measures of final program success outcome.
All KA Nodes have two adjectival rating states:
• Good—Thorough understanding and demonstrated performance of the Knowledge Area at this Knowledge Checkpoint, with only minor risks and uncertainties to Program Success identified in the Knowledge Area at this Knowledge Checkpoint.
• Marginal—Otherwise
Detailed descriptions of the seven Knowledge Areas are provided in the following
sections.
5.2.2.1 Time Management Knowledge Area
The Time Management Knowledge Area, or “Time KA” for short, is the subject
matter area of DBS acquisition regarding the time duration management of the program.
The Time KA is a Direct Knowledge Area in DAPS, considered a direct, finite, and
quantifiable measure of success. Time duration of the program is often used as a key
decision factor. If the program still has not been successfully completed long after the original projected completion date or required date, the program might be deemed a failure and be shut down.
The Plan of Action and Milestones (POA&M) and program schedule are two of
the most used tools for planning, progress tracking, and integration. The Earned Value Management System (EVMS), if used, is also an important item of evidence for the Time KA.
With increasing complexity, time can become a very difficult metric to estimate and track. Thus, artifacts demonstrating sound time management practice, such as a schedule, a program activity network, critical path/chain methods, EVMS, and prioritization, are evidence of knowledge and certainty in time management. Demonstrated performance in time management, such as meeting milestones and key dates thus far in the program, also provides supporting evidence that future key dates and milestones will be met. Testimonial evidence such as Technical Review reports and Risk reports provides additional evidence of the program's ability to meet future milestones.
5.2.2.2 Cost Management Knowledge Area
The Cost Management Knowledge Area, or “Cost KA” for short, is the subject
matter area of DBS acquisition regarding the cost management of the program. The Cost
KA is a Direct Knowledge Area in DAPS, considered a direct, finite, and quantifiable
measure of success. Cost of the program is often used as a key decision factor through
consideration of the cost-benefit of the program against the finite financial resources of
an organization.
Some evidence items under the Cost KA include cost estimates, cost
expenditures, average cost burn rate, and EVMS. Understanding of cost estimation best practices and of cost control and management, together with demonstrated performance in meeting cost targets at sub-program levels and program phases, provides certainty and confidence in the successful completion of a program within cost.
Knowledge in the Cost KA, especially early on, significantly decreases the risk to program success by allowing the program to be funded correctly, eliminating surprises and avoiding running out of funds later in the program or having to ask for more in order to continue. Knowledge of a program's cost is critical to program success so that the government can properly fund the program, which requires years of planning, programming, and budgeting.
5.2.2.3 Quality Management Knowledge Area
The Quality Management Knowledge Area, or “Quality KA” for short, is the
subject matter area of DBS acquisition regarding the quality and system performance
management of the program. The Quality KA is a Direct Knowledge Area in DAPS,
considered a direct and quantifiable measure of success. System quality and performance
is a key factor for program decision making. Ability of the system to perform at the level
of quality required is essential for the decision makers and users to accept the system and
keep the program going. If the program does not or is projected not to meet the desired
functions and quality, or requires significantly more time and money to meet the required
level of quality, the program could be considered for shut-down.
Knowledge of the quality and performance of the system being developed in the
program is thus essential to program success. Understanding of the system quality management processes, such as systems engineering, documentation, requirements engineering, configuration management, quality control/assurance, and the testing process, together with demonstrated product/system quality shown through component-level software demonstrations and testing, provides certainty and confidence that the program will be successful.
5.2.2.4 Scope Management Knowledge Area
The Scope Management Knowledge Area, or “Scope KA” for short, is the subject
matter area of DBS acquisition regarding the scope and requirements management of the
program. The Scope KA is a Direct Knowledge Area in DAPS, considered a direct and
qualitative measure of success. The scope of a program is considered a qualitative
measure since the exact quality is not quantifiable. The quality of program scope is
defined by:
1) How relevant is the program? Is the right system being built? Is this system
necessary?
2) How well are the requirements documented? Are all the requirements traceable to
a higher-level requirement, all the way to the top-level goal/intent of the program?
3) How stable is the program scope? Have there been major changes since program
initiation?
Knowledge certainty of the scope of a program is crucial. Without it, one would
not know what system to develop, would not be able to properly plan for the program,
and would not be able to properly assess cost and time. Thus, evidence supporting or
refuting the knowledge of DBS program scope, such as the Problem Statement, Business
Case, Functional Requirements Specifications, Systems Requirements Specifications,
Software Design Descriptions, and architecture documents, are vital to program success.
5.2.2.5 Procurement Management Knowledge Area
The Procurement Management Knowledge Area, or “Procurement KA” for short,
is the subject matter area of DBS acquisition regarding procurement and contracting of
the program. The Procurement KA is an Indirect Knowledge Area in DAPS. It is not
considered a direct measure of success, but rather a qualitative measure and an enabling
factor to success. The Procurement KA is especially important for DAPS since it is a
model built for DBS acquisition/procurement. The quality of the contract and the
solicitation process for the contract can greatly affect an acquisition program. The
contract is the agreement between the government and the contractor on the program. It
documents the requirements and the expectation of the government, as well as the
proposed solution approach of the contractor. The contract helps to establish a mutual
understanding and guide the government and the contractor in communicating and
working together to execute a successful program.
DoD acquisition contracts often contain the contractual data deliverable requirements, documented in the Contract Data Requirements List (CDRL), and the approach for government inspection and acceptance of each deliverable. The deliverables and the government's subsequent review ratings are the evidence supporting the government's rating of contractor performance. These deliverable reviews and contractor performance ratings provide valuable information on the certainty surrounding the contractor's ability to successfully execute the program.
5.2.2.6 Systems Engineering Management Knowledge Area
The Systems Engineering Management Knowledge Area, or “SE KA” for short, is
the subject matter area regarding the systems engineering management of DBS
acquisition, as discussed in Section 2.2. The SE KA is an Indirect Knowledge Area in
DAPS. It is not considered a direct measure of success, but rather considered a qualitative
measure and an enabling factor to success. Due to the size and complexity of DBS
acquisition programs, knowledge in systems engineering is critical to enabling program
success. Without systems engineering, it would be challenging to communicate,
collaborate, and coordinate with internal and external stakeholders, and difficult to
manage the other Knowledge Areas.
5.2.2.7 General Management Knowledge Area
The General Management Knowledge Area, or “GM KA” for short, is the subject
matter area of DBS acquisition regarding general program management. The GM KA is
an Indirect Knowledge Area in DAPS. It is not considered a direct measure of success,
but rather a qualitative measure and an enabling factor to success. General Management
includes tasks normally performed by the DoD Program Manager and direct staff,
including Planning, Programming, Budgeting, and Execution (PPB&E), human resources
management, communication with stakeholders and sponsors including program reviews
and milestone reviews, as well as program organization planning. Knowledge in General Management is the foundational subject matter area in DAPS for securing the funding, staffing, and other resources that enable the program, through successful marketing and communication about the program. This is critical in the DoD environment, where many different programs fight for a finite amount of resources.
5.2.3 Evidence Nodes
Evidence (E) Nodes are the fundamental building blocks of the DAPS model.
Used to make inferences about the Knowledge Area Nodes, the Evidence Nodes are
modeled as the observation nodes of the DAPS model. The evidence taxonomy
developed as part of this dissertation is provided in Appendix A. All evidence items in
the evidence taxonomy are built into the DAPS model as separate evidence nodes with
one typical Conditional Probability Table (CPT) specification. This is further discussed in
Chapter 6.
The Evidence Nodes have three states:
• Outstanding—Evidence indicates low risk in the KA to Program Success. Evidence indicates exceptional understanding and approach of the KA necessary for program success. Evidence contains KA strengths far outweighing weaknesses.
• Acceptable—Evidence indicates no worse than moderate risk in the KA to Program Success. Evidence indicates adequate approach and understanding necessary for program success. Evidence contains KA strengths and weaknesses which are offsetting.
• Unacceptable—Evidence indicates high risk in the KA to program success. Evidence indicates no clear understanding and approach in the KA for program success. Evidence contains one or more deficiencies indicating Program failure, or the evidence is unexpectedly absent.
5.3 Model Arcs
The DAPS model is constructed with four types of arcs, representing the four
types of Causal-Influence Relationships (CIRs) among the three types of nodes in DAPS.
The four types of arcs in the DAPS Model are as follows:
1. Knowledge Area Node to Evidence Node (KA2E)
2. Knowledge Area Node to Knowledge Area Node (KA2KA)
3. Knowledge Area Node to Knowledge Checkpoint Node (KA2KC)
4. Knowledge Area Node at the prior Knowledge Checkpoint to the same
Knowledge Area Node at the posterior Knowledge Checkpoint (KA2KAi+1)
The first three types of arcs, KA2E, KA2KA, and KA2KC arcs, represent static
relationships among the nodes. The last arc, KA2KAi+1, is a dynamic arc modeling the
temporal relationships in DAPS. Detailed discussions of the model arcs are provided
below.
5.3.1 KA2E Arc
The Knowledge Area Node to Evidence Node (KA2E) is the first type of arc in
DAPS. It represents the static relationship between a Knowledge Area Node and an
Evidence Node, where the state of knowledge, [Good, Marginal] in the Knowledge Area
causes the Evidence Node to be at a certain state [Outstanding, Acceptable,
Unacceptable]. The Knowledge Area Node is the parent node, with many possible child Evidence Nodes. Figure 45 below provides an example graph of the KA2E arc.
[Figure 45 shows the Sys_Engineering_KnowledgeArea node (Good 80.8%, Marginal 19.2%) with five child Evidence Nodes (Risk_Management, Tech_Review_Report, Test_and_Eval_Plan, AoA, and Sys_Eng_Plan), each observed as Acceptable.]
Figure 45 Knowledge Area to Evidence (KA2E) Arc Example
The KA2E structure, shown in Figure 45, models the Evidence Nodes as sensors
for the inference and measurement of the Knowledge Area. Using Figure 45 as an
example, the Acceptable observations of the 5 Evidence Nodes are indicators of the
program's Systems Engineering knowledge certainty, which accumulates to a measurement of 80.8% likelihood of Good knowledge in this example.
This KA2E structure as shown in Figure 45 assumes that all the Evidence Nodes
are conditionally independent given the state of the Knowledge Area Node.
Hypothetically, if the SE KA is known and given as Good, then the outcome of Evidence
Nodes in Figure 45 would be influenced by the good systems engineering state, but not
by the states of other Evidence Nodes. For example, given that the SE KA is good, the
Analysis of Alternatives (AoA) could be observed as unacceptable, but it would not
affect the observation of the Test and Evaluation Plan (TEP). The probability of the TEP
outcome is only affected by the given SE KA state, and is not more likely to be
unacceptable because of the AoA’s observation. This is a reasonable representation of the
real world. The quality state of the TEP is dependent on the systems engineering
knowledge of the program staff, and not dependent on the production quality state of
another evidence item.
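Stated more formally (a restatement of the assumption just described, in generic notation rather than anything drawn from the model files), the KA2E structure factorizes the evidence likelihood and lets the Knowledge Area belief be updated by Bayes' rule:

$$P(E_1, E_2, \dots, E_n \mid KA) \;=\; \prod_{i=1}^{n} P(E_i \mid KA)$$

$$P(KA=\text{Good} \mid E_1,\dots,E_n) \;\propto\; P(KA=\text{Good}) \prod_{i=1}^{n} P(E_i \mid KA=\text{Good})$$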
5.3.2 KA2KA Arc
The Knowledge Area Node to Knowledge Area Node (KA2KA) is the second
type of arc in DAPS. It represents the static relationship between two different
Knowledge Area Nodes, where the state of knowledge [Good, Marginal] in the first
Knowledge Area affects the state of knowledge [Good, Marginal] in the second
Knowledge Area. This relationship models the CIRs among the seven Knowledge Area
Nodes. Figure 46 below provides the adapted KA2KA graph structure for the DAPS
Model. To provide an example of the relationship, the SE KA Node has one arc coming
in from the GM KA and three arcs going out to the Quality KA, the Scope KA, and the
Procurement KA in Figure 46. The knowledge in the SE KA is thus influenced by the
knowledge in the GM KA, and the SE KA influences the knowledge in the Quality KA,
the Scope KA, and the Procurement KA, as represented by the model. Adding the KA2E relationships shown in Figure 45, the evidence observations would first update the SE KA. The SE KA would next update all KA Nodes related to it, including the GM KA, Quality KA, Scope KA, and Procurement KA, according to their respective CIRs with the SE KA.
[Figure 46 shows the seven Knowledge Area Nodes (General Management, Systems Engineering, Procurement, Scope, Quality, Time, and Cost) connected by the KA2KA arcs, with example belief values at each node.]
Figure 46 Knowledge Area to Knowledge Area (KA2KA) Graph Structure
The KA2KA structure is one of the most important features of DAPS. These arcs
are the part of the model that simulates the complex, synergistic relationships among the
various subject matter areas in DBS acquisition. It is what makes DAPS a unique and
innovative model for DBS acquisition or system development programs in general.
5.3.3 KA2KC Arc
The Knowledge Area Node to Knowledge Checkpoint Node (KA2KC) is the third
type of arc in DAPS. It represents the static relationship between the Measurable (Direct)
Knowledge Area Nodes and the Knowledge Checkpoint Node, where the state of
knowledge [Good, Marginal] in the Knowledge Area affects the state [Success, Failure]
measured at the Knowledge Checkpoint. Figure 47 below provides an example graph of
the KA2KC arcs in the DAPS model.
[Figure 47 shows the Knowledge_Checkpoint node (Success 70.0%, Failure 30.0%) with its four parent Direct Knowledge Area Nodes: Time, Cost, and Scope observed as Good, and Quality observed as Marginal.]
Figure 47 Knowledge Area to Knowledge Checkpoint (KA2KC) Arcs Example
Continuing with the SE KA examples used previously, Figure 46 shows the CIRs
from the SE KA to two of the Measurable Knowledge Areas, the Quality KA and the
Scope KA. The SE KA observations of evidence shown in Figure 45 thus indirectly
influence the program success measure at the Knowledge Checkpoint through these two
Measurable Knowledge Area Nodes. Furthermore, the SE KA also has CIRs with the
Procurement KA and the GM KA, as shown in Figure 46. The Procurement KA has
direct relationships with all four Measurable Knowledge Areas, while the GM KA also
has a relationship with the Procurement KA. The evidence items observed under the SE
KA would thus influence all four Measurable Knowledge Areas through the Procurement
KA. These multiple paths from one set of evidence under one Knowledge Area to the
Knowledge Checkpoint Node highlight the compounding effects of evidence due to the
KA2KA relationships developed under DAPS.
5.3.4 KA2KAi+1 Arc
The Knowledge Area Node at the prior Knowledge Checkpoint to the same
Knowledge Area Node at the next Knowledge Checkpoint (KA2KAi+1) is the fourth and
last type of arc in DAPS. It represents the state of knowledge [Good, Marginal] in a
Knowledge Area at a prior Knowledge Checkpoint affecting the state of knowledge
[Good, Marginal] of the same Knowledge Area at the next Knowledge Checkpoint. It is
the only arc in DAPS representing dynamic relationships.
The KA2KAi+1 arcs model the progression of knowledge in one specific
Knowledge Area through time, and do not model possible dynamic relationships among
different Knowledge Areas. DAPS model structure assumes that dynamic relationships
only exist within the same Knowledge Area sequentially through the Knowledge
Checkpoints. Since each Knowledge Area contains this dynamic relationship and DAPS
already constructs the complex interrelationships using the Knowledge Area KA2KA
structure at each Knowledge Checkpoint, it was not necessary to establish additional
dynamic relationships among the different Knowledge Areas.
Figure 48 provides an example graph of the KA2KAi+1 arcs, shown as green arrows, from the MDD KC to the following ITR KC.
Figure 48 Knowledge Area @ KC1 to Knowledge Area @KC2 (KA2KAi+1) Arc Example
Continuing with the SE KA example, the SE KA at MDD would influence the SE
KA at ITR through its KA2KAi+1 arc. Each of the other KA Nodes would do the same
through their respective KA2KAi+1 arc. The KA2KA relationships for the KA Nodes at
MDD would repeat again for the KA Nodes at ITR. Finally, the Measurable (Direct) KA
Nodes at ITR will provide information for the program success measure at the ITR KC.
5.4 Complete Model
To summarize, Figure 43 shows the inference network at one static point. At this
point, Evidence Nodes are observed to provide information on the assessment of the
knowledge certainty in the seven Knowledge Area Nodes through the KA2E arcs. The
assessments are evaluated according to the two Knowledge Area Node states: [Good,
Marginal]. The Knowledge Area Nodes then send each other information according to
the KA2KA arcs to accumulate the belief based on the evidence observed under the Knowledge Areas. Finally, the Direct Knowledge Areas provide information to the Knowledge Checkpoint Node to assess the belief in the Knowledge Checkpoint Node states [Success, Failure] through the KA2KC arcs, which completes the information flow
within a static point at a Knowledge Checkpoint. This static information flow repeats at
each Knowledge Checkpoint for a total of 15 times in the DAPS model.
The information at the static point within a Knowledge Checkpoint is then passed
onto the next Knowledge Checkpoint using the seven Knowledge Area Nodes through
the KA2KAi+1 arc, where Evidence Node assessment observations will again be made.
The information flow process shown in Figure 48 is then repeated 14 times until the last
Knowledge Checkpoint Node—the Full Operating Capability (FOC) Knowledge
Checkpoint Node—is propagated.
The complete DAPS model contains 15 Knowledge Checkpoints. The
complete model is shown below in Figure 49. Each Knowledge Checkpoint has one
Knowledge Checkpoint Node, seven Knowledge Area Nodes, and a number of
Evidence Nodes. The totals are as follows (cross-checked in the sketch below):
• 15 Knowledge Checkpoint Nodes
• 105 Knowledge Area Nodes
• 258 Evidence Nodes
• 258 KA2E Arcs
• 195 KA2KA Arcs
• 60 KA2KC Arcs
• 98 KA2KAi+1 Arcs
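As a quick consistency check of these totals (a minimal sketch; the 13 KA2KA arcs per checkpoint follow from the arc selections described in Section 6.5, and the per-checkpoint breakdown of the 258 Evidence Nodes is left to the taxonomy in Appendix A):

```python
# Reproduce the DAPS model node and arc totals from its repeating
# per-checkpoint structure.
NUM_KCS = 15          # Knowledge Checkpoints, MDD through FOC
NUM_KAS = 7           # Knowledge Area Nodes per checkpoint
DIRECT_KAS = 4        # Cost, Time, Quality, Scope feed each KC Node
KA2KA_PER_KC = 13     # static KA2KA arcs repeated at every checkpoint
TOTAL_EVIDENCE = 258  # Evidence Nodes, one KA2E arc each (Appendix A)

print("KC Nodes:      ", NUM_KCS)                  # 15
print("KA Nodes:      ", NUM_KCS * NUM_KAS)        # 105
print("KA2E arcs:     ", TOTAL_EVIDENCE)           # 258
print("KA2KA arcs:    ", NUM_KCS * KA2KA_PER_KC)   # 195
print("KA2KC arcs:    ", NUM_KCS * DIRECT_KAS)     # 60
print("KA2KAi+1 arcs: ", (NUM_KCS - 1) * NUM_KAS)  # 98
```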
Figure 49 DAPS Complete Model (Expanded Model Graph available upon request to author,
[email protected])
6 NETWORK STRUCTURE AND PROBABILITY SPECIFICATION
The DBS Acquisition Probability of Success (DAPS) model is a knowledge
representation model, constructed based on expert knowledge elicitation. Seventeen
Subject Matter Experts (SMEs) contributed to this research, providing the expert
knowledge needed to build the DAPS network structure and Conditional Probability
Tables (CPTs). The method of knowledge elicitation is first discussed in Section 6.1. The
expert data conversion method is discussed in Section 6.2. Then, the analysis of the data
and the calculation for the CPTs are presented in Sections 6.3–6.5 for each type of the
DAPS nodes: Evidence Nodes, Knowledge Checkpoint Nodes, and Knowledge Area
Nodes.
6.1 Knowledge Elicitation
Seventeen Subject Matter Experts (SMEs) were interviewed to collect the
necessary data for Network Structure and Probability Specification for the purpose of this
research. The survey template used in the SME interviews as well as the complete data
set can be found in Appendix B. The survey had three parts:
In Part 1, the SMEs were asked to provide a common/typical probability specification for all the Evidence Nodes, specifying P(E=Outstanding, Acceptable,
Unacceptable| KA=Good) and P(E=Outstanding, Acceptable, Unacceptable
|KA=Marginal).
In Part 2, the SMEs were asked to specify the rankings of the four given KA2KC
arcs, from 1 to 4. A rank of 1 would give the arc a weight of 1 during probability
specification calculations. A Rank of 2 would give the arc a weight of 1/2, a Rank of 3
would give the arc a weight of 1/4, and a Rank of 4 would give the arc a weight of 1/8. If
the arcs are deemed approximately equivalent in importance, then the rank would be
equal, thus receiving equal weight.
In Part 3, the SMEs were provided the opportunity to select up to three KA2KA arcs for
each Knowledge Area Node. It was also possible for the SME to select no arcs, and
designate the Knowledge Area Node as a root node. In addition, the SMEs were asked to
rank the KA2KA relationship arcs they had selected for each KA Node. The same
ranking and weighting scheme as Part 2 is used.
Originally, the SME survey template contained an additional section, Part 4, to
elicit expert knowledge for the relative arc rankings for the dynamic KA2KAi+1 arcs at
Knowledge Area Nodes. However, this part of the survey presented many difficulties.
There was no easy way to simplify the data collection, although attempts were made to
streamline it. To make it meaningful, it would require data collection for each of the
dynamic arcs, which would add 98 more data points. Furthermore, the predictive function
of the model performs better with a high dynamic arc weight, since it would increase the
relative influence of the dynamic arc. Instead of acquiring this data from the SMEs, an
assumption was made that the weight of the dynamic KA2KAi+1 arc into a KA Node
would be equal to 1. This assumption asserts that the dynamic arc of the prior KA Node
is equivalent to the highest ranked arc from another KA Node.
Ranking and weighting of the arcs are mechanisms developed to collect the
relative influences of the arcs from the SMEs. Ranking of the arcs elicits the SMEs’
opinions on the ordered importance of the arcs. Associating certain weights to the ranking
is the mechanism used to convert the SME inputs into the Conditional Probability Tables
(CPTs). Weights also provide SMEs context to the ranking in terms of the arcs’ relative
inferential forces with respect to one another.
A simple ordinal weighting scheme based on an inverse power function is used. The formula of the weighting scheme is provided below in Equation 3:

Equation 3, Arc Weight

$$w_{arc} = \frac{1}{2^{\,r_{arc}-1}}$$
In the formula, $w_{arc}$ is the weight of the arc, and $r_{arc}$ is the rank of the arc. With this
weighting scheme, the next ranked arc is assigned half the weight of the previous ranked
arc. An arc assigned rank 2 would be weighted half as much as rank 1, indicating it has
half the inferential force as an arc assigned rank 1. An arc assigned rank 3 would be
weighted ¼ as much as a Rank 1 arc, and ½ as much as a Rank 2 arc. If the SME
believed that selected arcs are equally important, the SME was provided the opportunity
to give the arcs the same rank and therefore indicate equal inferential forces for the arcs.
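A minimal sketch of this scheme (assuming that Equation 1, carried over from Chapter 3 and not restated here, simply normalizes the arc weights into arc values that sum to 1; the example ranks are the most frequent SME responses reported later in Section 6.4):

```python
def rank_to_weight(rank: int) -> float:
    """Equation 3: each successive rank carries half the inferential force."""
    return 1.0 / (2 ** (rank - 1))

def normalize(weights: dict) -> dict:
    """Assumed form of Equation 1: arc values V_arc are the weights divided
    by their total, so the values of all incoming arcs sum to 1."""
    total = sum(weights.values())
    return {arc: w / total for arc, w in weights.items()}

# Example using the most frequent SME ranks for the KA2KC arcs
# (Quality and Scope ranked 1, Time and Cost ranked 2).
ranks = {"Quality": 1, "Scope": 1, "Time": 2, "Cost": 2}
weights = {ka: rank_to_weight(r) for ka, r in ranks.items()}
print(normalize(weights))  # Quality/Scope ~0.333, Time/Cost ~0.167
```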
This weighting scheme provides ample differentiation among the weights. This
can be contrasted against a uniform distribution with no SME input, where the arcs would
all be equally weighted. The ratios of 1/2 used in the weighting scheme are perhaps the simplest and most common ratios. Simplicity was important to avoid unnecessary confusion during the SME interviews and elicitation. Lastly, the weighting scheme still assigns a meaningful weight of 1/8 to the lowest ranked arcs, which remains significant for the probability specification calculations.
The SMEs interviewed for knowledge elicitation were carefully selected to
represent the population of defense acquisition professionals, from both the government
side and the contractor side. Each SME has in-depth knowledge of their subject matter
area within defense acquisition. Table 1 below provides the numbers of SMEs
interviewed categorized into specific SME types. The descriptions of the SME types are
included in Table 1.
Table 1, SME Summary Statistics and Description

SME Type | Count | SME Type Description
Government PM | 4 | SME is responsible for the overall acquisition program success.
Government SE/Technical Expert | 4 | SME is the Systems Engineer or technical expert from the government organization for the acquisition.
Government Contract/Cost Specialist | 2 | SME is the government personnel supporting the government cost estimates or the contracting process.
Contractor SE/Technical Expert | 4 | SME is the systems engineer or technical expert from the contractor organization for the acquisition.
Contractor PM | 3 | SME is responsible for the delivery of the cost, time, and quality performance specified by government contract.
Total | 17 |
6.2 Expert Data Conversion
The arc rankings specified by the SMEs for KC and KA Nodes require conversion
to the Conditional Probability Tables (CPTs) to complete the construction of the DAPS
Bayesian Network. There are many methods for CPT conversions. However, with the
novel method used to elicit expert opinion on relative arc influence, a conversion method
tailored to knowledge elicitation was needed. The Chapter 3 Bayesian Network prototype
probability conversion method is updated for DAPS implementation. The updated
conversion steps are outlined below.
• The SME arc weights are first normalized as the values of arc influence, $V_{arc}$. This is calculated by using Equation 1.
• An additive rule is used to combine the $V_{arc}$ values, i.e., the influence of each parent node on the child node when that parent node is true, to calculate the total influence of the parent nodes on the child node by adding the influences of the arcs whose parents are true. If a parent node is false, then its influence on the child node is zero. Equation 2 used in Chapter 3 is updated as Equation 4 below for the final DAPS model.

Equation 4, P(Child Node=True|KAs)

$$P(\text{Child Node}=\text{True} \mid \text{KAs}) \;=\; \sum_{arc\,:\;\text{parent}(arc)\,=\,\text{True}} V_{arc}$$
• An assumption is made that the weight of a dynamic arc into a KA Node is equal to 1, as discussed in Section 6.1.
• Boundary Conditions for the probabilities are capped according to the number of
parent nodes. This was necessary to resolve two issues with the model and
knowledge elicitation:
1. Avoid using the absolute values 0 and 100 during internal Bayesian
Network probability calculation operations and avoid the possibility of
having these absolute values as an outcome from the model.
2. Due to the way knowledge elicitation was conducted, the CPT for nodes
with only one parent (one incoming arc) would become deterministic. This
was especially problematic for the GM Knowledge Area Nodes, since the
only incoming arc for these nodes is the dynamic arc from the prior GM
Knowledge Area Node. This causes all 15 GM Knowledge Area Nodes to
retain the same probabilities across the complete model, resulting in no
loss of information for all future predictions. To mitigate this issue, a rule
for the linear decrease of probability boundary condition limits was
implemented in the model as a rule according to the number of parent
nodes. This rule assumes that the more parent nodes there are for a node,
the stronger the possible evidence support there is for it. Therefore, the
certainty of the possible consequences of the node would be higher as
compared to nodes with less incoming arcs, and have larger boundary
condition limits.
137
The Boundary Conditions for the probability specifications are:
• Nodes with 4 parents have Boundary Condition [1, 99]
• Nodes with 3 parents have Boundary Condition [5, 95]
• Nodes with 2 parents have Boundary Condition [10, 90]
• Nodes with 1 parent have Boundary Condition [15, 85]
• Nodes with 0 parents have Boundary Condition [30, 70]
By using the conversion rules above, the CPTs in DAPS are able to meet the
conversion criteria established in Section 3.2. The resultant Varcs used for the CPT
calculations are reflective of the arc influence weights specified by the SMEs, and they were converted to probabilities using Equation 4. The probabilities sum to 1 for each
possible outcome at a node. These conversion rules are able to be applied repeatedly in
the same manner for all applicable nodes in the DAPS Bayesian Network.
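The conversion rules above can be summarized in a short sketch (assuming Equation 4 is the simple sum of the $V_{arc}$ values of the parents that are in the Good state, expressed in percent and clipped to the boundary conditions listed above; root-node priors, such as the GM KA, are set directly rather than through this sum, and the example uses the Knowledge Checkpoint arc values derived in Section 6.4):

```python
from itertools import product

# Boundary condition limits (percent) by number of parent nodes, Section 6.2.
BOUNDS = {4: (1, 99), 3: (5, 95), 2: (10, 90), 1: (15, 85), 0: (30, 70)}

def child_cpt(varcs: dict) -> dict:
    """P(Child=Good | parent states), in percent, for every combination of
    parent states: sum the V_arc of each Good parent (Equation 4), then cap
    the result at the boundary condition for this number of parents."""
    low, high = BOUNDS[len(varcs)]
    parents = list(varcs)
    cpt = {}
    for states in product(["Good", "Marginal"], repeat=len(parents)):
        p = 100.0 * sum(varcs[pa] for pa, st in zip(parents, states) if st == "Good")
        cpt[states] = min(max(p, low), high)
    return cpt

# Example: the Knowledge Checkpoint node and its four Direct KAs.
kc = child_cpt({"Quality": 1/3, "Scope": 1/3, "Time": 1/6, "Cost": 1/6})
print(kc[("Good", "Good", "Good", "Good")])          # 100 capped to 99
print(kc[("Good", "Good", "Marginal", "Marginal")])  # ~66.7
```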
6.3 Evidence Nodes
6.3.1 Data Analysis
Part 1 of the Knowledge Elicitation asked the SME to specify the discrete
conditional probabilities of the Evidence Nodes:
P(E=Outstanding, Acceptable, Unacceptable| KA=Good)
P(E=Outstanding, Acceptable, Unacceptable| KA=Marginal)
The summary statistics for the data sample are provided below in Table 2. The
completed data set for Evidence Nodes can be found in Appendix B.
Table 2, Evidence Node Data Summary Statistics

P(E | KA=Good)
E | Mean | St. Dev. | Median | Mode | Max | Min
Outstanding | 25.29 | 6.95 | 25 | 30 | 35 | 10
Acceptable | 62.06 | 6.39 | 60 | 60 | 75 | 50
Unacceptable | 12.65 | 6.87 | 10 | 10 | 30 | 5

P(E | KA=Marginal)
E | Mean | St. Dev. | Median | Mode | Max | Min
Outstanding | 5.35 | 3.20 | 5 | 5 | 10 | 0
Acceptable | 43.24 | 5.57 | 45 | 40 | 50 | 35
Unacceptable | 51.41 | 6.58 | 50 | 50 | 65 | 40
The sample mean was used as the probability distribution for the DAPS Evidence
Node out of this data set, since it is the only central tendency statistic which can be used
without renormalization. Using Median and Mode based on this data collection could
yield a probability distribution that does not add up to 100%. For example, as observed
from Table 2, P(E| KA= Good) Median values 25, 60, and 10 would add up to 95, not
100.
The standard deviations as well as the Max and Min statistics show the general
agreement of the SMEs on the evidence probabilities. Figure 50 provides a graph of the
Evidence Node probability distribution based on the mean from Table 2.
[Figure 50 is a bar chart of the typical Evidence Node probability distribution from Table 2, plotting the Good Knowledge and Marginal Knowledge series over the Outstanding, Acceptable, and Unacceptable states.]
Figure 50 Evidence Node Probability Profile
6.3.2 Probability Specification Calculations
Probability specifications for Evidence Nodes use the mean values shown in
Table 2 above. No additional calculations were performed. Figure 51 provides the
Conditional Probability Table (CPT) from Netica using the rounded mean values from
Table 2.
Figure 51 Evidence Node CPT (Typical)
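To illustrate how this typical CPT drives the KA2E inference of Section 5.3.1, the following minimal sketch applies the rounded means from Table 2 with a uniform prior for simplicity (in the full model the Knowledge Area prior comes from the rest of the network, so the exact posterior differs):

```python
# Typical Evidence Node CPT, rounded means from Table 2 (as fractions).
P_E_GIVEN_KA = {
    "Good":     {"Outstanding": 0.25, "Acceptable": 0.62, "Unacceptable": 0.13},
    "Marginal": {"Outstanding": 0.05, "Acceptable": 0.43, "Unacceptable": 0.52},
}

def ka_posterior(observations, prior_good=0.5):
    """P(KA=Good | observed evidence) under the KA2E conditional-independence
    assumption: the likelihoods of the individual observations multiply."""
    good = prior_good
    marginal = 1.0 - prior_good
    for obs in observations:
        good *= P_E_GIVEN_KA["Good"][obs]
        marginal *= P_E_GIVEN_KA["Marginal"][obs]
    return good / (good + marginal)

# Five Acceptable observations, as in the Figure 45 example; a uniform prior
# gives roughly 0.86 (Figure 45 shows 80.8% because the SE KA prior there is
# computed from the rest of the network rather than assumed uniform).
print(round(ka_posterior(["Acceptable"] * 5), 3))
```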
6.4 Knowledge Checkpoint Nodes
6.4.1 Data Analysis
Part 2 of the data collection asked the SMEs to rank the four given KA2KC arcs from the Cost KA, Time KA, Scope KA, and Quality
KA Nodes. Figure 52 shows the counts of rank observations for each arc. Quality and
Scope most frequently received a rank of 1, translating to a weight of 1, while Time and
Cost most frequently received a rank of 2, translating to a weight of 0.5.
[Figure 52 is a bar chart of the counts of Rank 1 through Rank 4 observations for the Quality, Scope, Time, and Cost KA2KC arcs.]
Figure 52 Knowledge Checkpoint Ranking Observations
Summary statistics of the Knowledge Checkpoint rankings are provided below in Table 3.
Table 3, Knowledge Checkpoint Ranking Summary Statistics

KA | Mean | Median | Mode | St. Dev | Max | Min
Quality | 0.720588 | 1 | 1 | 0.360472 | 1 | 0.125
Scope | 0.75 | 1 | 1 | 0.318689 | 1 | 0.25
Time | 0.507353 | 0.5 | 0.5 | 0.266893 | 1 | 0.125
Cost | 0.566176 | 0.5 | 0.5 | 0.312867 | 1 | 0.125
To select the central tendency that best summarizes the data collected, the Mean,
Median, and Mode summary statistics were compared. Mode would provide the most
frequent (most popular) rank. Mean would provide the average. Median would provide
the middle ordered value of the sample. To compare the three central tendencies, a radar
chart was formulated, shown in Figure 53 below. Median and Mode values are the same
for the Knowledge Checkpoint rankings, while the Mean values display a similar radar shape with less differentiation among the KAs. The Median/Mode values accentuate the
differences among the four KAs more clearly than the Mean, which supports better model
performance. Observing Figure 52, Quality and Scope distributions are also significantly
skewed toward Rank 1, making Median a better summarizing statistic than Mean.
Considering these comparisons, the Median value is used to summarize the arc ranking
and associated weighting data for probability specification calculations.
[Figure 53 is a radar chart comparing the Mean, Median, and Mode weight values across the Quality, Scope, Time, and Cost axes.]
Figure 53 Knowledge Checkpoint Ranking Radar Chart
6.4.2 Probability Specification Calculations
The probabilities for the Knowledge Checkpoint Nodes are obtained by first
calculating the arc value, Varc, for the KA2KC arcs using Equation 1. The output is
summarized below in Table 4.
Table 4, KC Arc Value Calculations

First | Second | Warc | Varc
Quality | KC | 1 | 0.333
Scope | KC | 1 | 0.333
Time | KC | 0.5 | 0.167
Cost | KC | 0.5 | 0.167
Wtot = 3
Varc is then used to calculate the probability for each possible combination of the
four Knowledge Area States. The Knowledge Checkpoint Node success probabilities are
obtained by adding the Varcs from the KA where KA=Good, as shown in Equation 4, and
by following the conversion rule discussed in Section 6.2. Figure 54 below shows the
Conditional Probability Table (CPT) from Netica for the Knowledge Checkpoint Node,
encompassing the probabilities for the sixteen possible combinations of the four
Knowledge Area States. This is the adapted CPT for all Knowledge Checkpoint Nodes in
the DAPS model.
Figure 54 Knowledge Checkpoint Node CPT
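As a worked illustration of how these entries follow from Equation 4 and the Table 4 arc values (a re-derivation from the stated rules, not values read off Figure 54): with all four Direct KAs Good the sum is 1.0, which the four-parent boundary condition caps at 0.99, while with only Quality and Scope Good the entry is

$$P(KC=\text{Success} \mid \text{Quality}=\text{Good}, \text{Scope}=\text{Good}, \text{Time}=\text{Marginal}, \text{Cost}=\text{Marginal}) = 0.333 + 0.333 = 0.667.$$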
6.5 Knowledge Area Nodes
6.5.1 Data Analysis
Part 3 of the knowledge elicitation collected the necessary data for the selection of
KA2KA arcs as well as the weighting data needed for probability specification. However,
in order to implement this data set into the DAPS model, there had to be a way to
summarize the varied opinions of the 17 SMEs into one model structure and common set
of CPTs.
The approach taken to summarize this data set is by using a median weight
method, using the weights associated with the rankings to determine arc selection and
weight. To use this method, a weight of 0 was assigned to each potential arc for all SME
interview instances where the arc was not selected. Then, the median value was taken for
each potential arc, yielding two possible outcomes:
• In the case that fewer than half of the SMEs selected the arc (fewer than 9 out of 17), the median weight value would be 0 and the arc would not be selected for the network structure.
• In the case that half or more of the SMEs selected the arc (9 or more out of 17), the median weight value would be used as the arc weight, warc, and the arc would be selected for the network structure (see the sketch below).
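A minimal sketch of this median-weight summarization (assuming, per the rule above, that an SME who did not select a potential arc contributes an implicit weight of 0 for it):

```python
import statistics

def median_arc_weight(selected_weights, num_smes=17):
    """Median weight across all SMEs for one potential KA2KA arc.
    `selected_weights` lists the rank-derived weights from the SMEs who
    selected the arc; every other SME contributes a weight of 0."""
    weights = list(selected_weights) + [0.0] * (num_smes - len(selected_weights))
    return statistics.median(weights)

# Fewer than 9 of the 17 SMEs selecting an arc forces the median to 0, so
# the arc is excluded; 9 or more keeps a nonzero median as w_arc.
print(median_arc_weight([1.0] * 8))        # 0.0 -> arc excluded
print(median_arc_weight([1.0] * 9))        # 1.0 -> arc included, weight 1
print(median_arc_weight([1.0, 0.5] * 5))   # 0.5 -> arc included, weight 0.5
```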
Figure 46 shown above represents the final network structure based on the SME rankings
using the median weight method. The SMEs’ independent assertions mostly resulted in
general agreement on the causal influence relationships (CIRs) among these KA Nodes,
although there are a few situations where significant minorities were present. The expert
data analysis for each of the Knowledge Areas is discussed in the sections below.
6.5.1.1 Time Management Knowledge Area
Figure 55 provides the distribution of the SME arc ranking observations for the
Time KA. The results are also summarized below in Table 5. The complete data can be
found in Appendix B. Figure 55 shows the count of observations where the SME
selected and ranked the arc indicating that there is a Causal Influence Relationship (CIR)
between the respective KA Nodes. The "Root" (first) column in the Figure 55 graph shows the number of observations where the SME asserted the Time KA Node as a root node, with no parent nodes and no incoming CIR. The rest of the columns show the number of observations
where the SME asserted CIR and ranking from the respective KA Node to the Time KA
Node.
As described earlier, the median weight is used to summarize the KA sample data.
For this research with 17 SME data points, at least 9 had to select the arc for it to be
included. Based on majority SME agreement, arcs from Quality, Procurement, and Scope
KAs were selected as part of the DAPS network structure. The others were excluded.
There are no significant omissions following the arc selection rule.
[Figure 55 is a bar chart of the counts of Root and Rank 1 through Rank 4 observations for the potential parent arcs into the Time KA.]
Figure 55 Time KA Ranking Observations
Table 5, Time KA Ranking Summary
First
Rank 1
(Wt. 1)
Rank 2
(Wt. 0.5)
Rank 3
Rank 4
Median Arc
(Wt. 0.25) (Wt. 0.125) Wt.
0
0
0
0
2
2
0
1
12
0
0
0.5
Quality
Procure
Second
Root
Time
Time
General
Management
Time
1
0
0
0
0
Systems
Engineering
Scope
Cost
Time
Time
Time
0
6
0
1
3
0
0
2
0
0
0
0
0
0.5
0
0
12
3
6.5.1.2 Cost Management Knowledge Area
Figure 56 provides the distribution of the SME arc ranking observations for the
Cost KA. The results are also summarized below in Table 6. The complete data can be
found in Appendix B. Based on SME majority agreement, arcs from the Procurement,
Quality, and Time KAs were selected as part of the DAPS network structure. The others
were excluded. There are no significant omissions following the arc selection rule.
[Figure 56 is a bar chart of the counts of Root and Rank 1 through Rank 4 observations for the potential parent arcs into the Cost KA.]
Figure 56 Cost KA Ranking Observations
Table 6, Cost KA Ranking Summary
First
Procure
Quality
Time
Second
Root
Cost
Cost
Cost
General
Management
Systems
Engineering
Scope
Rank 1
(Wt. 1)
Rank 2
(Wt. 0.5)
Rank 3
(Wt. 0.25)
Rank 4
(Wt. 0.125)
Median Arc
Wt.
12
3
5
4
5
9
1
8
2
0
0
0
1
0.25
0.5
Cost
0
0
0
0
0
Cost
Cost
0
0
0
0
1
0
0
0
0
0
6.5.1.3 Quality Management Knowledge Area
Figure 57 provides the distribution of the SME arc ranking observations for the
Quality KA. The results are also summarized below in Table 7. The complete data can be
found in Appendix B. Based on majority SME agreement, arcs from the Systems
Engineering and Procurement KAs were selected as part of the DAPS network structure.
The others were excluded. There are no significant omissions following the arc selection
rule.
[Figure 57 is a bar chart of the counts of Root and Rank 1 through Rank 4 observations for the potential parent arcs into the Quality KA.]
Figure 57 Quality KA Ranking Observations
Table 7, Quality KA Ranking Summary

First | Second | Root | Rank 1 (Wt. 1) | Rank 2 (Wt. 0.5) | Rank 3 (Wt. 0.25) | Rank 4 (Wt. 0.125) | Median Arc Wt.
(Root node, no parent) | | 0 | | | | |
Systems Engineering | Quality | | 15 | 1 | 0 | 0 | 1
Procure | Quality | | 8 | 8 | 0 | 0 | 0.5
General Management | Quality | | 0 | 0 | 0 | 0 | 0
Time | Quality | | 0 | 1 | 0 | 0 | 0
Cost | Quality | | 0 | 0 | 0 | 0 | 0
Scope | Quality | | 1 | 0 | 0 | 0 | 0
6.5.1.4 Scope Management Knowledge Area
Figure 58 provides the distribution of the SME arc ranking observations for the
Scope KA. The results are also summarized below in Table 8. The complete data can be
found in Appendix B. Based on majority SME agreement, the arc from the SE KA was
the only arc selected as part of the DAPS network structure. The others were excluded.
There is one significant omission following the median weight method for the
Scope KA data set: the arc from the GM KA. Seven SMEs selected the GM KA arc, but
it was excluded due to not reaching the majority count of nine. The SMEs who selected
this arc believed there is a direct relationship from the GM KA to the Scope KA. Several
of the SMEs who did not select the arc from the GM KA believed that, although there is a
relationship from the GM KA to the Scope KA, it is an indirect relationship through the
SE KA, and thus they did not select the arc.
The Scope KA is the only KA, other than the GM KA, specified as a root node by
any SME. The rationale of the three SMEs that made the root node selection is the belief
that the other six KAs had no causal effect on scope, while scope affected all six other KAs. However, the majority of the SMEs did not see things this way.
[Figure 58 is a bar chart of the counts of Root and Rank 1 through Rank 4 observations for the potential parent arcs into the Scope KA.]
Figure 58 Scope KA Ranking Observations
Table 8, Scope KA Ranking Summary

First | Second | Root | Rank 1 (Wt. 1) | Rank 2 (Wt. 0.5) | Rank 3 (Wt. 0.25) | Rank 4 (Wt. 0.125) | Median Arc Wt.
(Root node, no parent) | | 3 | | | | |
Systems Engineering | Scope | | 8 | 3 | 0 | 0 | 0.5
General Management | Scope | | 4 | 2 | 1 | 0 | 0
Procure | Scope | | 0 | 0 | 0 | 0 | 0
Quality | Scope | | 0 | 0 | 0 | 0 | 0
Time | Scope | | 0 | 0 | 1 | 0 | 0
Cost | Scope | | 0 | 0 | 0 | 0 | 0
6.5.1.5 Procurement Management Knowledge Area
Figure 59 provides the distribution of the SME arc ranking observations for the
Procurement KA. The results are also summarized below in Table 9. The complete data
can be found in Appendix B. Based on majority SME agreement, arcs from General
Management, Systems Engineering, and Scope KAs were selected as part of the DAPS
network structure. The others were excluded. There are no significant omissions
following the arc selection rule.
[Figure 59 is a bar chart of the counts of Root and Rank 1 through Rank 4 observations for the potential parent arcs into the Procurement KA.]
Figure 59 Procurement KA Ranking Observations
Table 9, Procurement KA Ranking Summary
First
Second
Root
General
Management Procurement
Systems
Engineering
Scope
Quality
Time
Cost
Procurement
Procurement
Procurement
Procurement
Procurement
Rank 1
(Wt. 1)
Rank 2
(Wt.
0.5)
Rank 3
(Wt.
0.25)
Rank 4
(Wt.
0.125)
Median Arc
Wt.
0
0
0
0
0
13
2
1
0
1
6
6
0
0
0
9
9
0
0
0
1
2
0
1
0
0
0
0
0
0
0.5
0.5
0
0
0
6.5.1.6 Systems Engineering Knowledge Area
Figure 60 provides the distribution of the SME arc ranking observations for the
SE KA. The results are also summarized below in Table 10. The complete data can be
found in Appendix B. Based on majority SME agreement, the arc from the GM KA was
selected as part of the DAPS network structure. The others were excluded. One omission
worth noting is the arc from the Scope KA, which had five SME selections, under the
majority requirement of nine. Instead of this arc, more SMEs selected the arc going the
other direction, from the SE KA to the Scope KA. Out of the seventeen SMEs, the arc
from the SE KA to the Scope KA had eleven SME selections, compared to five selections
of the arc from the Scope KA to the SE KA, and one SME selecting neither arc. These
two arc selections are mutually exclusive since the SMEs were instructed not to construct
the CIR arcs with circular logic.
[Figure 60 is a bar chart of the counts of Root and Rank 1 through Rank 4 observations for the potential parent arcs into the SE KA.]
Figure 60 SE KA Ranking Observations
Table 10, SE KA Ranking Summary

First | Second | Root | Rank 1 (Wt. 1) | Rank 2 (Wt. 0.5) | Rank 3 (Wt. 0.25) | Rank 4 (Wt. 0.125) | Median Arc Wt.
(Root node, no parent) | | 0 | | | | |
General Management | Systems Engineering | | 14 | 3 | 0 | 0 | 1
Procurement | Systems Engineering | | 0 | 0 | 1 | 0 | 0
Scope | Systems Engineering | | 5 | 0 | 0 | 0 | 0
Quality | Systems Engineering | | 0 | 0 | 0 | 0 | 0
Time | Systems Engineering | | 0 | 1 | 0 | 0 | 0
Cost | Systems Engineering | | 0 | 1 | 1 | 0 | 0
6.5.1.7 General Management Knowledge Area
Figure 61 provides the distribution of the SME arc ranking observations for the
GM KA. The results are also summarized below in Table 11. The complete data can be
found in Appendix B. Fourteen of seventeen SMEs agreed that the GM KA is a Root
Node. The only other arc any SME selected was the arc from the Scope KA to the GM
KA, with three SME selections. The three SMEs who selected the arc from the Scope KA
to the GM KA believed that knowledge of the scope of a program affects the knowledge
in overarching general management, including program communications, human
resources management, and budgeting and funding. However, these SMEs were in the
minority. A majority of the SMEs did not believe there is a CIR from the Scope KA to
the GM KA.
[Figure 61 is a bar chart of the counts of Root and Rank 1 through Rank 4 observations for the potential parent arcs into the GM KA.]
Figure 61 GM KA Ranking Observations
Table 11, GM KA Ranking Summary
First
Second
Root
Rank 1
(Wt. 1)
Rank 2
(Wt.
0.5)
Rank 3
(Wt.
0.25)
Rank 4
(Wt.
0.125)
Median Arc
Wt.
14
0
0
0
1
General
Management
0
0
0
0
0
General
Procurement Management
0
0
0
0
0
Scope
General
Management
3
0
0
0
0
Quality
General
Management
0
0
0
0
0
Time
General
Management
0
0
0
0
0
Cost
General
Management
0
0
0
0
0
Systems
Engineering
6.5.2 Probability Specification Calculation
With the KA2KA network structure constructed through the median weight
method, the last step in the construction of the DAPS Bayesian Network model involved
the specifications of the probabilities in the Knowledge Area Node CPTs. Using the
KA2KA network structure shown in Figure 46 and the KA2KAi+1 dynamic arcs, the
conversion method of Section 6.2 used for Knowledge Checkpoint Node probability
specifications is used again for the Knowledge Area Node probability specification
calculations.
The probabilities for the Knowledge Area Nodes are obtained by first calculating
the Varc for the Knowledge Area Nodes, including the KA2KA static arcs and the
KA2KAi+1 dynamic arcs. Equation 1 is used for this calculation, similar to the
Knowledge Checkpoint Node calculations. Table 12 to Table 18 below provide the KA Node Varc calculation results. Two Varcs are calculated for each KA Node: one at the Material Development Decision (MDD) KC, which, being the first KC, has no KA2KAi+1 arc; and one for all other KCs, which have KA2KAi+1 arcs.
Table 12, Time KA Varc Table

First | Second | MDD | Other KCs
Quality | Time | 0.5 | 0.333333
Procure | Time | 0.25 | 0.166667
General Management | Time | 0 | 0
Systems Engineering | Time | 0 | 0
Scope | Time | 0.25 | 0.166667
Cost | Time | 0 | 0
Time Prior | Time | 0 | 0.333333

Table 13, Cost KA Varc Table

First | Second | MDD | Other KCs
Procure | Cost | 0.571429 | 0.363636
Quality | Cost | 0.142857 | 0.090909
Time | Cost | 0.285714 | 0.181818
General Management | Cost | 0 | 0
Systems Engineering | Cost | 0 | 0
Scope | Cost | 0 | 0
Cost Prior | Cost | 0 | 0.363636

Table 14, Quality KA Varc Table

First | Second | MDD | Other KCs
Systems Engineering | Quality | 0.666667 | 0.4
Procure | Quality | 0.333333 | 0.2
General Management | Quality | 0 | 0
Time | Quality | 0 | 0
Cost | Quality | 0 | 0
Scope | Quality | 0 | 0
Quality Prior | Quality | 0 | 0.4

Table 15, Scope KA Varc Table

First | Second | MDD | Other KCs
Systems Engineering | Scope | 1 | 0.428571
General Management | Scope | 0 | 0
Procure | Scope | 0 | 0
Quality | Scope | 0 | 0
Time | Scope | 0 | 0
Cost | Scope | 0 | 0
Scope Prior | Scope | 0 | 0.571429

Table 16, Procurement KA Varc Table

First | Second | MDD | Other KCs
General Management | Procurement | 0.5 | 0.333333
Systems Engineering | Procurement | 0.25 | 0.166667
Scope | Procurement | 0.25 | 0.166667
Quality | Procurement | 0 | 0
Time | Procurement | 0 | 0
Cost | Procurement | 0 | 0
Procurement Prior | Procurement | 0 | 0.333333

Table 17, SE KA Varc Table

First | Second | MDD | Other KCs
General Management | Systems Engineering | 1 | 0.5
Procurement | Systems Engineering | 0 | 0
Scope | Systems Engineering | 0 | 0
Quality | Systems Engineering | 0 | 0
Time | Systems Engineering | 0 | 0
Cost | Systems Engineering | 0 | 0
Systems Engineering Prior | Systems Engineering | 0 | 0.5

Table 18, GM KA Varc Table

First | Second | MDD | Other KCs
Systems Engineering | General Management | 0 | 0
Procurement | General Management | 0 | 0
Scope | General Management | 0 | 0
Quality | General Management | 0 | 0
Time | General Management | 0 | 0
Cost | General Management | 0 | 0
General Management Prior | General Management | 0 | 1
Similar to the Knowledge Checkpoint Node calculations, Equation 4 is then used
with the associated conversion rules to calculate the probability specifications for each
possible combination of the parent Knowledge Area States. Figures 62 to 68 below show
the resulting CPTs entered into Netica for the KA Nodes at MDD. MDD is the first
Knowledge Checkpoint in the DAPS model and thus has no prior relationships
represented by the KA2KAi+1 arcs, resulting in a different CPT as compared to the KA
Nodes at all other Knowledge Checkpoints.
To illustrate how the probability specification tables are obtained, Figure 62 has
three columns for the three parents of Time_MDD, the Time KA Node at MDD:
Quality_MDD, Procurement_MDD, and Scope_MDD. From Table 12,
Varc(Quality_MDD) = 0.5, Varc(Procurement_MDD) = 0.25, Varc(Scope_MDD) = 0.25.
Adding the three Varcs together, and following the boundary condition rule for three
parent nodes described in Section 6.2, P(KA=Good|KAs) is calculated to be 0.95. In the
second row, Quality_MDD and Procurement_MDD are in a Good state while
Scope_MDD is in a Marginal state, resulting in P(KA=Good|KAs) = 0.5 + 0.25 = 0.75.
The boundary condition rule did not apply for the second row probability, since it did not
exceed the boundary condition limits.
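The same two rows can be reproduced directly from the stated rules (a brief verification sketch; values are in percent, and the three-parent boundary condition [5, 95] applies):

```python
varcs = {"Quality_MDD": 0.5, "Procurement_MDD": 0.25, "Scope_MDD": 0.25}
all_good = min(max(100 * sum(varcs.values()), 5), 95)                 # 95
scope_marginal = min(max(100 * (varcs["Quality_MDD"]
                                + varcs["Procurement_MDD"]), 5), 95)  # 75
print(all_good, scope_marginal)
```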
Figure 62 Time KA Node CPT at MDD
Figure 63 Cost KA Node CPT at MDD
Figure 64 Quality KA Node CPT at MDD
Figure 65 Scope KA Node CPT at MDD
Figure 66 Procurement KA Node CPT at MDD
Figure 67 SE KA Node CPT at MDD
Figure 68 GM KA Node CPT at MDD
The CPTs for each respective KA are the same for each Knowledge Checkpoint
instance after MDD. Figures 69 to 75 show the resulting CPTs entered into Netica for
the DAPS KA Nodes for ITR and all other KCs. These probability specification tables
contain one additional column (first column) compared to their counterparts for the MDD
KA Nodes, due to the additional parent node from the KA2KAi+1 relationships.
Figure 69 Time KA Node CPT at other KCs
Figure 70 Cost KA Node CPT at other KCs
Figure 71 Quality KA Node CPT at other KCs
Figure 72 Scope KA Node CPT at other KCs
Figure 73 Procurement KA Node CPT at other KCs
Figure 74 SE KA Node CPT at other KCs
Figure 75 GM KA Node CPT at other KCs
7 MODEL ANALYSIS
Twelve cases are presented in this chapter to demonstrate and analyze the
usefulness of the DAPS model. This chapter starts by providing an assessment guideline
of how the DAPS model is used to support acquisition decision making in the Case
Analyses. Hypothetical Case 1 and Case 2 used in Chapter 3, Bayesian Network
Prototype, are expanded here for the DAPS model analysis, applied at Milestone A.
Walkthroughs of the model input, output, and what-if analysis are provided for these two
cases. Cases 3–10 are additional hypothetical cases used to support the analysis of the
DAPS model. They are presented in a summary format.
Cases 11 and 12 are derived from real DoD programs used as analysis cases for
this research. Respectively, they are analyzed at Milestone B and Milestone C. The 12
cases presented in this chapter are outlined below, divided into hypothetical cases and
real-world cases.
Hypothetical Cases:
1) Case 1—Sensitivity Test 1.1 at Milestone A
2) Case 2—Sensitivity Test 2.1 at Milestone A
3) Case 3—Logistics Program at IOC
4) Case 4—ERP Program at FOC
5) Case 5—Research Program at SFR
6) Case 6—Program with High Risk Report at CDR
7) Case 7—Program with Moderate Risk Report at CDR
8) Case 8—Program with Low Risk Report at CDR
9) Case 9—COTS Integration Program at ASR
10) Case 10—Expanded Deployment Program at PRR
Real-World Cases:
11) Case 11—RDT&E Program A to Milestone B
12) Case 12—Program B Major Release Program to Milestone C
A summary of the model analysis based on the cases is presented at the end of the
chapter.
7.1 Assessment Guideline
DAPS is an analytic model that assesses program performance in subject matter
Knowledge Areas and ascertains the likelihood for success at Knowledge Checkpoints.
It is based on the observations of evidence already being collected through acquisition reviews and oversight. DAPS has the potential to aid decision makers in holistically and logically processing mountains of evidence to support their acquisition decision making. This section discusses how DAPS could be used in the acquisition process to assess a program's likelihood for success to support decision making.
The highest level of DAPS model output is the probability of success at the
Knowledge Checkpoint (KC) Nodes, indicating the likelihood of success achieved based
on the program knowledge (certainty) level attained. This highest level DAPS model
output is the cumulative program assessment metric to support decision making at
Knowledge Checkpoints, aided by the measurements at the second level Knowledge
Areas.
Three alternative views are recommended to observe this top-level output of
DAPS. First is simply the Probability of Success at the Knowledge Checkpoint,
P(KC=Success).
The second recommended view is the translation of the Probability of Success at
Knowledge Checkpoint Nodes into a “Success Factor,” or the ratio of likelihood of
success as compared to failure at each Knowledge Checkpoint. This recommendation is
intended to help decision makers better comprehend the chance for success in terms of
ratios, illustrating the odds that the program is more likely to succeed than fail. The
Success Factor is calculated by taking the ratio of the Knowledge Checkpoint Node’s
Probability of Success over the probability of failure, as represented by Equation 5
below:
Equation 5, Success Factor

$$\text{Success Factor} = \frac{P(KC=\text{Success})}{P(KC=\text{Failure})}$$
The Success Factor is presented in a format similar to the Safety Factor, which is
commonly used in engineering applications as a simple metric to determine the structural
capacity of a component or system beyond its expected load. This format is also similar
to the widely used EVMS metrics of Cost Performance Index (CPI) and Schedule
Performance Index (SPI). A Success Factor above 1 indicates that the program is more
likely to succeed than fail, while a Success Factor of less than 1 indicates that the
program is less likely to succeed than fail.
The third alternative view is the use of adjectival ratings (DoD 2011) to describe
the Knowledge Checkpoint assessment level. Table 19 provides the range of Success
Factors used for the case analysis, their respective P(KC = Success) ranges, their
associated adjectival ratings and risk levels, as well as the prescriptive recommended
decisions for the respective range and rating. The ranges and ratings recommended in
Table 19 reflect a risk attitude based on heuristics drawn from safety factor applications.
Each organization or decision maker would be able to change the ranges and associated
ratings based on their own risk attitude.
Table 19, Success Factor Table

P(KC = Success) | Success Factor | KC Assessment Level | Recommended Decision
> 90% | > 9.0 | Outstanding (Very Low Risk) | Proceed, consider decreased oversight
75%-90% | 3.0-9.0 | Good (Low Risk) | Proceed
60%-75% | 1.5-3.0 | Acceptable (Moderate Risk) | Proceed w/caution
44.4%-60% | 0.8-1.5 | Marginal (High Risk) | Delay or Corrective Action
< 44.4% | < 0.8 | Unacceptable (Very High Risk) | Corrective Action or Shut-Down
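A minimal sketch combining Equation 5 with the ranges of Table 19 (thresholds copied from the table; an organization with a different risk attitude would substitute its own):

```python
def success_factor(p_success: float) -> float:
    """Equation 5: the odds of success at a Knowledge Checkpoint."""
    return p_success / (1.0 - p_success)

def kc_assessment(p_success: float):
    """Map P(KC=Success) to the Table 19 adjectival rating and decision."""
    sf = success_factor(p_success)
    if sf > 9.0:
        return sf, "Outstanding (Very Low Risk)", "Proceed, consider decreased oversight"
    if sf >= 3.0:
        return sf, "Good (Low Risk)", "Proceed"
    if sf >= 1.5:
        return sf, "Acceptable (Moderate Risk)", "Proceed w/caution"
    if sf >= 0.8:
        return sf, "Marginal (High Risk)", "Delay or Corrective Action"
    return sf, "Unacceptable (Very High Risk)", "Corrective Action or Shut-Down"

# Case 1 at Milestone A (Section 7.2): P(Success) = 55.8%.
print(kc_assessment(0.558))   # Success Factor ~1.26, Marginal (High Risk)
```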
In addition, the decision maker may observe the predicted Probability of Success
measurements or Success Factors at future Knowledge Checkpoints, especially the Full
Operating Capability (FOC) Knowledge Checkpoint—the final milestone. A Success
Factor of greater than 1 at FOC, indicating that success is more likely than failure as the
ultimate program outcome, would help to support the decision to proceed. A Success
Factor of less than 1, indicating that failure is more likely than success as the ultimate
program outcome, would help support the decision for Delay, Corrective Action, or Shut-Down. Depending on the observations of evidence, the predicted Probability of Success
at future Knowledge Checkpoints may indicate a different trend for success as compared
to the assessment at the current Knowledge Checkpoint. It provides an additional insight
into the program.
7.2 Case 1—Sensitivity Test 1.1 at Milestone A
7.2.1 Model Input
The intent of this case analysis is to test the sensitivity of the model to extreme
but realistic conditions and analyze the result of conflicting evidence of program success.
The case presents a hypothetical program where program management, budgeting, and
funding support are strong, along with an outstanding cost estimate, while
contracting/procurement actions are proceeding with adequate performance. However,
staffing is determined to be inadequate. The program also has not developed a SEP or any
architecture artifacts. Quality Risk is high due to the lack of technology maturity. This
case is applied at Milestone A, and the DAPS model is being used to support the
Milestone Decision Authority (MDA)’s milestone decision to proceed or not to proceed.
The specific Evidence Node observations in DAPS follow:
1. Acceptable Business Case
2. Unacceptable Risk Report (scope) due to no architecture development to
adequately define the program scope
3. Missing, and therefore unacceptable, Systems Engineering Plan (SEP)
4. Acceptable Procurement progress and output—Acceptable Acquisition Strategy
5. Acceptable IMP and schedule Progress with acceptable schedule risk
6. Outstanding Program Charter
7. Outstanding Budgeting and Funding
8. Unacceptable Manning/Staffing
9. Outstanding decision outcomes through the Investment Decision Memorandums
(IDM)
10. Unacceptable Quality Risk Report due to technology maturity issues
11. Outstanding Cost Estimates
12. Milestone Acquisition Decision Memorandum (ADM) is unobserved since
decision has not been made.
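For reference, the findings listed above can be written as a simple mapping of Evidence Node to observed state, which is the form in which observations are entered into the model (node names here are shortened and illustrative, not the exact Netica node names; the full Figure 76 run also includes a few additional observed evidence nodes):

```python
case1_milestone_a_observations = {
    "Business_Case":            "Acceptable",
    "RiskRep_Scope":            "Unacceptable",  # no architecture development
    "SEP":                      "Unacceptable",  # Systems Engineering Plan missing
    "Acquisition_Strategy":     "Acceptable",
    "IMP_and_Schedule":         "Acceptable",
    "Program_Charter":          "Outstanding",
    "Budgeting_Funding":        "Outstanding",
    "Staffing":                 "Unacceptable",
    "Investment_Decision_Memo": "Outstanding",
    "RiskRep_Quality":          "Unacceptable",  # technology maturity issues
    "Cost_Estimates":           "Outstanding",
    # Milestone ADM left unobserved: the decision has not yet been made.
}
```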
7.2.2 Model Output
The model’s Evidence Node observation inputs as well as the Knowledge Area
Node and the Knowledge Checkpoint Node results are shown in Figure 76. The
Probability of Success measure at this Knowledge Checkpoint, as indicated by the
Milestone A Knowledge Checkpoint Node, is at 55.8 %. This is the result of the model
even with only four unfavorable observations as compared to twelve favorable. The
program’s knowledge in the Time KA, the Cost KA, the Procurement KA, and the GM
KA are likely to be good; while the Scope KA, the Systems Engineering KA, and the
Quality KA are likely to be marginal.
[Figure 76 is the Netica output for Case 1 at Milestone A, showing the MSA Knowledge Checkpoint Node at 55.8% Success and 44.2% Failure; the Knowledge Area beliefs (Good) of Scope 37.0%, Time 79.6%, Quality 41.4%, Cost 99.9%, Procurement 87.6%, SE 36.6%, and GM 96.6%; and the observed states of the Milestone A Evidence Nodes listed above, with the Milestone A ADM left unobserved.]
Figure 76 Case 1 Sensitivity Test 1.1
The Probability of Success measurement at Milestone A is derived from the
Scope, Quality, Time, and Cost KA measurements. Although the evidence at this
Knowledge Checkpoint strongly supports that the program has attained Good knowledge
in the Time KA at 79.6 %, and in the Cost KA at 99.9 %, the evidence does not support
the same argument for the Quality KA and the Scope KA, measured only at 41.4 % Good
and 37 % Good, respectively. From the elicitation of the expert knowledge conducted in
the research, the DAPS Model specified the weighted influences of the Quality KA and
the Scope KA to be twice as strong as the weighted inferential forces of the Time and
Cost KAs, producing the 55.8 % Success measurement for Milestone A Knowledge
Checkpoint.
Table 20 outlines the Probability of Success, P(Success), for Case 1 at each of the
15 Knowledge Checkpoints and their respective Success Factors, based on the
observation inputs at Milestone A.
Table 20, Case 1 DAPS Model Output

KC | P(Success) | Success Factor
MDD | 67.4 | 2.067484663
ITR | 67.1 | 2.039513678
ASR | 64.5 | 1.816901408
MSA | 55.8 | 1.262443439
SRR | 56.3 | 1.288329519
SFR | 56.9 | 1.320185615
PreED | 56.4 | 1.293577982
MSB | 55.2 | 1.232142857
PDR | 53.9 | 1.169197397
CDR | 52.8 | 1.118644068
TRR | 51.9 | 1.079002079
MSC | 51.2 | 1.049180328
PRR | 50.8 | 1.032520325
IOC | 50.5 | 1.02020202
FOC | 50.3 | 1.012072435
Based on the Success Factor of 1.26 at Milestone A, the Knowledge Level of the
acquisition program is rated as Marginal with a recommended action of “Delay or
Corrective Action.” However, the fact that the future Success Factors past Milestone A
are all above 1 bodes well for this program, indicating that the program contains a solid
foundation for possible future success. Within the DAPS model, this can be attributed to
the high GM KA and Cost KA results. The GM KA acts as the root node in each
Knowledge Checkpoint instance computation, and has a strong influence on the other six
Knowledge Areas. The Cost KA is the only leaf node within the Knowledge Area
network structure and is a strong indicator of the adequacy of the other Knowledge
Areas.
With the Marginal rating and recommendation of “Delay or Corrective Action,” the available evidence is not sufficient to firmly defend either a favorable decision to proceed or an unfavorable decision to shut down the program. However, the predicted future Success Factors indicate that the available observations of evidence support a likelihood of eventual success. Based on this DAPS assessment, the MDA would be advised to delay the Milestone A decision until the SEP and architecture artifacts are adequately developed. At that point, the program could be reassessed based on the developed artifacts and on the program’s approach to addressing the staffing shortage and technology maturity issues.
Additionally, the phenomenon of the Probability of Success rising from Milestone A to SFR before gradually decreasing to FOC, as shown in Table 20, is worth discussing. The Probability of Success at the 15 Knowledge Checkpoints throughout the acquisition lifecycle is plotted in Figure 77.
Figure 77 Case 1 P(Success) at Knowledge Checkpoints
This phenomenon in Case 1 is attributed to the near-certainty measurements for the GM KA and Cost KA Nodes at Milestone A, which are 96.6% and 99.9% likely to be Good, respectively. These near-certainty measurements were coupled with low measurements for the Scope KA, Quality KA, and SE KA Nodes, which are 37%, 41.4%, and 36.6% Good, respectively. Without new observations at the following Knowledge Checkpoints, the probabilities among the KA Nodes gradually rebalance until a steady state is reached: the measurements for the Scope KA, Quality KA, and SE KA increase while the other KAs decrease. Furthermore, the inferential forces of the Quality KA and Scope KA Nodes on the P(Success) at each Knowledge Checkpoint are twice as strong as those of the Cost KA and the Time KA. Thus, the increase in the Quality KA and Scope KA had more impact on the Probability of Success measurement than the decrease in the Time KA and Cost KA.
Figure 78 through Figure 81 below, in addition to Figure 76, provide the DAPS Model output from Netica to illustrate the Bayesian Network probability calculation progression from Milestone A to Milestone B. From Milestone A to Pre-ED, the likelihoods of Good Quality and Scope KAs increase while the Cost KA and Time KA gradually decrease. This causes the overall P(Success) measurements at the Knowledge Checkpoints to increase slightly from Milestone A to SFR before gradually decreasing until FOC.
Figure 78 Case 1 DAPS Output at SRR
Figure 79 Case 1 DAPS Output at SFR
Figure 80 Case 1 DAPS Output at PreED
Figure 81 Case 1 DAPS Output at MSB
The DAPS Success Factor Profile is provided in Figure 82. The Success Factor profile summarizes the output of the DAPS analysis in Success Factor form. It can be compared with the P(Success) plot in Figure 77; together they provide alternative views of the data in Table 20.
Figure 82 Case 1 DAPS Success Factor Profile
7.2.3 What-If Analysis
Prior to the actual Milestone A Review, the program manager might ask the question, “What if the Milestone A Review was delayed beyond the threshold date for a short period in order to develop the SEP and the architecture to an adequate level? What would that do to the Probability of Success measurement at Milestone A and beyond?” Figure 83 provides the Milestone A output from DAPS if the SEP rating and the Scope risk level become Acceptable, while the schedule progress becomes Unacceptable due to the missed milestone. This what-if scenario assumes all other observations of evidence for this case remain the same.
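Mechanically, a what-if analysis of this kind amounts to changing the hard findings on the relevant Evidence Nodes and re-querying the network. The dissertation performs this in Netica; purely to illustrate the mechanics, the sketch below uses the open-source pgmpy library (class names as of pgmpy 0.1.x) with a hypothetical two-node fragment. The node names and CPT values are illustrative assumptions, not the DAPS parameters.

    # Illustrative only: a toy evidence-node -> KA-node fragment, not the DAPS network.
    # Changing the finding on the evidence node and re-querying shows the what-if mechanics.
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    states = ["Outstanding", "Acceptable", "Unacceptable"]
    model = BayesianNetwork([("SEP", "SE_KA")])  # hypothetical fragment
    model.add_cpds(
        TabularCPD("SEP", 3, [[0.2], [0.5], [0.3]], state_names={"SEP": states}),
        TabularCPD("SE_KA", 2,
                   [[0.90, 0.70, 0.20],   # P(SE_KA = Good | SEP state), illustrative numbers
                    [0.10, 0.30, 0.80]],  # P(SE_KA = Marginal | SEP state)
                   evidence=["SEP"], evidence_card=[3],
                   state_names={"SE_KA": ["Good", "Marginal"], "SEP": states}),
    )
    infer = VariableElimination(model)

    # Baseline finding vs. what-if finding on the same evidence node.
    for finding in ["Unacceptable", "Acceptable"]:
        posterior = infer.query(["SE_KA"], evidence={"SEP": finding})
        print(finding, posterior.values)  # posterior over [Good, Marginal]

In the full DAPS model, the analogous step is to set the SEP and Scope risk findings to Acceptable and the schedule-progress finding to Unacceptable, then read the updated MSA_KC posterior, which is the result reported in Figure 83.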
Figure 83 Case 1 (What-if Analysis) at Milestone A
As shown in Figure 83, if the program manager worked to complete the missing artifacts and delayed the Milestone A Review beyond the acceptable range, the Probability of Success at Milestone A would have improved from 55.8% to 71.6%, which would raise the Success Factor from 1.26 to 2.52, roughly doubling it. This Success Factor would have changed the Knowledge Level rating from Marginal to Acceptable and the recommended decision from “Delay or Corrective Action” to “Proceed with Caution.”
Consequently, if the program manager delayed the Milestone A Review until the SEP and the architecture artifacts were completed, the program manager would have provided the MDA better evidence to support a favorable decision to proceed, as compared to the original scenario. Even though falling behind schedule is not desirable, the what-if scenario with the Acceptable rating provided the MDA just enough evidence of program maturity and knowledge certainty to allow the program to “Proceed with Caution.” A graph showing the what-if change in the DAPS Success Factor Profile is provided in Figure 84.
Figure 84 Case 1 DAPS Success Factor Profile with What-if Scenario
7.3 Case 2—Sensitivity Test 2.1 at Milestone A
7.3.1 Model Input
The intent of Case 2 is to test the sensitivity of the model to another extreme but realistic set of conditions and to observe the effect of conflicting evidence on program success. Case 2, also used in Chapter 3, presents a program that has provided evidence of exceptional understanding and approach in many of the acquisition artifacts, including the Acquisition Strategy, Systems Engineering Plan (SEP), Integrated Master Schedule (IMS), and cost estimates. The program also adequately developed the Business Case and the Program Charter, leading to an acceptable Milestone A Investment Decision Memorandum (IDM) from the Investment Review Board (IRB). However, the program funding has been cut completely due to priority issues, with no alternative funding sources, leading to unacceptable risks for scope, time, cost, and quality. Furthermore, the majority of the program staff have been reallocated to another, higher-priority program. Generally, the stakeholders and sponsors like the goal of the program, but it is considered nice to have rather than a priority in the lean fiscal environment. The specific Case 2 observations of evidence are outlined below:
1. Acceptable Business Case
2. Outstanding SEP
3. Unacceptable Time Risk, due to non-executable schedule with no funding
4. Unacceptable Scope Risk, due to inability to continue scope management with no
funding
5. Unacceptable Cost Expenditure due to inability to sustain the expenditure rate
with no funding
6. Outstanding Cost Estimates
7. Unacceptable Cost Risk, due to no funding
8. Unacceptable Budgeting and Funding
9. Unacceptable Staffing for the program
10. Acceptable Program Charter
11. Acceptable IDM
12. Outstanding Acquisition Strategy
13. Unacceptable Quality/Performance Risk, due to no funding to achieve the desired
quality
14. Outstanding Integrated Master Schedule (IMS) development
15. Unacceptable IMS Progress projection due to inability to fund the contractor to
meet the schedule baseline
7.3.2 Model Output
The Case 2 Evidence Node inputs, as well as the resulting Knowledge Area Node and Knowledge Checkpoint Node measurements, are shown below in Figure 85. The Program Success measure at this Knowledge Checkpoint, as indicated by the MSA_KC Node, has a Probability of Success of 30.3%. The Measurable Knowledge Areas—Time KA, Quality KA, Cost KA, and Scope KA—are all likely to be Marginal. The Enabling Knowledge Areas—Procurement KA and SE KA—are both only slightly more likely to be Good, while the GM KA is more likely to be Marginal.
Figure 85 Case 2 Sensitivity Test 2.1
The Probability of Success measurement at Milestone A for Case 2 strongly suggests that the program has not achieved the knowledge certainty necessary to proceed to the next phase. Although the program has shown exceptional understanding and approach in many observations, the unavailability of funding and staffing ultimately results in unacceptable risks in all Measurable Knowledge Areas of the program. Table 21 below outlines the Probability of Success and the respective Success Factor at each of the 15 Knowledge Checkpoints.
Table 21, Case 2 DAPS Model Output

KC      P(Success)   Success Factor
MDD     53           1.12766
ITR     48.9         0.956947
ASR     41.2         0.70068
MSA     30.3         0.43472
SRR     38.6         0.628664
SFR     42.1         0.727116
PreED   44.3         0.795332
MSB     45.9         0.848429
PDR     47.2         0.893939
CDR     48.1         0.926782
TRR     48.8         0.953125
MSC     49.2         0.968504
PRR     49.5         0.980198
IOC     49.7         0.988072
FOC     49.8         0.992032
A Success Factor of 0.43 at Milestone A warrants an Unacceptable overall Knowledge Level rating and a recommended decision of “Corrective Action or Shut-Down.” The Success Factor being under 1 for all Knowledge Checkpoints after Milestone A provides additional supporting data for an unfavorable decision. A Corrective Action of allocating funds and staff to this program is one possible alternative. Tabling or shutting down the program until the higher-priority programs are accomplished is another. Since this program is desired but not a high priority, it should be shut down until a later date, when the fiscal environment is friendlier to nice-to-have programs.
The Case 2 DAPS Success Factor Profile is shown in Figure 86. The Milestone A observations caused a deep dip in the Success Factor Profile, dragging all future Knowledge Checkpoint Success Factor predictions under 1, indicating that the program is unlikely to achieve ultimate success at these future Knowledge Checkpoints.
Figure 86 Case 2 DAPS Success Factor Profile
7.3.3 What-if Analysis
Since the program is doing very well other than funding and staffing, what is the Probability of Success if the program is funded to an acceptable level? Would it perhaps be a worthwhile investment to build momentum and positivity within the organization? Figure 87 below provides the updated DAPS Model with acceptable funding, which changed multiple observations of evidence, outlined below (a short sketch of this evidence update follows the list):
1. RiskRep_Quality from Unacceptable to Acceptable
2. IMP_Progress from Unacceptable to Acceptable
3. Budgeting and Funding from Unacceptable to Acceptable
4. Cost_Expenditure from Unacceptable to Acceptable
5. RiskRep_Cost from Unacceptable to Acceptable
6. RiskRep_Time from Unacceptable to Acceptable
7. RiskRep_Scope from Unacceptable to Acceptable
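In practice, this what-if amounts to overriding the seven findings above while leaving the rest of the Case 2 evidence untouched. A minimal plain-Python sketch of that update is shown below; the node names follow the Milestone A naming used in the model, and the sketch only prepares the findings (it is not the Netica implementation).

    # The seven Case 2 findings affected by the funding cut (baseline states per Section 7.3.1);
    # all other Case 2 findings (SEP, IMS, cost estimates, staffing, etc.) are left untouched.
    funding_driven_nodes = [
        "Budgeting_Funding_MSA", "Cost_Expenditure_MSA", "IMP_Progress_MSA",
        "RiskRep_Quality_MSA", "RiskRep_Cost_MSA", "RiskRep_Time_MSA", "RiskRep_Scope_MSA",
    ]
    baseline_findings = {node: "Unacceptable" for node in funding_driven_nodes}

    # What-if: acceptable funding flips each of these findings to Acceptable (items 1-7 above).
    whatif_findings = {node: "Acceptable" for node in funding_driven_nodes}

Re-entering the what-if findings into the network yields the result shown in Figure 87, where the Probability of Success at Milestone A rises from 30.3% to 93.9%.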
Figure 87 Case 2 (What-if analysis) at Milestone A
The acceptable funding dramatically changed the measurements in the DAPS model, significantly improving the Probability of Success from 30.3% to 93.9%. This changed the Success Factor measurement from 0.434 to 15.39, updating the Knowledge Level rating from Unacceptable to Outstanding and changing the associated recommended decision from “Corrective Action or Shut-Down” to “Proceed, consider decreased oversight.” In addition, the Success Factor at FOC also changes from below 1 to above 1, indicating that the program is now more likely to achieve ultimate success. It might be worthwhile for the organization to invest in the program presented in Case 2 due to the high likelihood of success. Figure 88 below provides the Success Factor Profile with the what-if scenario, visualizing the dramatic change in program success once funding is provided.
Figure 88 Case 2 DAPS Success Factor Profile with What-if Scenario
7.4 Cases 3–10
Cases 3–10 are hypothetical DBS program cases. A variety of scenarios are
presented to demonstrate the behavior and performance of the DAPS model. A high-level
description and discussion of each case and the DAPS model inputs/outputs in Netica
(Norsys 2010) are provided. The model outputs for all hypothetical Cases 1–10 are
summarized in Section 7.4.7.
7.4.1 Case 3—Logistics Program at IOC
Case 3 is a Logistics Program entering Initial Operating Capability (IOC) with acceptable Cost and Schedule performance. However, one critical material deficiency was identified in the latest Trouble Report, which will prevent the program from meeting a critical requirement in the Requirements Specification. Many Evidence Nodes are unobserved because they are not required at this checkpoint. Figure 89 below provides the DAPS model input and output for Case 3 at IOC.
Figure 89 Case 3 at IOC
The DAPS model Probability of Success measurement is 40% with a Success
Factor of 0.67 at IOC, and a predicted Success Factor of 0.838 at FOC. This would result
in an assessment rating of Unacceptable and recommended decision of “Corrective
Action or Shut-Down.” Potential ways ahead could include finding an acceptable
mitigation for the critical issue or shutting down the program.
7.4.2 Case 4—ERP Program at FOC
The hypothetical program for Case 4 has delivered specified requirements at an
acceptable level entering Full Operating Capability (FOC). However, the cost and
schedule performances to reach this point were rated Unacceptable. Furthermore, budget
cuts have made the program unsustainable going into the future, raising concerns about
quality risk and the ability to execute the LCSP and meet the APB FOC requirements.
Figure 90 below provides the DAPS model input/output for Case 4 at FOC.
Figure 90 Case 4 at FOC
The DAPS model Probability of Success measurement is 10.7%, with a Success Factor of 0.12 at FOC. This would result in an assessment rating of Unacceptable and a recommended decision of “Corrective Action or Shut-Down.” Potential ways ahead could include securing adequate funding for operation and sustainment, while accounting for the program’s history of cost growth and schedule delays, which will likely continue into the future. Alternatively, the program could be shut down.
7.4.3 Case 5—Research Program at SFR
Case 5 is a research program prototyping a new capability entering the System Functional Review (SFR). Although the program is making expected progress, it has not adequately defined the user requirements for the new capability or solidified the need for it. Figure 91 below provides the DAPS model input and output for Case 5 at SFR.
Figure 91 Case 5 at SFR
The DAPS model Probability of Success measurement is 60.7%, with a Success Factor of 1.54 at SFR, and a predicted Success Factor of 1.02 at Full Operating Capability (FOC). This would result in an assessment rating of Acceptable and a recommended decision to “Proceed with Caution.” A potential decision could be to continue the prototype period up to the Pre-Engineering Development Review (Pre-ED) or Milestone B (MSB) before making the final investment determination to continue with the program or not.
7.4.4 Cases 6–8—Programs with High, Moderate, and Low Risk Reports at
CDR
The majority of Evidence Nodes were evaluated as Acceptable for this hypothetical program entering Critical Design Review (CDR). Cases 6–8 present three separate scenarios for the program, in which the Risk Reports for the direct Knowledge Areas (scope, quality, time, and cost) were observed as Unacceptable, Acceptable, and Outstanding, respectively. Figures 92–94 below provide the DAPS model input and output for Cases 6–8, assessed at CDR.
Figure 92 Case 6 (High Risk) at CDR
Figure 93 Case 7 (Moderate Risk) at CDR
Figure 94 Case 8 (Low Risk) at CDR
Case 6 with the Unacceptable Risk Report ratings provided a Probability of
Success measurement of 40.3% and Success Factor of 0.68, with a predicted Success
Factor of 1.062 at FOC. This would result in an assessment rating of Unacceptable and a
recommended decision of “Corrective Action or Shut-Down.”
Case 7 with the Acceptable Risk Report ratings provided a Probability of Success
measurement of 94.2% and Success Factor of 16.24, with a predicted Success Factor of
1.257 at FOC. This would result in an assessment rating of Outstanding and a
recommended decision to “Proceed with consideration for decreased oversight.”
Case 8 with the Outstanding Risk Report ratings provided a Probability of
Success measurement of 97.9% and Success Factor of 46.62, with a predicted Success
Factor of 1.273 at FOC. This would result in an assessment rating of Outstanding and
recommended decision to “Proceed with consideration for decreased oversight.”
Case 7, with the Acceptable Risk Report ratings, provides a baseline for these three cases, where all observations of evidence are corroborative, culminating in a high Probability of Success measurement. Case 8’s Outstanding Risk Report ratings, compared with Case 7’s Acceptable ratings, produced only a minor increase in the Probability of Success measurement, because the measurement was already high given the abundance of corroborating evidence of program success. When comparing the Success Factors for Cases 7 and 8, however, 16.24 versus 46.62 (94.2/5.8 versus 97.9/2.1), one can better appreciate what the slight increase in Probability of Success means when translated to a ratio.
Case 6, with the Unacceptable Risk Report ratings, when compared to the output for Case 7, significantly lowered the Success Factor from 16.24 to 0.68. Case 6 contained four unfavorable observations from the Unacceptable Risk Report ratings and fifteen slightly favorable observations from the other Acceptable ratings. The Success Factor difference between Case 6 and Case 7 reveals the immense impact the Unacceptable (High Risk) ratings would have on a program, as well as the effects of conflicting evidence.
7.4.5 Case 9—COTS Integration Program at ASR
Case 9 is a program procuring COTS software with known costs entering the Alternative System Review (ASR). The program is adequately staffed, and the risk of additional costs is low. However, the Plan of Actions and Milestones (POA&M) to finalize the integration and production was rated Unacceptable due to unrealistic durations and a lack of detail. The Risk Report also indicates a high schedule risk. Figure 95 below provides the DAPS model input and output for Case 9 at ASR.
Figure 95 Case 9 at ASR
The DAPS model Probability of Success measurement is 62.4%, with a Success Factor of 1.66 at ASR, and a predicted Success Factor of 1.004 at Full Operating Capability (FOC). The result is an assessment rating of Acceptable and a recommended decision to “Proceed with Caution.” A potential decision could be to continue the program with a requirement to build out a comprehensive schedule with realistic milestone dates to support the decision at the next Knowledge Checkpoint.
7.4.6 Case 10—Expanded Deployment Program at PRR
Case 10 is an Expanded Deployment Program entering Production Readiness Review (PRR). Many evidence items are not required by the decision authority; however, the required evidence items have been shown to be acceptable. Figure 96 below provides the DAPS model input and output for Case 10 at PRR.
Figure 96 Case 10 at PRR
The DAPS model Probability of Success measurement is 85.1% with a Success
Factor of 5.71 at PRR, and a predicted Success Factor of 2.106 at Full Operating
Capability (FOC). The result is an assessment rating of Good with a recommended
decision to “Proceed.” Since all observations of evidence are Acceptable, it is reasonable
to proceed with the program from the decision maker’s perspective.
7.4.7 Cases 1–10 Summary
The following Table 22 provides a summary of the hypothetical cases, Cases 1–
10:
Table 22, Cases 1-10 Summary

Case  Title                                      P(S)   Success Factor  SF @ FOC  KC Assessment  Decision Recommendation
1     Sensitivity Test 1.1 to Milestone A        55.8   1.26            1.01      Marginal       Delay or Corrective Action
2     Sensitivity Test 2.1 to Milestone A        30.3   0.43            0.99      Unacceptable   Corrective Action or Shut-Down
3     Logistics Program at IOC                   40     0.67            0.838     Unacceptable   Corrective Action or Shut-Down
4     ERP Program at FOC                         10.7   0.12            0.12      Unacceptable   Corrective Action or Shut-Down
5     Research Program at SFR                    60.7   1.54            1.02      Acceptable     Proceed w/caution
6     Program with High Risk Report at CDR       40.3   0.68            1.062     Unacceptable   Corrective Action or Shut-Down
7     Program with Moderate Risk Report at CDR   94.2   16.24           1.257     Outstanding    Proceed / decreased oversight
8     Program with Low Risk Report at CDR        97.9   46.62           1.273     Outstanding    Proceed / decreased oversight
9     COTS Integration Program at ASR            62.4   1.66            1.004     Acceptable     Proceed w/caution
10    Expanded Deployment Program at PRR         85.1   5.71            2.106     Good           Proceed
Cases 1–10 provide a variety of hypothetical situations to demonstrate the DAPS
model developed in this research. Case 1 and Case 2 provided two extreme but realistic
cases at Milestone A. They have been discussed in depth in Sections 7.2 and 7.3. Cases
3–10 provide additional scenarios to demonstrate the behavior and potential use of the
DAPS model to support acquisition decision making.
7.5 Case 11—RDT&E Program to Milestone B
7.5.1 Model Input
Program A is an RDT&E IT acquisition program intended to reduce operating costs. It also revolutionizes the business processes of the user community and has drawn substantial participation and interest. Program A achieved the Material Development Decision (MDD) fairly easily with satisfactory acquisition artifacts.
Program requirements were meticulously documented by the user community and then refined extensively by the Systems Engineering and Procurement communities at the System Functional Review (SFR) prior to becoming part of the contract solicitation. There are some funding risks due to budget cuts at SFR, but the program is funded as the first priority in the organization, so the risk level is acceptable. Pilots were completed prior to the acquisition with satisfactory quality results. However, the program has fallen significantly behind schedule, with high risk of further delays. No Risk Management process has been established and no risk report has been produced so far. One adequate government cost estimate has been developed. The Technical Review Report indicated that progress so far has been acceptable and that the program is ready to proceed to the next phase.
At Pre-Engineering Development Review (Pre-ED), an Independent Government Cost Estimate (IGCE) was constructed, showing that the program will cost more than the budget allocated. However, because this program is funded as the first priority, the shortfall is still manageable within the Program Manager’s own budget. Ironically, due to the immense interest, visibility, and participation, it has become a challenge to obtain stakeholder agreement, further delaying the timeline for the Request for Proposal (RFP) release. There have also been workload issues at the procurement office, which is overloaded with other procurement tasks. This procurement workload issue would become another driver of unacceptable schedule delays, providing evidence of both unsatisfactory program schedule progress and an overly optimistic schedule plan. However, the RFP itself was outstanding due to the extensive work put into it. Risk Management was first implemented at this stage of the program, providing the initial Risk Report to the Milestone Decision Authority (MDA). The schedule risk was reported as high, while the scope, quality, and cost risks were reported as moderate. The Pre-Engineering Development (Pre-ED) Acquisition Decision Memorandum from the MDA permitted the release of the RFP based on the acquisition requirements accomplished thus far and allowed the program to proceed to the next phase.
By Milestone B, schedule performance had fallen to an unacceptable level, which required that the schedule be re-baselined based on historical schedule performance. However, even with the re-baselined schedule, the ability to meet the new schedule is still in doubt due to the uncertain schedule delays of another IT program that must be completed in order to deploy Program A. There is good news, though: the Milestone B Independent Logistics Assessment (ILA) was rated Outstanding, and the Business Case and Program Charter also received outstanding reviews. The staffing shortage at the procurement office is also no longer an issue, since procurement resource demands will be minimal beyond Milestone B.
Currently, Program A has achieved Milestone B and is working toward Preliminary Design Review (PDR). The program manager is looking to use DAPS to combine the observations made thus far in order to examine program performance trends and forecast future program performance. The specific observations of evidence for Case 11 are shown in Figures 97 to 100, located in the next section, 7.5.2 Model Output, below.
7.5.2 Model Output
The DAPS model observations of evidence were entered sequentially at each Knowledge Checkpoint (KC) to incrementally observe the effects of the additional evidence. Figures 97 to 100 provide the Netica outputs at the specific Knowledge Checkpoints: Material Development Decision (MDD), System Functional Review (SFR), Pre-Engineering Development Review (Pre-ED), and Milestone B Review (MS B).
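This sequential entry can be pictured as accumulating a growing evidence set and re-running the inference after each checkpoint. The Python sketch below is only a minimal illustration of that pattern: the run_inference callable is a placeholder standing in for the actual Netica model, and the example findings echo a few of the Case 11 observations rather than the complete evidence set.

    # Sketch of the sequential-entry pattern: findings accumulate checkpoint by checkpoint
    # and the target Knowledge Checkpoint node is re-queried after each update.
    # Illustrative findings only; run_inference is a stand-in, not the DAPS/Netica model.
    checkpoint_findings = {
        "MDD":    {"Problem_Statement_MDD": "Acceptable"},
        "SFR":    {"Req_Spec_Func_SFR": "Outstanding", "IMP_Progress_SFR": "Unacceptable"},
        "Pre-ED": {"RFP_Final": "Outstanding", "Staffing_PreED": "Unacceptable"},
        "MS B":   {"ILA_Report_MSB": "Outstanding"},
    }

    def run_inference(target_node, evidence):
        """Placeholder for the real inference call (performed in Netica in this research)."""
        return 0.5  # stand-in value only

    evidence = {}
    for kc, findings in checkpoint_findings.items():
        evidence.update(findings)                      # earlier findings persist
        p_success = run_inference("MSB_KC", evidence)  # re-query the KC of interest
        success_factor = p_success / (1.0 - p_success)
        print(f"After {kc}: P(Success)={p_success:.3f}, SF={success_factor:.2f}")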
Figure 97 Case 11 DAPS Output at MDD
Not much information is generated at MDD, but with the available information,
DAPS calculated favorable results for each of the Knowledge Areas. The observations at
Material Development Decision (MDD) concluded that Program A has likely
accomplished the knowledge certainty needed to start the program, yielding a Probability
of Success measurement of 84%, a Success Factor of 5.25, a Knowledge Assessment of
Good, and a recommended decision to “Proceed.” This is consistent with the real-world
occurrence where the decision authority agreed with the program proposal and authorized
funds to start the acquisition.
Figure 98 Case 11 DAPS Output at SFR
Figure 98 provides the Case 11 DAPS output at SFR. The output at SFR is inclusive of the previous observations of evidence, updated by the observations shown in Figure 98. No Risk Reports were required by the Technical Review chair or the program manager, so the absence of Risk Report observations has no adverse effect. If the Risk Reports had been unexpectedly missing, the observations would instead be rated Unacceptable and would adversely affect the DAPS calculations. The result of the SFR observations, including the SFR Technical Review report from the Technical Review Authority, yielded a Probability of Success of 83.9%, a Success Factor of 5.211, a Knowledge assessment rating of Good, and a recommended decision to “Proceed.” This is consistent with the real-world occurrence where the inability to meet the stated schedule was noted, but the Program Manager proceeded with the acquisition since the program had accomplished the required tasks.
Figure 99 Case 11 DAPS Output at Pre-ED
At Pre-Engineering Development Review (Pre-ED), Program A received unfavorable schedule performance observations due to major delays, resulting in an unfavorable Time KA measurement of only 9.46% Good. Overall, the Probability of Success at Pre-ED dropped to 62.7%, as shown in Figure 99, with a Success Factor of 1.68. This updates the knowledge assessment rating for Program A to Acceptable at Pre-Engineering Development Review, with a recommended decision to “Proceed with Caution.” This recommendation is consistent with the real-world scenario, where the Milestone Decision Authority (MDA) noted the significant schedule delays and requested additional future schedule updates, but allowed Program A to proceed since the program had thoroughly accomplished the required tasks at this Knowledge Checkpoint.
Figure 100 Case 11 DAPS Output at MS B
Figure 100 provides the DAPS output at Milestone B from Netica. The schedule re-baseline, based on historical performance, helped to improve the certainty level of the Integrated Master Plan (IMP), which in turn improved the overall knowledge certainty of the Time KA. Although significant schedule risks remain, Program A has demonstrated
exceptional understanding and development in several other Knowledge Areas and
accomplished all required tasks, resulting in a Probability of Success of 86.2% at
Milestone B, with a Success Factor of 6.246, a Knowledge assessment rating of Good,
and a recommended decision to “Proceed.” This is consistent with the real-world
occurrence where the Investment Review Board (IRB) and the Milestone Decision
Authority (MDA) both authorized Program A to proceed to contract award and program
execution.
Table 23 below summarizes the results of DAPS Case 11 Analysis.
Table 23, Case 11 Results Summary

KC      Success Factor   Knowledge Level   Recommended Decision
MDD     5.25             Good              Proceed
SFR     5.211180124      Good              Proceed
Pre-ED  1.680965147      Acceptable        Proceed w/Caution
MS B    6.246376812      Good              Proceed
Table 24 below provides the complete DAPS Success Factor and Probability of Success profiles calculated after each Knowledge Checkpoint stage.
Table 24, Case 11 DAPS Output Profile
(each column pair gives the P(S) and Success Factor (SF) profile calculated after the indicated Knowledge Checkpoint's observations)

KC     After MDD        After SFR        After Pre-ED     After Milestone B
       P(S)    SF       P(S)    SF       P(S)    SF       P(S)    SF
MDD    84.0    5.25     85.1    5.71     84.0    5.25     83.7    5.13
ITR    76.4    3.24     78.9    3.74     76.3    3.22     75.9    3.15
ASR    69.0    2.23     74.1    2.86     69.1    2.24     68.2    2.14
MSA    62.9    1.70     72.5    2.64     64.0    1.78     62.6    1.67
SRR    58.5    1.41     75.2    3.03     62.8    1.69     61.4    1.59
SFR    55.4    1.24     83.9    5.21     70.6    2.40     70.8    2.42
PED    53.4    1.15     76.8    3.31     62.7    1.68     70.4    2.38
MSB    52.1    1.09     69.2    2.25     63.3    1.72     86.2    6.25
PDR    51.3    1.05     63.0    1.70     60.1    1.51     80.2    4.05
CDR    50.8    1.03     58.5    1.41     56.9    1.32     72.6    2.65
TRR    50.5    1.02     55.4    1.24     54.4    1.19     65.6    1.91
MSC    50.3    1.01     53.4    1.15     52.8    1.12     60.4    1.53
PRR    50.2    1.01     52.1    1.09     51.7    1.07     56.7    1.31
IOC    50.2    1.01     51.3    1.05     51.0    1.04     54.2    1.18
FOC    50.1    1.00     50.8    1.03     50.6    1.02     52.6    1.11
DAPS has consistently identified the Time KA as the problem area where the Program Manager should focus attention. The best course of action could be to achieve better schedule certainty by tying the schedule of the other, interdependent IT program to Program A to gain insight into the specific schedule dependencies and resource loading across both programs. Doing so would increase Program A's knowledge in the Time KA and improve the overall likelihood of success.
This is consistent with the real-world occurrence, where the Chief Engineer built
an Integrated Master Schedule (IMS) of all related programs, which prolonged the overall
schedule of Program A, but increased the certainty and knowledge in the Time KA. The
increased knowledge in the schedule also supported the risk assessment and mitigation
efforts, since the schedule baseline that the risk is assessed against is now more credible.
Program A has demonstrated the ability to complete the key Objective Quality Evidence (OQE) for program success, although not yet with adequate schedule certainty. Overall, DAPS indicates that the probability of successful program completion and achievement of FOC is favorable.
7.6 Case 12—Program B Major Release Program to Milestone C
7.6.1 Model Input
Program B was a major upgrade release for an existing software system. It was an urgent program to execute critical upgrades. However, it had problems with user community participation throughout development. The program was certified for MDD and Milestone B without issues due to the urgent need. However, many of the proper program management and systems engineering controls and the associated documentation were streamlined to support a shortened schedule. No Technical Reviews were conducted at any point, and Milestone A was skipped in favor of an immediate start. At MDD, the only evidence items available were the Problem Statement and the Material Development Decision (MDD) Memo.
No official Milestone B review was conducted. Unofficially, by this point, Scope/requirements documentation was not required. The Systems Engineering Plan was unexpectedly missing, but an acceptable Test and Evaluation Plan was delivered. Other plans were not required and not developed. Cost Estimates were adequate. Schedule estimates indicated a high level of risk due to the shortened time frame. The mitigation approach for the schedule risk was to hire more software developers and analysts to help support the schedule, which instead increased the risks in staffing and manning management. Risk Management was not institutionalized, so no risk reports existed for review. An existing contract was used and was acceptable. Ultimately, Milestone B was approved and program execution started.
PDR was not done, and CDR was not officially conducted. Unofficially, by that point, Program B still had not developed any technical plans other than the Test and Evaluation Plan. Requirements and design documentation were not reviewed. Architecture and BPR artifacts were nonexistent. The software itself was being developed and demonstrated to the government on a limited-exposure basis. Cost estimates were acceptable, and schedule progress was tracking to the baseline according to contractor-provided information. Unit testing demonstrated to the government provided acceptable results. The program was allowed to proceed to integration and testing.
At the time of TRR, funding had dried up due to major funding cuts. The
contractor work force was almost entirely laid off. The code developed thus far was
turned over to the government. Government review of the configuration documentation
yielded unacceptable results. Most of the required documentation was not sufficiently
developed. Government testing of the system also yielded unacceptable results,
invalidating the schedule progress so far as well as the past cost estimates.
Program B is now at a critical juncture. The Program Manager would like to
understand what went wrong and what can be done better in the future. The specific
observations of evidence for Case 12 can be found in Figures 101 to 104 located in the
next section, 7.6.2, Model Output, below.
7.6.2 Model Output
The DAPS model observations of evidence were entered sequentially at each Knowledge Checkpoint to incrementally observe the effects of the additional evidence. Figures 101 to 104 provide the Netica outputs at the specific Knowledge Checkpoints: Material Development Decision (MDD), Milestone B (MS B), Critical Design Review (CDR), and Test Readiness Review (TRR).
Figure 101 Case 12 DAPS Output at MDD
Only two evidence items were observed at MDD for Case 12, as shown in Figure 101: the Problem Statement for the program and the resulting MDD approval memo. The other Evidence Nodes were unobserved. The Probability of Success measurement at MDD was 69.5%, with a Success Factor of 2.28, a Knowledge assessment rating of Acceptable, and a recommended decision to “Proceed with Caution.” This is consistent with the real-world occurrence where the decision authority gave the go-ahead for the program. However, this decision was not based on much supporting evidence; there was only the Problem Statement to support the decision to go ahead and the MDD memo to confirm it. The decision authorities were looking to complete this program quickly and were ready to make the simple decision for a must-do program.
Figure 102 Case 12 DAPS Output at Milestone B
Figure 102 provides the Case 12 DAPS output at Milestone B. Many of the
Objective Quality Evidence items (OQEs) required under the DBS acquisition process were again
streamlined due to schedule concerns. Inexcusably, however, the Systems Engineering
Plan (SEP) required by the decision authority was still not developed. A Risk
Management process was still not required at this point and thus was not available for observation.
The Probability of Success measurement for Case 12 at Milestone B shown in Figure 102
is 28.6%, which translates to a Success Factor of 0.40, an Unacceptable knowledge
assessment rating, and a recommended decision of “Corrective Action or Shut-Down.”
In the real-world occurrence, a corrective action was taken at this juncture of Program B
to expeditiously hire software developers and analysts with the intent of supporting timelier
program execution. This action, however, did not satisfy the DAPS principle of
ensuring that program knowledge and certainty are adequately achieved before moving on
to the next program phase.
[Netica output: CDR_KC at Success 36.9 / Failure 63.1, with the Knowledge Area nodes and the CDR Evidence Nodes; the requirements and design product nodes (Req_Spec, IRS, SDD, IDD) and Staffing_CDR are observed as Unacceptable, while the schedule, cost, budgeting, test, and CDRL evidence nodes are observed as Acceptable.]
Figure 103 Case 12 DAPS CDR Output
At the juncture of the unofficial Critical Design Review (CDR), Program B’s
underperformance in documentation continued, with the contractor failing to provide adequate design
documents. The Program B contractor, however, provided evidence that its cost,
schedule, and system quality performance were on track, through its Earned Value
Management System (EVMS) metrics as well as its own test results and limited system
demonstrations. CDR is a systems engineering technical review whose purpose is to
establish the baseline system requirements and design. Thus, the absence of critical
requirements and design documents should have been a clear indication that the Program B
contractor had not met the requirements necessary to move on to the next phase of
the program. As shown in Figure 103, the Probability of Success of Program B at CDR is
36.9%, which results in a Success Factor of 0.58, a knowledge assessment rating of
Unacceptable, and a recommended decision of “Corrective Action or Shut-Down.” In
the real-world occurrence, even with the absence of critical documents, the official
Contract Data Requirements List (CDRL) deliverables were accepted by the government,
and the program was allowed to proceed to the next phase.
[Netica output: TRR_KC at Success 2.65 / Failure 97.3, with the Knowledge Area nodes and the TRR Evidence Nodes; nearly all observed evidence (requirements and design products, schedule, cost, budgeting, staffing, contractor test report, and CDRL review) is Unacceptable, with TEP_TRR observed as Acceptable.]
Figure 104 Case 12 DAPS Output at TRR
Figure 104 above provides the DAPS output at Test Readiness Review (TRR) for
Program B. At this juncture of the program, further government-guided integration
testing has shown that system development completion is severely behind the
reported progress. Code has been written but has not been adequately tested and verified
to meet government expectations, and clear specifications and designs were never
developed to guide the actual software implementation. The funding cut
further exacerbated the situation, making the cost and schedule baselines unachievable.
The Probability of Success measurement for Program B at TRR is now 2.7%, with a
Success Factor of 0.03, a knowledge assessment rating of Unacceptable, and a
recommended decision of “Corrective Action or Shut-Down.” In the real-world
occurrence, the Program B contractor was changed, and the whole program was restarted
by the new contractor.
Table 25 below summarizes the Case 12 DAPS results at the four Knowledge
Checkpoints.
Table 25, Case 12 Results Summary

KC     Success Factor   Knowledge Level   Recommended Decision
MDD    2.28             Acceptable        Proceed w/Caution
MS B   0.40             Unacceptable      Corrective Action or Shut-Down
CDR    0.58             Unacceptable      Corrective Action or Shut-Down
TRR    0.03             Unacceptable      Corrective Action or Shut-Down
Table 26 below outlines the complete DAPS Success Factor and Probability of
Success profiles calculated after each Knowledge Checkpoint stage.
Table 26, Case 12 DAPS Output Profile

                After MDD       After MS B      After CDR       After TRR
KC              P(S)    SF      P(S)    SF      P(S)    SF      P(S)    SF
MDD             69.5    2.28    68.8    2.21    68.9    2.22    68.7    2.19
ITR             66.3    1.97    65.3    1.88    65.5    1.90    65.2    1.87
ASR             62.3    1.65    60.7    1.54    60.9    1.56    60.4    1.53
MS A            58.7    1.42    55.8    1.26    56.2    1.28    55.4    1.24
SRR             55.9    1.27    50.9    1.04    51.6    1.07    50.2    1.01
SFR             53.9    1.17    45.4    0.83    46.4    0.87    44.2    0.79
Pre-ED          52.5    1.11    38.3    0.62    39.7    0.66    36.2    0.57
MS B            51.6    1.07    28.6    0.40    29.9    0.43    24.7    0.33
PDR             51.0    1.04    31.9    0.47    33.6    0.51    22.9    0.30
CDR             50.6    1.02    36.6    0.58    36.9    0.58    17.8    0.22
TRR             50.4    1.02    40.7    0.69    38.6    0.63     2.7    0.03
MS C            50.2    1.01    43.8    0.78    41.2    0.70    15.6    0.18
PRR             50.1    1.00    46.0    0.85    43.7    0.78    26.1    0.35
IOC             50.1    1.00    47.5    0.90    45.8    0.85    34.0    0.52
FOC             50.1    1.00    48.4    0.94    47.3    0.90    39.5    0.65

(P(S) in percent; SF = Success Factor; each column group gives the profile calculated with evidence entered through that Knowledge Checkpoint.)
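As a quick consistency check, the Success Factor column of Table 26 can be reproduced from the P(S) column using the same success-to-failure ratio; the short sketch below recomputes several entries of the After TRR profile.

# Recompute selected Success Factors in Table 26 from P(S) in percent (After TRR column).
ps_after_trr = {"MDD": 68.7, "SRR": 50.2, "MS B": 24.7, "CDR": 17.8, "TRR": 2.7, "FOC": 39.5}
for kc, ps in ps_after_trr.items():
    sf = ps / (100.0 - ps)                 # ratio of success probability to failure probability
    print(f"{kc}: P(S) = {ps:.1f}%, Success Factor = {sf:.2f}")
# Output matches the table: 2.19, 1.01, 0.33, 0.22, 0.03, 0.65.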
Program B, presented in Case 12, goes against the principles of Knowledge-Based
Acquisition embedded in the DAPS model. Knowledge Checkpoints were not
officially utilized on most occasions, and the decision makers allowed the program to
proceed without adequate completion of the Objective Quality Evidence items (OQEs) because of
program time pressure. The DAPS model indicates to the program manager that the
decisions made in Program B were made with a high level of uncertainty and risk. Without
properly ensuring that the program has adequately achieved the certainty and maturity
necessary to succeed in the next phase, the result is less controllable and
more likely to lead to program failure.
7.7 Model Analysis Summary
This chapter proposed an assessment guide to support DBS acquisition decision
making and presented 12 cases for model analysis. Hypothetical Cases 1 and 2 tested the
sensitivity of the DAPS model with extreme and conflicting evidence concerning
program success. Cases 3–10 provided additional hypothetical cases to consider a variety
of scenarios. Cases 11 and 12 were derived from real-world situations to demonstrate the
applicability of the model in practice.
These cases demonstrated the ability of the Defense Business System Acquisition
Probability of Success (DAPS) model to provide managers, engineers, and decision
makers with an analytical tool to assess the Probability of Success at Knowledge Checkpoints.
The case analyses show the model’s potential for use as a platform for decision analysis,
what-if analysis, and knowledge certainty/maturity analysis to support evidence-based and
Knowledge-Based Acquisition decision making.
DAPS was able to observe the Objective Quality Evidence items (OQEs) developed,
reviewed, and approved in acquisition programs, and to measure the knowledge certainty
achieved in the seven Knowledge Areas (cost, time, quality, scope, systems engineering,
procurement management, and general management). The Knowledge Area
measurements provided information about the certainty, or uncertainty, that the program has
matured enough in these subject matter Knowledge Areas to move on to the next phase of
the program. Based on the measurements of the seven Knowledge Areas, DAPS then
calculated the program’s Probability of Success at the 15 Knowledge Checkpoints of the
acquisition program life cycle. The Probability of Success measurements were then
converted to a Success Factor metric, the ratio of the program’s likelihood of success to its
likelihood of failure. Knowledge assessment ratings and recommended decisions were also
formulated to further aid DBS acquisition decision makers in understanding the DAPS
model’s Probability of Success measurements.
The 12 cases presented in this chapter, Model Analysis, showed the DAPS
model’s ability to meet the research objectives outlined in Section 1.3:
1. DAPS developed a probabilistic reasoning system using Bayesian Networks to
collectively draw inferences from available evidence of DBS acquisition. This
was demonstrated throughout the 12 case analyses.
2. DAPS modeled complex interrelationships within DBS acquisition to simulate
real-world interactions affecting program success. This was accomplished through
the KA2KA network structure as well as the dynamic KA2KAi+1 relationships.
The behaviors and the results of the complex interrelationship modeling were
demonstrated in the 12 case analyses and discussed in the chapter.
3. DAPS modeled dynamic relationships of DBS acquisition to enable the prediction
of future program success/failure. The dynamic relationships are modeled in
DAPS through the KA2KAi+1 arcs. The predictive properties of the DAPS model
are demonstrated throughout the 12 cases with the predicted P(Success) and
Success Factor at FOC and other future Knowledge Checkpoints (a simplified
propagation sketch follows this list).
4. DAPS incorporated risk management elements into the model. Risk identification
and analysis are factored into the criteria for the observation of evidence. Risk
Reports and Risk Management Plans were also modeled as Evidence Nodes for
observation.
5. DAPS incorporated evidence-based decision making into the model. The case
analyses demonstrated the DAPS model’s ability to utilize all available evidence
collectively to support a program’s acquisition decisions at Knowledge
Checkpoints. Both corroborative and conflicting evidence were used in the cases.
6. DAPS was tailored for Defense Business System (DBS) acquisition programs, but
is adaptable for IT and system programs in general. The case analyses
demonstrated the use of the DAPS model in a variety of scenarios, showing the
flexibility and adaptability of the model.
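As a simplified illustration of the dynamic (KA-to-next-KA) arcs referenced in objective 3 above, the sketch below propagates a Knowledge Area belief forward with no further evidence; the 0.85/0.15 persistence values are assumptions for illustration, not the elicited DAPS CPTs. The drift toward 50/50 at distant checkpoints mirrors the behavior of the far-future entries in Table 26.

def predict_forward(p_good, steps, persistence=0.85):
    # Propagate P(KA = Good) across future checkpoints using an assumed two-state
    # transition: P(Good at i+1 | Good at i) = persistence, P(Good at i+1 | Marginal at i) = 1 - persistence.
    trajectory = [p_good]
    for _ in range(steps):
        p_good = p_good * persistence + (1.0 - p_good) * (1.0 - persistence)
        trajectory.append(p_good)
    return trajectory

print([round(p, 3) for p in predict_forward(0.80, 6)])
# [0.8, 0.71, 0.647, 0.603, 0.572, 0.55, 0.535] -> the prediction attenuates toward 0.5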
DAPS is a promising way ahead to assess program success and associated risks,
support decision making, and gain further understanding of program management and
systems engineering.
8
CONCLUSION
In conclusion, the Defense Business System Acquisition Probability of Success
(DAPS) research demonstrated that an evidence-based, Bayesian Network model can be
developed to collectively draw inferences from the abundance of DBS acquisition
evidence, in a repeatable and structured manner, to reliably support acquisition decision
making. The DAPS model consists of observable evidence items, intermediate subject
Knowledge Areas, defense acquisition Knowledge Checkpoints, and the respective
Causal-Influence Relationships (CIRs) among them. The principles of Knowledge-Based
Acquisition and risk management are embedded in the DAPS model to support the
evidence-based inference of program success. The resultant DAPS model demonstrated
the ability to assess the program’s performance in subject matter Knowledge Areas and to
ascertain the program’s Probability of Success at Knowledge Checkpoints. The
Probability of Success measurement was furthermore used to calculate the Success
Factor, determine an assessment rating for the program, and provide an acquisition
decision recommendation.
Chapter 1 of the DAPS dissertation provided an introduction to the DAPS
research, presenting the Problem Statement, Motivation and Background, Research
Hypothesis and Goals, the Dissertation Contributions, as well as the overarching design
of the research. Chapter 2 discussed the expansive Literature Review conducted as part of
the DAPS research, including the technical foundations in evidential reasoning and
Bayesian Networks, as well as the application domain in defense acquisition,
project/program management, and systems engineering.
Chapter 3 presented a Bayesian Network Prototype based on the Naval PoPS
framework. The prototype model focused on the issue of complex interrelationship
modeling. It introduced the conversion criteria and method used in this dissertation for
probability specifications, and conducted two case analyses to demonstrate the potential
of using Bayesian Networks for complex interrelationship modeling with direct
comparison to the Naval PoPS model.
Chapter 4 provided discussion on the analysis and organization of the evidence
items produced during DBS acquisition and provided the evidential reasoning
foundations for the DAPS model. Chapter 5 presented a detailed discussion on the DAPS
model structure, including details on the three types of model nodes and the four types of
model arcs. Chapter 6 outlined the approach taken to elicit and implement the expert
knowledge into the DAPS model, embedding the expert knowledge into the DAPS
network structure and Conditional Probability Tables (CPTs).
Finally, Chapter 7 provided 12 additional case analyses to analyze the behavior of
the DAPS model and demonstrate its potential to support acquisition decision making.
The 12 case analyses showed the DAPS model’s ability to meet the research objectives
outlined in Chapter 1.
The contributions of the DAPS research from Section 1.4 are restated below:
1. Developed a quantitative system for the fusion of DBS acquisition
data/knowledge to aid decision makers in holistically and logically processing all
available data/evidence by:
a. Measuring performance of key areas of a program (Knowledge Areas)
b. Measuring success at a review/milestone (Knowledge Checkpoint)
c. Predicting future program success based on dynamic modeling
2. Conducted Knowledge Engineering of DBS/Defense acquisition from an
evidence-based perspective:
a. Analyzed and constructed taxonomy of DBS acquisition evidence
b. Elicited expert knowledge for the evidential relationships/influences on
program success, including intermediate complex interrelationships
c. Implemented the Knowledge Engineering into the DAPS model
3. Conducted a total of 14 case analyses of hypothetical and real-world cases,
demonstrating the potential to aid evidence-based decision making, as well as the
potential for further research regarding program evidence, program success, and
additional applications and expansions of the DAPS model.
The DAPS model distills and summarizes the complex factors pertaining to
DBS/defense acquisition success into a model that predicts program success using
observable indicators. It is the author’s sincere hope that this research will help to
facilitate further study of the complex science behind systems engineering and
management, and will deepen understanding of the people, process, and product
interactions that enable humans to develop the amazing engineered systems that
make human existence so unique on this planet.
9
FUTURE RESEARCH
There are many potential future research directions that could be expanded
from this dissertation. The most pertinent is perhaps the development of a
probabilistic ontology of systems engineering and project/program management to
support the further transformation of these qualitative and heuristic disciplines into
scientific studies of cause and effect. It is evident from the many studies observed during the
literature review that much research has already been conducted in this direction.
Section 2.7.5 provided select samples of the related research. There is
great potential in collectively examining these existing works to improve the
understanding of the subject matter. Coupling such a probabilistic ontology with the
ever-evolving analytical and reasoning computing power offers great potential to vastly
improve the way humans develop and build systems.
For defense acquisition, this research put the concept of Knowledge-Based
management into practice. Instead of relying on regimented processes and regulations to guide
programs, future efforts could focus on developing a framework to improve knowledge and
certainty about programs and on developing methods to segment and minimize risks. As a
result, more programs could be implemented successfully, leading to better investment
outcomes.
10 APPENDIX A—DAPS EVIDENCE TAXONOMY

The taxonomy lists each evidence item (EID) with its Knowledge Area (KA) and description, and maps each item to the Knowledge Checkpoints at which it is observed (MDD, ITR, ASR, MS A, SRR, SFR, Pre-ED, MS B, PDR, CDR, TRR, MS C, PRR, IOC, FOC).

1. (Cost) Initial ROM Estimate - Cost: Initial Rough Order Magnitude estimate used for the initial development decision and initial budgeting.
2. (Time) Initial ROM Estimate - Time: Initial Rough Order Magnitude estimate of the time needed for program execution.
3. (Scope) Problem Statement: Required BCL acquisition artifact used to make the development decision; precursor to the Business Case.
4. (Scope) Business Process Re-Engineering: Scope/architecture artifacts capturing the business process changes for the IT system.
5. (GM) MS Acquisition Decision Memo: Approval memos signed out by the MDA.
6. (SE) AoA Study Guide: Guide to develop the Analysis of Alternatives Study Plan.
7. (SE) AoA Study Plan: Analysis of Alternatives (AoA) Study Plan to develop the AoA.
8. (Time) Schedule (POAM, Program Structure, IMP, IMS): Schedule structure of the program.
9. (Time) Schedule Progress (POAM, IMP, IMS): Schedule historical progress and projection based on the historical progress.
10. (GM) Staffing: Program staffing level.
11. (GM) Budgeting/Funding: Program budgeting and funding level.
12. (SE) Tech Review Report: Report produced by the technical review chair after the review.
13. (Cost) CARD/BOM/Cost Breakout: Breakout of the items for cost analysis and estimate.
14. (Quality) Market Research: Market research to support the analysis of alternatives.
15. (Cost) Cost Estimate/Projection/CPI: Cost estimate and projection, including the EVMS CPI.
16. (Cost) Cost Expenditure: Project historical cost expenditure and projection based on the average expenditure rate.
17. (Scope) Risk Report - Scope: Risk report on program scope.
18. (Quality) Risk Report - Quality: Risk report on system product quality.
19. (Time) Risk Report - Time: Risk report on program time/schedule.
20. (Cost) Risk Report - Cost: Risk report on program cost.
21. (Scope) AoA: Analysis of Alternatives (AoA) for the system solution.
22. (Procure) Acquisition Strategy/Plan: Acquisition Strategy and Plan for the acquisition program.
23. (GM) Program Charter: Document which defines the program goals, program policies, organization, roles, and responsibilities of stakeholders.
24. (GM) MS Investment Decision Memo: Approval memos signed out by the Investment Review Board.
25. (Cost) IGCE: Independent Government Cost Estimate, not from the program office.
26. (SE) SEP: Documents the program's systems engineering process, policy, organization, roles, responsibilities, and tool set.
27. (Quality) RFI: Request For Information, a contracting artifact requesting information from potential contract offerors to collect information on the potential offers; helps to support the Request For Proposal.
28. (Scope) Performance/Requirements Specification: Performance/requirements specification for the system being developed.
29. (SE) Risk Management Plan: Documents the risk management process, policy, organization, roles, responsibilities, and tool set.
30. (SE) TES/P, TEMP: Test Evaluation Strategy/Plan, or the Test and Evaluation Master Plan, used to document the testing process, policy, organization, roles, responsibilities, and tool set.
31. (Quality) Test Report.
32. (Procure) PR or Draft RFP: Procurement Request or Draft Request For Proposal, a subset of a Request For Proposal submitted to the contracting office to develop the final Request For Proposal.
33. (Procure) Industry Questions and Feedback: Industry questions or feedback from the government's industry day, RFI, or draft RFP information released to industry, providing information to the government program office and contracting office for the refinement of the RFP.
34. (Procure) RFP: Request For Proposal, the official documentation released to industry to request and guide proposals.
35. (Procure) Source Selection Plan: Plan to evaluate the solicitation proposals and select the winning offeror.
36. (Scope) Business Case: Official acquisition document required at Milestone Reviews under the Business Capability Lifecycle model.
37. (SE) LCSP: Life Cycle Sustainment Plan; documents the system operation and sustainment process, policy, organization, roles, responsibilities, and tool set.
38. (SE) ISP: Information Support Plan; documents the system architecture and information requirements, which precede Information Assurance and data center and network requirements.
39. (SE) PPP: Program Protection Plan; documents the plan to protect the critical information being produced and developed in the acquisition program.
40. (Scope) Acquisition Program Baseline (APB): Documents the agreement for quality/performance, schedule, and cost baselines along with any other program scope and acquisition thresholds and objectives.
41. (Quality) ILA Report and Findings: Independent Logistics Assessment, an independent assessment of the acquisition program's development of the infrastructure and team to operate and sustain the system being developed/acquired; a required artifact for ACAT-level programs.
42. (Procure) Contract - Terms and Conditions: Contract awarded for performance and its specific terms and conditions.
43. (Procure) Software Licensing Agreement: Software licensing agreements (if any) for the software and commercial software used to develop/integrate the system.
44. (Procure) CDRL Review: Contract Data Requirements List deliverables, the data/material deliverables required by the contract and delivered to the government for inspection and acceptance.
45. (Procure) Technical Proposal Evaluation: Review of the technical proposal submitted by the contractor, providing evidence on how well the contractor will potentially perform for this acquisition program.
46. (Scope) IRS: Interface Requirements Specification documenting the interface data requirements for the system.
47. (SE) Contractor Development/CM Plan: Contractor's Configuration Management Plan and/or software development plan to guide the software development program.
48. (SE) Contractor Test Plan: Contractor's test plan in support of the government's test plan.
49. (Scope) SDD/Software Design: System Design Document; documents the software design.
50. (Scope) IDD/Architecture: Interface Design Document; documents the design for the system interfaces needed to meet system requirements.
51. (Quality) Test Cases: Documents the scenarios used to test and verify system performance.
52. (Quality) Test Report (Contractor): Contractor testing results based on the test cases.
53. (Procure) CPAR: Contractor Performance Assessment Report; documents the government's review of contractor performance as well as potential contractor response.
54. (Quality) IA - DIACAP Progress/Approval: Information Assurance documentation; approval is required for system production.
55. (Quality) Test Report (Government): Government's test results based on the test cases.
56. (Quality) OA - UAT: Operational Assessment or User Acceptance Testing, performed by the user community to validate that the system has accomplished the desired functions and capabilities.
57. (Quality) Data Center Performance: Historical or tested performance of the hosting data center.
58. (Quality) System Trouble Reports: Trouble reports of the system submitted by the user community.
59. (Quality) Help Desk Support: Help desk support quality feedback.
60. (Procure) SSA Decision Report.
61. (Cost) Cost/Price Terms for Contract.
62. (Cost) Cost/Price Proposal Analysis Report - Realism.
11 APPENDIX B—DAPS NETWORK STRUCTURE AND PROBABILITY SPECIFICATION DATA COLLECTION
11.1 Subject Matter Expert Interview Data Collection Sheet
Knowledge Area to Knowledge Checkpoint Arcs Relationships Outline
*The Author has provided the direct causal-influence relationships from his perspective, his explanations, and associated
ranks of the relationships. Please provide your perspective. Please only indicate your rankings for all ARCS below. If
desired, please also provide your explanations. If the relationships are equal, please use the same ranking.
*When this ranking is translated to the model, the author will use the ranking to calculate the probability specifications.
Ranking of 1 will provide the Arc a weight of 1. Ranking of 2 will provide the Arc a weight of 1/2. Ranking of 3 will provide
the Arc a weight of 1/4. Ranking of 4 will provide the Arc a weight of 1/8.
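As a minimal sketch of the stated conversion rule (rank 1 -> weight 1, rank 2 -> 1/2, rank 3 -> 1/4, rank 4 -> 1/8); the helper name is illustrative only and not part of the DAPS implementation.

def rank_to_weight(rank):
    # Rank r maps to a weight of 1 / 2**(r - 1), per the rule stated above.
    return 1.0 / (2 ** (rank - 1))

print([rank_to_weight(r) for r in (1, 2, 3, 4)])   # [1.0, 0.5, 0.25, 0.125]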
*Knowledge Checkpoint Node State Definitions
1. Success—Knowledge necessary for Program Success thoroughly achieved at this KC and is ready to proceed to the next phase.
2. Failure—Otherwise
For Full Operating Capability Knowledge Checkpoint node, the program progress would have been fully achieved. Thus,
the Node simply indicates the Program “Success” and “Failure” achieved based on the observations of evidence
throughout the program life cycle.
*Knowledge Area Node State Definitions
1. Good
   a) Thorough understanding of the Knowledge Area at this Knowledge Checkpoint
   b) Minor Risk and Uncertainties to Program Success identified in the Knowledge Area at this Knowledge Checkpoint
2. Marginal
   a) Unclear understanding of the Knowledge Area at this Knowledge Checkpoint
   b) Significant Risk and Uncertainties to Program Success identified in the Knowledge Area at this Knowledge Checkpoint
First     Second   Author Explanation                                        Sample Ranking
Quality   KC       Quality performance is a direct measurable to success.    1
Scope     KC       Scope directly affects program success.                   1
Time      KC       Time is a direct measurable to success.                   1
Cost      KC       Cost is a direct measurable to success.                   1
RANKING: to be provided by the SME.
Knowledge Area to Knowledge Area Arc Relationship Outline
KA-to-KA Arcs represent the relationship between the first KA (cause) and the second KA (effect), where the second event is understood as the
consequence of the first.
The following Arc relationships represent DIRECT causal-influence relationships of Knowledge Areas and the associated rank with respect to the
SECOND Knowledge Area Node of the Arc relationship.
*The Author has provided sample direct causal-influence relationships from his perspective, his explanations, and his associated ranks of the
relationships. Please provide your perspective. Please only indicate your TOP DIRECT relationships, and NO MORE THAN 3 for each ending
(second) node. If desired, please also provide your explanations. If the relationships are equal, the same rank should be used; however, the
total is still no more than 3.
*When this ranking is translated to the model, the author will use the ranking to calculate the probability specifications. Ranking of 1 will
provide the Arc a weight of 1. Ranking of 2 will provide the Arc a weight of 1/2. Ranking of 3 will provide the Arc a weight of 1/4.
*For root nodes (no arc) the author intends to use a standard (Good=70/30, Marginal = 30/70) probability for the root node probability
specification. For single arcs, the author intends to use a standard (Good=85/15, Marginal=15/85) probability specification. Double Arcs,
(Good=90/10, Marginal=10/90), Triple Arcs, (Good=95/5, Marginal=5/95).
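The lookup below is only a sketch of the baseline specification quoted in the note above, keyed by the number of incoming arcs; how the elicited weights fill in mixed-parent CPT rows is not shown and is not implied by this structure.

# Baseline specifications quoted above, keyed by the number of incoming arcs.
# For one or more arcs, the keys refer to the parent-state configuration (all parents
# Good vs. all parents Marginal); mixed-parent rows are derived from the elicited
# arc weights and are intentionally omitted here.
BASELINE_SPEC = {
    0: {"prior": (0.70, 0.30)},                           # root node: P(Good), P(Marginal)
    1: {"Good": (0.85, 0.15), "Marginal": (0.15, 0.85)},  # single arc
    2: {"Good": (0.90, 0.10), "Marginal": (0.10, 0.90)},  # double arc
    3: {"Good": (0.95, 0.05), "Marginal": (0.05, 0.95)},  # triple arc
}

print(BASELINE_SPEC[1]["Marginal"])   # (0.15, 0.85)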
*Provide your answer in the yellow highlighted areas. Please be mindful not to create a circular logic. For example, if cost is a consequence of
time, then time is not a consequence of cost.
*Knowledge Area Node State Definitions
1. Good
   a) Thorough understanding and demonstrated performance of the Knowledge Area at this Knowledge Checkpoint
   b) Minor Risk and Uncertainties to Program Success identified in the Knowledge Area at this Knowledge Checkpoint
2. Marginal
   a) Unclear understanding and unsatisfactory performance of the Knowledge Area at this Knowledge Checkpoint
   b) Significant Risk and Uncertainties to Program Success identified in the Knowledge Area at this Knowledge Checkpoint
*Please reference the brief included to see the Knowledge Area Structures collected so far. Feel free to develop one which is different from
these based on your expert perspective. Sample ranking is provided for Structure 1. A blank graph is also provided for you below. Feel free to
use it to draw arrows yourself for visual graph modeling.
[Netica screen capture of a candidate Knowledge Area structure with node marginals: Cost_Knowledge Good 56.7 / Marginal 43.3; Time_Knowledge 57.8 / 42.2; Scope_Knowledge 53.2 / 46.8; Procurement_Knowledge 57.9 / 42.1; General_Management_Knowledge 70.0 / 30.0; Quality_Knowledge 66.7 / 33.3; Systems_Engineering_Knowledge 72.0 / 28.0.]
Arcs to GM

- (root) -> General Management: General Management is a root node (no parents) because it is the top-level management and process which has causal influence on all other Knowledge Areas. [Sample Ranking: 1]
- Systems Engineering -> General Management: No causal-influence relationship. Rather, Systems Engineering is a consequence of General Management.
- Procurement -> General Management: No causal-influence relationship. Rather, Procurement Management is a consequence of General Management.
- Scope -> General Management: No direct causal-influence relationship.
- Quality -> General Management: No direct causal-influence relationship.
- Time -> General Management: No direct causal-influence relationship.
- Cost -> General Management: No direct causal-influence relationship.
RANKING (AT MOST 3): to be provided by the SME.
Arcs to Systems Engineering Knowledge

- (root) -> Systems Engineering: Not a root node; driven by GM.
- General Management -> Systems Engineering: General Management drives Systems Engineering through having good personnel working on the systems engineering issues, keeping the technical personnel motivated, and providing clear communication and decisions regarding the technical management and technical process of the program. [Sample Ranking: 1]
- Procurement -> Systems Engineering: No direct causal-influence relationship. Rather, Systems Engineering will drive Procurement.
- Scope -> Systems Engineering: No direct causal-influence relationship. Rather, Systems Engineering will drive Scope.
- Quality -> Systems Engineering: No direct causal-influence relationship. Rather, Systems Engineering will drive Quality.
- Time -> Systems Engineering: No direct causal-influence relationship.
- Cost -> Systems Engineering: No direct causal-influence relationship.
RANKING (AT MOST 3): to be provided by the SME.
Arcs to Procurement Management Knowledge

- (root) -> Procurement: Not a root node; driven by GM.
- General Management -> Procurement: General Management (communication, HR, and environmental) drives good or marginal procurement management through hiring/assigning good personnel to work on the procurement; clear communication with stakeholders; clear and constructive decisions regarding procurement; good communication, general management, and direction to the contractor; and consistent budgeting and funding. [Sample Ranking: 1]
- Systems Engineering -> Procurement: The Systems Engineering Process and Plan set by the Government drive the procurement contract terms of delivery, such as EVMS requirements, risk management requirements, configuration management, architecture requirements, technical data management, Integrated Data Environment (IDE) requirements, and required data deliverables. Risk Management, Technical Management, and the Technical Process are the control factors for the technical performance of the contract in terms of quality, cost, and schedule. [Sample Ranking: 2]
- Scope -> Procurement: Scope drives the whole procurement process: how well the scope is defined, including the required process, required deliverables, required performance specifications, and the required laws, regulations, and policies. The scope of the program is the Statement of Work and specification of the contract that will be solicited, awarded, and administered. The scope of the program affects the strategy required to deliver the best-value system product the government is seeking. [Sample Ranking: 2]
- Quality -> Procurement: No causal-influence relationship. Rather, Procurement drives Quality.
- Time -> Procurement: No causal-influence relationship. Rather, Procurement drives Time.
- Cost -> Procurement: No causal-influence relationship. Rather, Procurement drives Cost.
RANKING (AT MOST 3): to be provided by the SME.
Arcs to Scope Management Knowledge

- (root) -> Scope: Scope is not a root node; it is driven by another KA.
- Systems Engineering -> Scope: The Systems Engineering Technical Management Process and Technical Process are centered on the development of the requirement scope throughout the program lifecycle. Scope is a consequence of the systems engineering processes. [Sample Ranking: 1]
- General Management -> Scope: Scope is not directly influenced by General Management, but rather through systems engineering.
- Procurement -> Scope: No causal-influence relationship. Rather, scope drives procurement.
- Quality -> Scope: No causal-influence relationship. Rather, scope specifies the quality requirements.
- Time -> Scope: No causal-influence relationship. Rather, scope could specify time requirements.
- Cost -> Scope: No causal-influence relationship. Rather, scope could specify cost requirements.
RANKING (AT MOST 3): to be provided by the SME.
Arcs to Quality Management Knowledge

- (root) -> Quality: Quality is not a root node; it is driven by other KAs.
- Systems Engineering -> Quality: A mature Systems Engineering Process drives the product quality delivered by the contractor through documentation, the review and approval process, requirements development/verification/validation, and defect management and tracking. [Sample Ranking: 1]
- Procurement -> Quality: The management of the procurement (contractor) drives the management of the quality of the work the contractor delivers, since it depends on the contractual terms, the selection of the contractor, and the administration of the contract. [Sample Ranking: 1]
- General Management -> Quality: Quality is not directly influenced by GM, but rather through Procurement and Systems Engineering.
- Time -> Quality: No direct causal-influence relationship. Rather, quality drives time.
- Cost -> Quality: No direct causal-influence relationship. Rather, quality drives cost.
- Scope -> Quality: No direct causal-influence relationship. Quality is a consequence of scope through procurement.
RANKING (AT MOST 3): to be provided by the SME.
Arcs to Time Management Knowledge

- (root) -> Time: Time is not a root node; it is driven by other KAs.
- Quality -> Time: The quality of the product affects the time it takes to develop the product. A system product of higher quality has a higher probability of meeting the quality requirements and systems engineering reviews at the time scheduled or better. [Sample Ranking: 1]
- Procurement -> Time: The management of the procurement (contractor and contract) drives the management of the time contractors take to perform the work, since it depends on the contractual terms, the selection of the contractor, and the administration of the contract. This relationship is weaker because, while quality issues are the main cause of the time it takes to complete a program (and quality is already a consequence of procurement), the procurement-time relationship determines the program's contractual decisions to lower quality to meet the time constraint or to prolong the program to meet the quality constraint. Thus, it is secondary. [Sample Ranking: 2]
- General Management -> Time: Time is not directly influenced by GM, but rather through Procurement and SE.
- Systems Engineering -> Time: Time is not directly a consequence of SE, but rather through Quality performance.
- Scope -> Time: Time is not directly a consequence of scope, but rather through procurement, based on Structure 1.
- Cost -> Time: Time is not directly a consequence of cost. Rather, cost is a consequence of time.
RANKING (AT MOST 3): to be provided by the SME.
Arcs to Cost Management Knowledge

- (root) -> Cost: Cost is not a root node; it is driven by other KAs.
- Procurement -> Cost: The management of the procurement (contractor) drives the management of the cost contractors accrue to perform the work, since it depends on the contractual terms, the selection of the contractor, and the administration of the contract. The contract could potentially have cost requirements (Firm Fixed Price contracts); this sets the price for the fixed cost drivers as well as the variable cost drivers. [Sample Ranking: 1]
- Quality -> Cost: Quality affects the cost of the product. High system component quality and high deliverable quality will lower the cost uncertainty. [Sample Ranking: 3]
- Time -> Cost: Labor hours are a significant cost driver in a software program, and the most significant of the variable costs. The longer a program takes or is delayed, the more it will cost, also indicating rework or uncertainties unaccounted for previously. [Sample Ranking: 2]
- General Management -> Cost: Cost is not directly influenced by General Management, but rather through Procurement and Systems Engineering.
- Systems Engineering -> Cost: Cost is not directly a consequence of Systems Engineering, but rather through Procurement and Quality.
- Scope -> Cost: Cost is not directly a consequence of scope, but rather through procurement.
RANKING (AT MOST 3): to be provided by the SME.
Knowledge Area to Evidence Node Arc Relationships Outline
Probability Assignments for all Evidence Nodes based on the Knowledge Area to Evidence Relationships.
Knowledge Area States:
1. Good
   a) Thorough understanding and demonstrated performance of the Knowledge Area at this Knowledge Checkpoint
   b) Minor Risk and Uncertainties to Program Success identified in the Knowledge Area at this Knowledge Checkpoint
2. Marginal
   a) Unclear understanding and unsatisfactory performance of the Knowledge Area at this Knowledge Checkpoint
   b) Significant Risk and Uncertainties to Program Success identified in the Knowledge Area at this Knowledge Checkpoint

As a consequence of the Knowledge Area (Good or Marginal), the evidence could obtain the adjectival ratings of:
1. Outstanding
   a) Evidence indicates low risk in the KA to Program Success
   b) Evidence indicates exceptional understanding and approach of the KA necessary for program success
   c) Evidence contains KA strengths far outweighing weaknesses
2. Acceptable
   a) Evidence indicates no worse than moderate risk in the KA to Program Success
   b) Evidence indicates adequate approach and understanding necessary for program success
   c) Evidence contains KA strengths and weaknesses which are offsetting
3. Unacceptable
   a) Evidence indicates high risk in the KA to program success
   b) Evidence indicates no clear understanding and approach in the KA for program success
   c) Evidence contains one or more deficiencies indicating program failure
   d) Evidence is missing
For Good Knowledge in Knowledge Area (author probability specification):
   Outstanding: 30
   Acceptable: 60
   Unacceptable: 10

For Marginal Knowledge in Knowledge Area (author probability specification):
   Outstanding: 5
   Acceptable: 45
   Unacceptable: 50
*It is useful to look at this in terms of ratios, both comparing Outstanding-to-Outstanding ratios across Knowledge Area states and comparing
Outstanding to Acceptable to Unacceptable within a Knowledge Area state.
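A minimal sketch of the evidence-node conditional probability table implied by the author specification above, together with the likelihood ratios it yields for each adjectival rating; the helper name is illustrative only, not part of the DAPS implementation.

EVIDENCE_CPT = {   # P(rating | Knowledge Area state), per the author specification above
    "Good":     {"Outstanding": 0.30, "Acceptable": 0.60, "Unacceptable": 0.10},
    "Marginal": {"Outstanding": 0.05, "Acceptable": 0.45, "Unacceptable": 0.50},
}

def likelihood_ratio(rating):
    # How strongly a single observed rating favors a Good over a Marginal Knowledge Area.
    return EVIDENCE_CPT["Good"][rating] / EVIDENCE_CPT["Marginal"][rating]

for r in ("Outstanding", "Acceptable", "Unacceptable"):
    print(f"{r}: {likelihood_ratio(r):.2f}")    # 6.00, 1.33, 0.20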
Knowledge Area to Knowledge Checkpoint Arcs Relationships Outline
*Utilizing the Knowledge Area specifications as a guide, indicate the relationship of the Knowledge in the Knowledge
Area at a previous Checkpoint to the present Knowledge Checkpoint. The ranking is in reference to the rankings
specified in the Knowledge Area section. A general ranking is used for all KAs in the Knowledge Checkpoint. Flow of the
Knowledge Checkpoints is provided below.
*When this ranking is translated to the model, the author will use the ranking to calculate the probability specifications.
Ranking of 1 will provide the Arc a weight of 1. Ranking of 2 will provide the Arc a weight of 1/2. Ranking of 3 will provide
the Arc a weight of 1/4. Ranking of 4 will provide the Arc a weight of 1/8.
*This relationship will be combined with the rankings specified previously in the Knowledge Area section for Knowledge
Area node probability specification calculations.
*Author's ranking 1-4.
First: Prior KA. Second: Present KA.

Knowledge Checkpoint:  ITR   ASR   MS A  SRR   SFR   Pre-ED  MS B  PDR   CDR   TRR   MS C  PRR   IOC   FOC
Author Ranking:        3     3     3     3     3     2       2     2     2     2     2     2     1     1
Wt. Score:             0.25  0.25  0.25  0.25  0.25  0.5     0.5   0.5   0.5   0.5   0.5   0.5   1     1
Please Rank 1-4:       (to be provided by the SME for each Knowledge Checkpoint)
[Netica screen capture: the chain of Knowledge Checkpoint nodes (MDD, ITR, ASR, MS A, SRR, SFR, Pre-ED, MS B, PDR, CDR, TRR, MS C, PRR, IOC, FOC) with Success/Failure probability bars for a sample configuration, e.g., FOC at Success 85.0 / Failure 15.0.]
11.2 Data and Statistics

Knowledge Area Structure Data
[Raw SME inputs: 0/1 arc selections and rank observations for each candidate arc, with per-arc totals and mean, mode, and median summaries, for the arc sets Arcs to GM; Arcs to Systems Engineering Knowledge; Arcs to Procurement Management Knowledge; Arcs to Scope Management Knowledge; Arcs to Time Management Knowledge; Arcs to Quality Management Knowledge; and Arcs to Cost Management Knowledge (17 responses per arc set).]

Knowledge Checkpoint Structure Data
[Raw SME weight inputs for the Quality, Scope, Time, and Cost Knowledge Area arcs into the Knowledge Checkpoints, with mean, median, mode, standard deviation, minimum, and maximum summaries.]
Evidence Node Data

For Good Knowledge in Knowledge Area (probability specification, percent):
Node State     SME Inputs (17 responses)                            Average   St. Dev.
Outstanding    30 30 20 20 25 30 30 35 25 10 20 20 25 35 30 30 15   25.29     6.95
Acceptable     60 65 60 70 50 60 60 55 70 60 70 60 65 55 60 60 75   62.06     6.39
Unacceptable   10 5 20 10 25 10 10 10 5 30 10 20 10 10 10 10 10     12.65     6.87

For Marginal Knowledge in Knowledge Area (probability specification, percent):
Node State     SME Inputs (17 responses)                            Average   St. Dev.
Outstanding    5 10 5 1 10 10 10 5 5 5 0 5 5 0 5 5 5                 5.35     3.20
Acceptable     45 40 35 50 40 40 50 45 45 50 35 35 45 40 50 50 40   43.24     5.57
Unacceptable   50 50 60 49 50 50 40 50 50 45 65 60 50 60 45 45 55   51.41     6.58
REFERENCES
Alberts, David S., Garstka, John J., Stein, Frederick P., Network Centric Warfare, CCRP
1999
Aramo-Immomen, Heli, Project Management Ontology—A Learning Organization
Perspective, Tampere University of Technology, 2009
Ashby, W.R., An Introduction to Cybernetics, Chapman&Hall LTD, London, 1957,
Retrieved from http://pespmc1.vub.ac.be/books/introcyb.pdf
Ashby, W.R., Principles of the self-organizing system. In: Principles of Self-Organization, 255–278, 1962, Retrieved from
http://csis.pace.edu/~marchese/CS396x/Computing/Ashby
Atkinson & Moffat, The Agile Organization—From Informal Networks to Complex
Effects and Agility, CCRP 2005
Bloch, M., Blumberg, S., and Laartz, J., Delivering large-scale IT projects on time, on
budget, on value, McKinsey & Company, October 2012
Bunting, William, Enterprise Line of Sight Analysis Using Bayesian Reasoning, George
Mason University, UMI Dissertation Publishing, 2012
Complex Adaptive Systems Group (CASG), December 2014, Retrieved from:
http://www.cas-group.net/
Cohen, Michael. D. and Robert Axelrod, Coping With Complexity: The Adaptive Value of
Changing Utility, American Economic Review 74 (1984): 30-42.
Cohen, Michael. D. and Robert Axelrod, Harnessing Complexity: Organizational
Implications of a Scientific Frontier. The Free Press, 1990
Cooke-Davies, T.J., The “real” success factors on projects, Int. Journal of Project
Management, Vol. 20, 2002, pp. 185-190.
Cooper, Lynne Pucilowski, How Project Teams Conceive Of And Manage Pre-Quantitative Risk, UMI, University of Southern California, 2008
Defense Acquisition University, Defense Acquisition Guidebook, Defense Acquisition
University, August 2013, https://dag.dau.mil/Pages/default.aspx
Defense Acquisition University, Systems Engineering Process, Last Updated November
15, 2012, Retrieved from:
https://dap.dau.mil/acquipedia/Pages/ArticleDetails.aspx?aid=9c591ad6-8f6949dd-a61d-4096e7b3086c
Defense Acquisition University Press, U.S. Department of Defense Extension to: A Guide
to the Project Management Body of Knowledge (PMBOK® Guide) First Edition
Version 1.0, June 2003, PMI Standard, Fort Belvoir, VA.
Defense Acquisition University, Integrated Lifecycle Chart, https://ilc.dau.mil/, August
2013
Defense Acquisition University, Risk Management Guide for DoD Acquisition, August
2006
Defense Acquisition University, Systems Engineering Fundamentals, Fort Belvoir, VA,
2001
Demir, K.A., and Osmundson, John S., A Theory of Software Project Management and
PROMOL: A Project Management Modeling Language, Technical Report, NPSIS-08-006, Naval Postgraduate School, Monterey, CA, USA, March 2008.
Demir, Kadir Alpaslan, Approaches for Measuring the Management Effectiveness of
Software Projects, April 2008, Retrieved from:
http://faculty.nps.edu/kdemir/papers/Software_Project_Management_Success.pdf
Department of Defense, Department of Defense Source Selection Procedures, March
2011
Department of Defense, Interim DoD Instruction 5000.02, “Operation of the Defense
Acquisition System”, November 25, 2013
Department of Defense, MIL-STD- 499A, Engineering Management (Now cancelled), 1
May 1974.
Department of Navy, Naval PoPS Guidebook - A Program Health Assessment
Methodology For Navy and Marine Corps Acquisition Programs Version 2.2,
2012
Doskey, Steven, A Measure of Systems Engineering Effectiveness in Government
Acquisition of Complex Information Systems: A Bayesian Belief Network-Based
Approach, The George Washington University, UMI Dissertation Publishing,
2014
Eisner, Howard, Managing Complex Systems: Thinking Outside the Box, Wiley-Interscience, 2005
Electronic Industries Alliance, EIA Standard IS-632, Systems Engineering, December
1994
Evans, M.E. et al, Seven Characteristics of Dysfunctional Software Projects, Crosstalk,
Vol. 15, No. 4, April 2002, pp. 16-20.
Government Accountability Office, GAO-11-233SP Defense Acquisitions—Assessments
of Select Weapon Programs, March 2011
Government Accountability Office, GAO-12-565R DOD Financial Management, March
2012
Government Accountability Office, Implementing a Knowledge-Based Acquisition
Framework Could Lead to Better Investment Decisions and Project Outcomes,
December 2005
Higbee, John and LTC Ordonio, Roberto, Program Success—A Different Way to Assess
it, AT&L Magazine, May 2005
Hoff, Peter D., A First Course in Bayesian Statistical Methods, Springer, New York,
2009
Honour, Eric C., and Valerdi, Ricardo, Advancing an Ontology for Systems Engineering
to Allow Consistent Measurement, Conference on Systems Engineering Research,
Los Angeles, CA, 2006.
Institute of Electrical and Electronics Engineers, IEEE P1220 Standard for Application
and Management of the Systems Engineering Process [Final Draft], 26
September 1994.
Jones, C., Software Project Management Practices: Failure versus Success, Crosstalk,
Vol. 17, No. 10, October, 2004.
Khayani, Payam Bakhshi, A Bayesian model for controlling cost overrun in a portfolio of
construction projects, Civil engineering Dissertations, Northeastern University,
Boston, MA, 2011
Khodakarami, Vahid, Bayesian Networks: A Novel Approach for Modelling Uncertainty
in Projects, PMI Risk SIG, 2009, PMI Rome Italy Chapter
Khoski, Timo J.T, and Noble, John M., A Review of Bayesian Networks and Structure
Learning, Mathematica Applicanda, 2012, Vol. 40(1) 2012, p. 53–103, Retrieved
from:
http://www.mimuw.edu.pl/~noble/courses/BayesianNetworks/Bayesnetsreview.pdf
Kim, Byung Cheol, Forecasting Project Progress and Early Warning of Project
Overruns with Probabilistic Methods, Texas A&M University, 2007, Retrieved
from:
http://repository.tamu.edu/bitstream/handle/1969.1/85811/Kim.pdf?sequence=1
Laskey, Kathryn, Graphical Probability Models for Inference and Decision Making/
Computational Models for Probabilistic Reasoning, September 2012, Retrieved
from: http://mason.gmu.edu/~klaskey/GraphicalModels/
Lewis, Tiffany, Quantitative Approach To Technical Performance Measurement And
Technical Risk Analysis Utilizing Bayesian Methods And Monte Carlo Simulation,
The George Washington University, UMI Dissertation Publishing, 2010
Liu, J., Yang, J-B., Wang, J. and Sii, H.S., Review of Uncertainty Reasoning Approaches
as Guidance for Maritime and Offshore Safety-based Assessment, Journal of UK
Safety and Reliability Society 23(1), 63-80., 2002
Merriam-Webster, http://www.merriam-webster.com/dictionary, 2014
Neapolitan, Richard E., Learning Bayesian Networks, Northeastern Illinois University,
Chicago, 2004, Retrieved from: http://lowcaliber.org/influence/neapolitanlearning-bayesian-networks.pdf
Norman, Douglass O. and Kuras, Michael L., Engineering Complex Systems, MITRE
Corporation, January 2004, Retrieved from:
http://www.mitre.org/sites/default/files/pdf/norman_engineering.pdf
Norsys Software Corp., Netica 4.16 for MS Windows, Copyright 1992-2010 by Norsys
Software Corp.
Office of the Deputy Assistant Secretary of Defense (ODASD) Systems Engineering,
Systems Engineering Plan Outline Version 1.0, April 2011
Pearl, J., Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference,
Morgan Kaufmann, 1988.
Pinto J.K., and Rouhiainen, P., Developing a customer-based project success
measurement, Proceedings of the 14th World Congress on Project Management,
Ljubljana, Slovenia, 1998, pp. 829-835.
Project Management Institute, Inc., A Guide to the Project Management Body of
Knowledge (PMBOK Guide), 4th ed., Newtown Square, PA: Project Management
Institute, Inc., 2008.
Praeger, Dave, Our Love Of Sewers: A Lesson in Path Dependence, Retrieved from:
http://progressivehistorians.wordpress.com/2007/10/06/our-love-of-sewers-alesson-in-path-dependence/, 15 June 2007.
Procacino, J. Drew, Quantitative Models For Early Prediction of Software Development
Success: A Practitioner’s Perspective , Drexel University, 2002
Random House Reference, Random House Webster’s Unabridged Dictionary, Random
House Reference, July 2005
Smith, Brian C, Prologue to Reflections and Semantics in a Procedural Language. In
Ronald Brachman and Hector J. Levesque. Readings in Knowledge
Representation. Morgan Kaufmann. pp. 31–40, 1985
Sage, Andrew and Rouse, William, Handbook of Systems Engineering and Management,
Wiley-Interscience, April 2009
Schum, David A., The Evidential Foundations of Probabilistic Reasoning, Northwestern
University Press, Illinois, 2001
Sheard, Sarah A., Principles of Complex Systems for Systems Engineering, Great Falls,
VA, Wiley Periodicals Inc., 2008
Stanford Encyclopedia of Philosophy., Emergent Properties, Retrieved from:
http://plato.stanford.edu/entries/properties-emergent/, December 2014
Tah, J.H., Carr, v., Knowledge-Based approach to construction project risk management,
Journal of Computing in Civil Engineering, 2001.
Taroun, Abdulmaten and Yang, Jian-Bo, Dempster-Shafer Theory of Evidence: Potential
usage for decision making and risk analysis in construction project management,
The Built & Human Environment Review, Volume 4, Special Issue 1, 2011
Valerdi, Ricardo, Constructive Systems Engineering Cost Model (COSYSMO), University
of Southern California, 2005
Yoo, Wi Sung, An Information-based Decision Making Framework for evaluating and
forecasting a project cost and completion Date, Electronic Thesis or Dissertation.
Ohio State University, 2007, https://etd.ohiolink.edu/
BIOGRAPHY
Sean Tzeng is a graduate of Alexis I. DuPont High School in Greenville, Delaware,
class of 2001. He received his Bachelor of Science in Aerospace Engineering with a
minor in Mathematics from Virginia Tech in 2005 and a Master of Science in Systems
Engineering, with concentration in Operations Research and Management Science, from
the George Washington University in 2007. Mr. Tzeng has been in public service as a
federal civilian in the Department of the Navy since 2007.