A typology of systems paradoxes

Information Knowledge Systems Management 9 (2010) 1–15
DOI 10.3233/IKS-2010-0130
IOS Press

W. Clifton Baldwin a,∗, Brian Sauser b, John Boardman b and Lawrence John c

a US Federal Aviation Administration, WJH Technical Center, NJ, USA
b School of Systems & Enterprises, Stevens Institute of Technology, Hoboken, NJ, USA
c Analytic Services, Inc., Arlington, VA, USA

∗ Corresponding author. Tel.: +1 609 485 4832; E-mail: [email protected].
Abstract: Philosophers have studied paradoxes for millennia. The term paradox now appears increasingly outside of philosophy,
and researchers seek to understand paradoxes in everyday situations. By understanding these phenomena, systems engineers may
develop better strategies for dealing with them when they arise in business or projects. But what is meant by the term
systems paradox, and are there different types? To answer these questions, this paper presents definitions of system from systems
science and of paradox from philosophy in a quest to define systems paradox. A paradox that affects some objective of a system is
designated a systems paradox. A typology of systems paradoxes is then proposed and described using set theory. Various
examples provide a demonstration of the typology.
Keywords: Systems, paradox, typology
1. Introduction
Philosophers have studied paradoxes for millennia. Recent works have taken paradoxes out of the
realm of philosophy with intentions of understanding them in organizational structures [14,23,27,35,36,
55]. While some paradoxes are only interesting puzzles, many paradoxes cause obstacles in reaching
an organizational objective. An increased understanding of paradoxes may lead to better strategies to
cope with them in business. For example, Westenholz [55] examines employees’ frames of reference
when identifying organizational solutions to problems and their often paradoxical behavior. Lewis and
Dehler [36] discuss learning through paradox as a strategy for exploring organizational complexity. While
Sauser and Boardman [48] discuss paradoxes with regard to system of systems (SoS) characteristics, many
of their examples present alternative approaches to problems within organizations, such as the paradoxes
associated with teams and with control. To distinguish the paradoxes of interest, this paper refers to them
as systems paradoxes.
Although organizations increasingly encounter paradoxes [35], this paper is intended as an endeavor
in systems thinking. The question therefore remains: what is a systems paradox? A trivial
response is "a paradox in the systems context," but we seek a deeper understanding. While the term
"system" applies to much more than organizations and business, existing paradox classifications and
typologies are found primarily in philosophy. A logical starting position is therefore to define the phrase.
In addition, the examples suggest that paradoxes come in different types.
It has been argued that systems engineers must recognize and leverage paradox as a matter of professional survival [32]. Exploring and categorizing these systems paradoxes may help prepare systems
engineers to develop strategies to mitigate their impacts or even use them advantageously in business.
For many years scientists were stymied by the paradox of atoms exhibiting both wave and particle
properties. However, using this apparent paradox led to the development of quantum mechanics and the
tantalizing possibility of quantum computing, with a host of attendant practical engineering challenges
to meet. If we can determine a logical grouping of these paradoxes, perhaps we can eventually develop
group-specific strategies for them.
This paper expands on the work of John et al. [32] by presenting a definition and typology of systems
paradoxes. Other scholars interested in this topic are encouraged to examine the derived categories in
order to explore unique qualities that may be exploited. Note that we use the term typology rather than
taxonomy because a typology is conceptual. The term taxonomy is often used for both, but here it
is reserved for empirical rather than conceptual classifications [7].
The following section of this paper explores different definitions of system and paradox to arrive at
an acceptable description of systems paradox. The next section reviews other attempts at categorizing
paradoxes and reviews some logic to be used in the proposed typology. Using elements from logic,
Section 4 proposes a typology based on set theory. The penultimate section demonstrates the proposed
typology against examples discovered from the literature as well as a few additional examples to complete
the typology. The paper concludes with some suggestions for further research.
2. Definitions
Initially a systems paradox might be defined as any paradox that affects a system, but this definition is self-referential. Splitting the phrase into its two constituent parts turns the problem into an exercise that
probes each element separately. After examining a variety of definitions extracted from the literature, a
generic definition for each term is compiled from the common themes. Admittedly there is the potential
for a judgment call among possible alternatives. The accepted definitions of the two terms will
be combined into a compiled definition, which represents a first step toward increased understanding.
2.1. System
The word system is ubiquitous in everyday usage, but exactly what does it mean in the context of this
paper? The concept existed at least as far back as the time of Plato but has evolved and changed over
the years [24]. A system as a whole is said to be more than the sum of its parts [21,29]. The Greek
philosopher Aristotle states, “The whole is something beside the parts” [3]. Although translated as the
word “whole,” these “wholes” in philosophy are “systems” in system theory [39]. Nonetheless what are
they?
To derive a scientific meaning of systems, one could start with a name well known to general system
theory, Ludwig von Bertalanffy. General systems theory is the “scientific exploration of ‘wholes’ and
‘wholeness”’ [9]. Bertalanffy describes general systems theory as having three main aspects, which
he names systems science, systems technology, and systems philosophy. It is this last piece that is of
interest here as it involves systems ontology, or what is meant by systems [9]. “A system can be defined
as a set of elements standing in interrelations among themselves and with the environment” [9]. This
description provides an abstract foundation to understand the expression under study. Known for his
influences in cybernetics and systems theory, Ross Ashby warns of the possible uncertainty when using
the word system without specifying its full intent [4], and questions the uncertain objective of general
systems theory pertaining to either physical systems or mathematical systems [5].
Many who followed Bertalanffy expanded on his fundamental ideas to provide a better understanding
of systems [4,24]. When considering organizational problems, a system can refer “to a group or complex
of parts (such as people, machines, etc.) interrelated in their actions towards some goals” [16]. It is
obvious that this definition adheres to Bertalanffy's notions of a system. Elements are here called a
complex of parts, and the interrelation goes further, towards some goals.
A pioneer of operations research as well as systems thinking, Russell Ackoff attempts to organize
and better define different types of systems. Acknowledging the various uses of the term, he starts by
defining a system as “a set of interrelated elements” [1]. These elements are connected either directly or
indirectly to each other and all the subsets of elements are related in some manner. The idea of a goal
for a system is not stated specifically; however, the various systems are said to have an output.
In his management book, William Exton Jr. states, “The term system represents the principle of
functional combination of resources to produce intended results or effects. . . . A system may be made
up of other interrelated systems, as is the body” [21]. In discussing the goal, he states, “when such a
function is not an end in itself but serves a larger function, the contributing systems are more appropriately
referred to as subsystems of the larger system” [21].
The definition of system has apparently changed little over the years. For example,
“any two or more objects interacting cooperatively to achieve some common goal, function, or purpose
constitutes a system” [26]. Another definition substitutes the word “interrelated” but is otherwise the
same [13].
Based on the literature, we can compile a new collective definition that captures the elements of the
reviewed definitions. For the purposes of this paper, the derived definition of system is a set of elements
interacting for a purpose to achieve some common goal. Furthermore these elements can achieve more
together than the sum of their actions apart.
2.2. Paradox
The other element under consideration is the term paradox. Definitions for paradox have been offered
in several contexts, such as philosophy, business, and even systems. These definitions of paradox are
fairly analogous, and they focus on the paradox itself without relation to the system.
In the realm of philosophy, Quine [43] defines paradox as, “just any conclusion that at first sounds
absurd but that has an argument to sustain it.” Furthermore he states that he “would not limit the
word ‘paradox’ to cases where what is purportedly established is true.” Farson [23] provides a similar
definition of “seeming absurdities.”
Another philosopher presents a brief history of the paradox, argument, or sophism as Aristotle called
them. In describing the word, Rescher explains that “para” is Greek for “beyond” and “doxa” is
“belief.” Therefore he uses the definition “a contention or group of contentions that is incredible.”
He goes on to distinguish two types of paradoxes. The first type is the logical paradox, described as
“a conflict of what is asserted, accepted, or believed” [46]. He calls his second category rhetorical
paradoxes, which are perversely descriptive (for example, “waking life is but a dream”). Although he
contends that the second category may lead to the former, his focus is the study of the logical paradoxes,
which he calls aporetics. An apory is “a group of acceptable-seeming propositions that are collectively
inconsistent” [46]. This description indicates an incredible contention is an inconsistent statement in an
otherwise logical argument.
Likewise Sainsbury [47] offers a logical definition, “An apparently unacceptable conclusion derived
by apparently acceptable reasoning from apparently acceptable premises.” Recognizing that paradoxes
may refer to engineered as well as organizational systems, Boardman and Sauser [14] define paradox in
a similar manner as, “an apparent contradiction.”
In dealing with management and organizational paradoxes, Lewis states, "'Paradox' denotes contradictory yet interrelated elements - elements that seem logical in isolation but absurd and irrational when appearing simultaneously" [35].
Westenholz [55] has a very narrow definition for use in business contexts. She identifies paradoxes as
contradictions that coexist and lack any choice in the situation. Furthermore she specifically insists that
the definition should be unique compared to similar terms, for instance dilemma, irony, inconsistency,
and ambivalence. Due to this limited definition, many famous paradoxes would not qualify, such as
paradoxes based on vagueness (e.g. the Sorites paradox) and paradoxes of perception (e.g. Zeno’s motion
paradoxes). The Sorites paradoxes involve ambiguous boundaries, such as when grains of sand become a
heap. Zeno of Elea was a very influential ancient Greek philosopher whose famous paradoxes attempted
to show that motion is an illusion [46].
Basically these definitions indicate there are two primary elements of paradox. A paradox involves
some form of perception of absurdity or some form of contradiction. Therefore any typology of systems
paradox should allow for both of these features.
As already stated, a trivial explanation of systems paradox is any paradox that affects a system. Using
the derived definitions of the two words creates a description that does not depend on either of the
constituent terms themselves, avoiding the self-reference noted earlier. Our definition is a theoretical one that represents the concatenation of two lexical
definitions. A systems paradox is a contradiction or some form of absurd perception related to a set of
elements interacting for a purpose. The construct systems paradox is in itself a system, which implies by
definition emergent properties that are unique to the whole rather than the components. This definition
is applied in following sections.
3. Literature review
With an accepted definition of systems paradox, attention can now turn to categorizations of paradoxes
in the literature.
3.1. Survey of categorization
A review of the literature reveals how paradoxes have been categorized in philosophical works.
Occasionally there is a lack of categorization. For example Clark [19] does not attempt to categorize
paradoxes other than listing 84 of them alphabetically. Sorensen [51] presents a chronological listing
of paradoxes, primarily concentrating on the philosophers who struggled with them. Handy [27] does
not make any attempt to categorize paradox, but rather his intention is to show how paradoxes can
be managed in business and other organizations. Making no specific claims for or against categories,
Poundstone [40] presents examples that arbitrarily support some categories.
Despite no widely accepted classification, there are many instances of grouping paradoxes. Only
dealing with a small subset, Rea [45] shows that four philosophical paradoxes traditionally handled
separately are actually one problem, which he calls “the problem of material constitution.” His reasoning
is that there is one solution applicable to all four paradoxes. He describes the four paradoxes using the
same philosophical arguments, and then demonstrates that a similar manipulation of these arguments
resolves each of the four paradoxes.
A more comprehensive example is Quine [43], who discusses three classes of paradoxes. His first
class of paradox includes those that happen to be true, which he calls “veridical.” For his second class,
he defines a “falsidical paradox” as “one whose proposition not only seems at first absurd but also is
false, there being a fallacy in the purported proof.” He places some of Zeno’s motion paradoxes as
falsidical. Finally, Quine’s third class of paradox is referenced as “antinomies,” which are created via
self-contradictions such as “this sentence is false.”
Admittedly a pure philosophy book, Rescher [46] presents categories of philosophical paradoxes,
levels of paradoxicality, and methods to resolve these paradoxes. He presents the following
characteristics that result in paradox: meaninglessness, falsity, vagueness, ambiguity and equivocation,
implausibility, unwarranted presupposition, truth-status misattribution, untenable hypotheses, and value
conflicts [46]. His method for resolving paradoxes is similar to Rea's approach, while his treatment of
levels of paradoxicality is similar to Quine's. A dissolvable paradox is one type or level that has false
or meaningless premises. A decisively resolvable paradox can be solved by altering or removing the
offending premise. The third level is an indecisively disjunctive resolution, which involves multiple
offending premises where the removal of any one would resolve the paradox. Rescher warns that level
of paradoxicality is not the same as difficulty of a paradox.
By his own admission, Rescher uses classification by subject matter. In his opinion, the major categories
of paradoxes are semantical paradoxes (those involving the ideas of truth and falsity), mathematical
paradoxes, physical paradoxes, epistemic paradoxes (those involving the ideas of knowledge and belief),
and philosophical paradoxes. Some of the categories can be broken down further, such as decomposing
philosophical paradoxes into moral paradoxes, metaphysical paradoxes, and paradoxes of philosophical
theology. An interesting observation is that Rescher organizes his book using a combination of his
categories and ways to create a paradox rather than organizing by subject matter.
In regards to categorization, Sainsbury [47] has a comparable opinion to Rescher in that he believes
paradoxes can be grouped by subject matter. Similarly this source is primarily philosophical. Sainsbury
goes further to present an invented scale to rate the degree of paradox, but he admits that his scale is very
subjective. The only use he offers for this subjective scale is to state that his book is concerned with level
6 paradoxes and higher up to 10, and he does not mention the levels again after the first chapter.
Briefly leaving the field of philosophy, Luscher et al. [37] discuss several types of paradox, namely
paradoxes of performing, paradoxes of belonging, and paradoxes of organizing. Boardman and Sauser [14] discuss several types also, namely boundary paradox, control paradox, and the paradox of diversity
or team paradox. There is similarity between the paradox of performing and the control paradox in that
both ask how one can be in charge while letting subordinates make decisions. The paradox of belonging
is similar but not exactly the same as the boundary paradox in that both are examining group boundaries.
The paradox of belonging is concerned with vague boundaries while the boundary paradox is concerned
with boundaries that allow contradictory behavior. Although these paradoxes cover many situations,
they are not exhaustive.
While the literature reveals some categorizations of paradoxes from philosophy, we would like to
propose a scheme for systems scientists. Our hope is that a categorization of different paradoxes may
lead to improved processes to deal with them. Hence the remainder of this paper presents a typology of
systems paradoxes.
3.2. Survey of logic
In the truest sense of the word, this paper proposes a logical categorization of systems paradoxes.
As stated, a systems paradox includes some form of contradiction, which implies the presence of
two different elements. The logical categorization is based on the connections of these two elements.
Propositional logic is the study of logical connectives, in part [33]. Axioms of propositional logic include
the Law of Excluded Middle and the Law of Contradiction. These two axioms state that a sentence must
be either true or false but not both or neither [25]. In the study of paradoxes, obviously these rules are
broken. Cargile [18] endorses this move, suggesting it is common practice to disregard an axiom
when ambiguous arguments are present. Although Cargile is discussing the Sorites paradox, he claims
the Excluded Middle maxim is a “natural target” of philosophers.
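As a small worked check of these two axioms (our own illustration, not drawn from the cited sources), the following snippet evaluates both laws for every truth value of p; paradoxes are precisely the cases where reasoning appears to escape these constraints.

```python
# Evaluate the Law of Excluded Middle (p or not p) and the Law of
# Contradiction (not (p and not p)) over both truth values of p.
for p in (True, False):
    excluded_middle = p or (not p)           # must always be True
    non_contradiction = not (p and (not p))  # must always be True
    print(p, excluded_middle, non_contradiction)
# True  True True
# False True True
```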
The application of logic to paradoxes is not a novel idea. Sainsbury [47] uses logic as an identification
scheme and to investigate certain paradoxes and variations of them. Rescher [46] presents many of his
propositions using first-order logic to illustrate their inconsistency. On the other hand, Slater [50] argues
that first-order logic is insufficient in some cases of paradox. In fact, there exist philosophers who agree
that the restriction of logic to true and false is unrealistic (see [6,11,12,41]). Some paradoxes are the
result of vagueness, such as the Sorites Paradox, with no clear true or false status [28,38]. Other logics,
such as fuzzy logic, have been suggested as alternative logics for paradoxes [41].
Additionally, the proposed logical categorization of systems paradoxes uses set theory, which is not
unique either. Since sets are collections of objects, it is a reasonable choice. Arguably one of the
most famous paradoxes discovered with set theory is known as Russell’s Paradox [30,47]. The paradox
involves the set of all sets that are not members of themselves. “Such a set appears to be a member of
itself if and only if it is not a member of itself, hence the paradox” [31]. A related self-reference paradox
is Curry’s Paradox [53], and another paradox discovered with set theory is the Burali-Forti paradox,
which involves the set of all ordinal numbers [17, Section 4.1].
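A hedged sketch of Russell's construction may make the self-reference concrete. Here a "set" is modelled as its characteristic predicate, a function that says whether a given set belongs to it; the names and the modelling choice are ours, not taken from the cited sources.

```python
# The Russell "set" R contains exactly those sets s that do not contain
# themselves.  Asking whether R contains itself can never settle on True or
# False: the membership test keeps deferring to itself.
def russell(s):
    return not s(s)

try:
    russell(russell)  # is R a member of R?
except RecursionError:
    # The regress mirrors the paradox: R is in R if and only if R is not in R.
    print("R in R has no stable truth value")
```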
4. Typology of systems paradoxes
The process of learning through paradox includes the components of contradictions or tensions,
paralysis or reinforcing cycles, and management [35,36]. Building on the information in the previous
sections, this paper applies set theory to categorize several types of systems
paradoxes. In other words, this typology labels the different subcomponents of the contradictions
component within the learning through paradox process. It is our hope that this typology will help
“engineers to accept and leverage paradox in structured ways that will empower them to take advantage
of the opportunity to develop innovative solutions that provide exceptional performance, affordability
and impact – in business terms, things customers may actually want” [32]. Examining systems paradoxes
via a reductionist approach may illuminate qualities otherwise missed from a holistic approach.
As was shown in Section 2, a paradox entails a perception of absurdity or some form of contradiction.
A paradox based on a perception of absurdity differs from other paradoxes in that it is only an illusion. Therefore a
special grouping is required to address the perception-of-absurdity aspect. Since any paradox adhering
to this description is only absurd in the eye of the beholder, there is no substantive contradiction. Hence a
special standalone class will be presented for systems paradoxes that exist only as beliefs.
The remaining systems paradoxes can be described as contradictions within a system. Therefore define
S as the set of elements comprising a system. The following sections present several different categories
within this set.
4.1. Conjunction systems paradox
Let us start with an obvious category. A conjunction systems paradox adheres to the definition of
a paradox supplied by Westenholz [55]. For this case, define a subset C to represent the conjunction
systems paradox, Eq. (1). The members of this set are elements of a system and their negations or opposite
elements that coexist in the system. Clearly the set C is a subset of S, and therefore
C lies within the set of systems paradoxes.
C = {p, ∼p ∈ S | (p = ∼(∼p)) ∧ (p ≠ ∼p)}    (1)
To illustrate this category, consider a system with two contradictory functions. If one function is “allow
entry”, then an opposing function is “deny entry.” This example belongs to the boundary paradox as
described by Sauser and Boardman [48].
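A minimal sketch, assuming a hypothetical negation mapping over the system's functions, shows how the membership test behind Eq. (1) can be written down; the function and data names are illustrative only.

```python
def conjunction_paradoxes(S, negate):
    """Pairs (p, ~p) that coexist in the element set S, in the sense of Eq. (1)."""
    return {(p, negate(p)) for p in S if negate(p) in S and negate(p) != p}

# Boundary-paradox style example: two contradictory functions coexist.
S = {"allow entry", "deny entry", "log access"}
opposites = {"allow entry": "deny entry", "deny entry": "allow entry"}
negate = lambda p: opposites.get(p, "not " + p)

print(conjunction_paradoxes(S, negate))
# {('allow entry', 'deny entry'), ('deny entry', 'allow entry')}
```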
4.2. Biconditional systems paradox
The next case considers the temporal element in that the contradictions follow a sequence and do not
coincide. Let B represent this biconditional systems paradox set Eq. (2). Again it is clear that the set B
is a subset of S. A systems paradox of this category would take the form “if p then ∼p but later if ∼p
then p.” Perhaps a better description of this category is “if p leads to ∼p but when ∼p occurs, it leads to
p.”
B = {p, ∼p ∈ S | (p_t0 → ∼p_t1) ∧ (∼p_t1 → p_t2), t0 < t1 < t2}    (2)
As an example, consider a software system. Suppose a certain condition during execution causes
the system, intentionally or accidentally, to produce some opposing condition. Furthermore suppose the
opposing condition causes the system to reproduce the original condition. The struggle to break free from
these conditions would result in a system crash or, at best, an error state, depending on the software
code.
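The oscillation described above can be sketched in a few lines (our own construction, not code from any cited system): condition p triggers its opposite, which in turn re-triggers p, so the program never reaches a stable state.

```python
def settle(p, steps=0, limit=10):
    # p at t0 produces ~p at t1, which produces p at t2, and so on (Eq. 2).
    if steps >= limit:
        raise RuntimeError("system never stabilizes: p and ~p keep producing each other")
    return settle(not p, steps + 1, limit)

try:
    settle(True)
except RuntimeError as err:
    print(err)   # in real software this cycle would end in a crash or error state
```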
4.3. Equivalence systems paradox
In logic, there is no distinction between the biconditional and the equivalence (compare [2,41]). Yet an
additional category arises from the logic surrounding dialetheism [42] addressing elements that exhibit
contradicting qualities simultaneously. Although the conjunctive case has been defined to address two
separate but coinciding elements, there is the possibility of a single element possessing two contradictory
qualities. In contrast, the biconditional case involves contradictory elements temporally separated with
either one leading to the other. Therefore the equivalence paradox is reserved for situations where a
single element of a system may be represented by the contradictory qualities at the same time. Define the
subset E to represent the equivalence systems paradox Eq. (3). Stated simply, the set E contains elements
that possess contradictory qualities simultaneously. Since the element is contained in the system, it is
appropriate that E is a subset of systems paradoxes.
E = {p ∈ S | p = ∼p}    (3)
As an example of this class, take a business team as a system. This team consists of some number of
team members. It is clear that there exists a boundary for this system such that there are a set number
of team members yet it is quite difficult to determine where the boundary actually lies. Everyone in the
organization identifies themselves as either in the team or not in the team, and therefore a boundary must
exist. However, try to describe the boundary. Does the office building where this team works constitute
the boundary or does the number of team members constitute the boundary? Obviously the team can
go outside the building and still be the team. If one of the team members calls out sick, the team exists
with one less team member. Hence the boundary is an element of the system that has the quality of both
existing and not existing at the same time.
In the realm of philosophy, vague boundary conditions are the topic of the Sorites paradox [47]. The
popular Sorites paradox is concerned with a heap of grain. At what point does a heap of grain stop
being a heap? There appears to be a boundary point where the grains become a heap, but this boundary
cannot be found. One attempt at solving the Sorites Paradox uses a mathematical definition similar to
the equations in this paper [18]. Basically the attempted solution states that there exists a specific value,
such that the status changes when one more item is added. Unfortunately there is no assigned quantity
for the value. Hence this solution calls for the existence of a boundary but cannot specify where it is.
Therefore the boundary exists and yet cannot be found, which is identical to the given business team
example.
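The attempted solution just described can be sketched as follows (a hypothetical illustration; the threshold N and the predicate are ours): the predicate is well formed only once some N is fixed, yet no particular N is privileged, so the boundary exists in form but cannot be located.

```python
def is_heap(grains, N):
    """The posited solution: grains form a heap exactly when grains >= N."""
    return grains >= N

# Any candidate N places the boundary somewhere, and the status flips on a
# single grain at that point; nothing selects one N over another.
for N in (10, 100, 10_000):
    print(N, is_heap(N, N), is_heap(N - 1, N))   # True, False at every N
```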
4.4. Implication systems paradoxes
Logical implications do not necessarily cause contradictions, as a statement of the form “if p then
q” should result in a truth value of true or false in all situations. But suppose the consequent is the
contradiction of the antecedent. This changes the logical statement to the form “if p then ∼p”, but this
statement does not cause a logical contradiction either. Actually the statement is true when the antecedent
is false and false when the antecedent is true. Yet in the world of systems, there is a degree of irrationality
to a statement where an element implies its opposing element (see [35]). Therefore an apparent paradox
is present in this logical form of implication. We define this category as the set of implication systems
paradoxes Eq. (4). Any system function that leads to its own contradiction populates this subset.
Unlike previous cases, there is no restriction on the relation of the contradictory elements other than a
logical implication. Therefore the elements may coexist or have temporal distinction. Note that if the
contradictory elements are interchangeable, the paradox belongs in one of the previous sets instead of
implication systems paradoxes. Hence one specific element must always imply its contradiction and not
vice versa.
I = {p ∈ S | p → ∼p}    (4)
Although logical implications do not cause contradictions, possibilities for contradictions can be found
using compound implications. In the literature, these paradoxes are known as the Paradox of Strict
Implication [54] and the Paradox of Material Implication [15,49]. These two paradoxes appear in the
systems world and therefore are considered.
Let Eq. (5) represent the strict implication systems paradox. Here there is a situation where some
event implies either one of two opposing events. In other words, some event implies any consequence
instead of a specific outcome. Take as an example a software bug that produces unexpected results.
In certain situations the correct answer is produced, while the same situation may arbitrarily produce
contradictory results at other times.
T = {q ∈ S | q → (p ∨ ∼p), p ≠ ∼p}    (5)
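As a hedged sketch of the software-bug example above (the function is our own, not from the cited sources), the classic mutable-default-argument mistake makes the same kind of call yield the expected outcome on one run and its contradiction on the next, because hidden state leaks between calls.

```python
def tag_order(order_id, tags=[]):   # bug: the default list is shared across calls
    tags.append(order_id)
    return tags

print(tag_order("A1") == ["A1"])    # True: the expected result (p)
print(tag_order("B2") == ["B2"])    # False: the contradictory result (~p) for the same kind of input
```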
Similar to the previous set, contradictory elements necessarily exist in the material implication systems
paradox Eq. (6). An example from the set M may be software situations where the same result is
produced regardless of contradictory input.
M = {p, ∼p ∈ S | ∼p → (p → q), p ≠ ∼p}    (6)
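A minimal sketch of the situation described for set M, under the assumption of a mundane coding slip: the output no longer depends on the input, so contradictory inputs lead to the same consequence.

```python
def access_granted(is_admin):
    # Bug: a constant was hard-coded where the parameter was intended.
    if True:                 # should have been: if is_admin:
        return "granted"     # the same consequence q follows from is_admin and from not is_admin
    return "denied"

print(access_granted(True), access_granted(False))   # granted granted
```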
4.5. Disjunction systems paradox (Identity)
Traditionally a disjunction yields a tautology rather than a paradox, but that does not rule out
a paradox within the system itself. As previously covered, a system is more than the
sum of its parts [29]. At the same time, the complete set of elements of a system, physical and behavioral,
comprises the system. The axiom of extensionality from set theory states that a set is determined by its
elements [34]. Considering these two depictions simultaneously results in an apparent contradiction in
that the parts of a system do not fully describe the system. Emergence may better describe this result,
which expects the system as a whole to be more than the sum of its parts (see [10]). However, there is an
implication of paradox due to this emergence. One example of this systems paradox is Boardman and
Sauser’s paradox of diversity [14].
Define the set D to represent the disjunction systems paradox or alternatively the identity systems
paradox Eq. (7). In this case, it appears the system contains an emergent element that does not exist in
the system, which is the apparent paradox.
D = {m, n ∈ S, p ∉ S | (m ∨ n) → (m ∨ n ∨ p)}    (7)
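A small sketch (our own example) of the emergent element in Eq. (7): the composed system round-trips a message intact, a property that belongs to neither component on its own.

```python
def encoder(msg):                  # component m
    return msg.encode("utf-8")

def decoder(blob):                 # component n
    return blob.decode("utf-8")

def system(msg):                   # the whole formed by m and n interacting
    return decoder(encoder(msg))

print(system("hello") == "hello")    # True: the emergent round-trip property p
print(encoder("hello") == "hello")   # False: no individual part exhibits p
```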
4.6. Perceptual systems paradox
As already stated, the set of systems paradoxes is incomplete without the ingredient of absurd perception. We add one more category defined as the set of perceived paradoxes, designated A for “apparent.”
This category is not a true subset of systems paradoxes since its members are unsubstantiated beliefs and therefore not actual elements of the system. Any paradox that is based on perception of a system rather than
reality will be placed in this bin. A philosophical example of this type of paradox would be Zeno’s
motion paradoxes [19,47,51]. These paradoxes involve the impossible versus the imperative.
4.7. Systems paradoxes
At this point, we can present a typology of systems paradoxes as defined by the equations in the
previous sections. A hierarchy diagram can represent this typology of systems paradoxes (Fig. 1). The
disjunction systems paradox and the perception paradox categories satisfy the absurd perception aspect
of the systems paradox definition while the remaining four categories are some form of contradiction.
This section and the following section provide examples to demonstrate the different sets of systems
paradoxes. Many of these examples are paradoxes documented in the literature, but given that this
typology is theoretical, some examples are the result of thought experiments. This approach has a long
history in scientific theory, and many discoveries have relied upon it. However, if the term thought
experiment is reserved strictly for a mode of scientific experimentation, the method proposed in this
paper may not qualify. Rather, the term hypothetical reasoning for non-scientific purposes may be a better
description [30]. In any case, hypothetical reasoning suffices to demonstrate the proposed typology.
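For reference, the six categories can be summarized compactly; the following sketch paraphrases Sections 4.1–4.6 in a Python enumeration of our own devising.

```python
from enum import Enum

class SystemsParadox(Enum):
    CONJUNCTION   = "an element p and its negation ~p coexist in S (Eq. 1)"
    BICONDITIONAL = "p leads to ~p, which later leads back to p (Eq. 2)"
    EQUIVALENCE   = "a single element carries contradictory qualities at once (Eq. 3)"
    IMPLICATION   = "an element implies its own contradiction (Eqs. 4-6)"
    DISJUNCTION   = "an emergent element appears in the whole but in no part (Eq. 7)"
    PERCEPTUAL    = "the contradiction is only an unsubstantiated belief"

for category in SystemsParadox:
    print(f"{category.name:13s} {category.value}")
```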
5. Test scenarios
Since this typology applies to a generic system, it should apply equally to various classifications of
systems, such as system of systems or complex adaptive systems, although the paradoxes may differ.
To demonstrate this assertion, we apply the typology to an assortment of paradoxes already
discussed.
Fig. 1. Typology of systems paradoxes.
Westenholz [55] examines organizational paradoxes and paradoxical thinking. By her description, all
considered paradoxes are coexisting contradictions. Therefore categorizing these systems paradoxes is
trivial as she explores only conjunction paradoxes by definition. One example is an organization where
employees want to make their own decision regarding wage reductions but at the same time they want the
trade union to make the agreements. Here there are two separate but opposing elements, which confirm
the paradox is a conjunction systems paradox. A similar argument applies to Sauser and Boardman’s
five characteristic paradoxes for SoS [48].
The boundary paradox argues the contradictory requirements for a boundary to keep things out and
keep things in while it must also let things out and let things in [14]. If we define an element as the
function (or component that implements the function) “keep things out,” then its negation “to let things
in” also exists. A review of the different sets leads to the conjunction systems paradox as the appropriate
choice.
The control paradox is described as an opposition of forces [14]. Leadership must have some
command and control but the team members must have some command and control also. A related
example involves reciprocal loyalty where the leadership must be loyal to the subordinates in order to get
loyalty in return. Here we have two complementary conditions that happen to face in opposite directions,
which still supports classification as a conjunction systems paradox.
The diversity paradox [14] has an element of ambiguity. The tension among diverse traits produces a
sum greater than the parts. Although unspecified, there exists an element of the system that is a member
of the system but not a component of any of the system’s parts, which is clearly a case of the disjunction
systems paradox.
For the international system, disorder is usually a cause for war which eventually results in peace.
But international peace eventually leads to disorder resulting in war. The overall situation creates a
war-and-peace paradox [8]. If peace is the contradiction of war, then over time peace leads to war, but
it is also true that war leads to peace. Since war and peace do not temporally coexist, the biconditional
systems paradox is fully satisfied.
Returning to the topic of organizational boundaries, an argument can be made that a paradox is present.
The implied systems paradox could be characterized as the statement, an organization has a structured
boundary, which defines membership, but at the same time the boundary is permeable allowing the flow
of people and information across it [16]. While the context offered is organizational, the same line of
reasoning could apply to other types of system boundaries. For example, the boundary in the form of a
scope of a system exists, but at the same time the scope may be vague. In the description as well as the
example, the situation states that the boundary exists and does not exist at the same time. This scenario is
indicative of the equivalence systems paradox.
Paradoxes of learning are described as “using, critiquing, and often destroying past understandings
and practices to construct new and more complicated frames of reference” [35]. If constructing is the
negation or opposite of destroying and new the negation of past, a systems paradox is present as “if
destroy past then construct new”, which is the implication systems paradox.
The paradoxes of organizing are defined as, “an ongoing process of equilibrating opposing forces that
encourage commitment, trust, and creativity while maintaining efficiency, discipline, and order” [35].
These opposing forces are contradictory elements. Therefore by definition this group represents the
conjunction systems paradox.
Another category from the literature, paradoxes of belonging, has the explanation, “Groups become
cohesive, influential, and distinctive by valuing the diversity of their members and their interconnections
with other groups" [35]. In other words, valuing the group members' diversity results in cohesion and
uniformity of the group. Since uniformity is the antonym of diversity, this group is another example of the
implication systems paradox.
An additional paradox of performing includes the communication of mixed messages [37]. For
example, a manager may request an open and honest discussion yet only allow orderly and civilized
statements. The possibility exists that an honest response might not be civilized. Another description
of this paradox is management must be in charge while allowing others to make decisions. In these
examples, one entity has two simultaneous contradictory qualities, which describes the equivalence
systems paradox.
Farson [23] presents a multitude of management related paradoxes, which are mostly perception
paradoxes. For example, he argues that relationships count more than skills for being a good manager.
He contends this statement is paradoxical arguing that the common perception of management is based
on skills. Another similar example is his assertion that organizations that need the most help will benefit
the least from it. These statements are paradoxical based on perception and their validity is arguable.
Hence the discussed categories of paradox from the systems or business literature fit into the proposed
typology.
Although the presented paradoxes fit the typology, they do not comprise every category. In order to
demonstrate the subcategories of the implication paradoxes, software systems are considered. There are
four logic structures in good programming that control the logic of the data flow through the program.
These structures are called the sequential, decision, case, and loop logic structures. The loop logic
structure is instantiated as the for-loop, while-loop, or repeat-loop [52]. In good programming practice, a
catch-all case, known as the otherwise case, executes when no other case is appropriate [52]. This catch-all case helps ensure the code will produce a known result. Specific examples of code producing unexpected results are the
deadlock, wrong-lock, and no-lock errors, where an input produces the wrong result [22]. In the case of a
deadlock, the result is unreachable code. The software coder expects the input to produce the appropriate
action or output, but the true result may be the desired output or not the desired output. Therefore the
situation can be described using the proposed typology as a paradox of strict implication. Basically any
software code that may produce an unexpected result falls into the strict implication category.
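The deadlock error mentioned above can be reproduced with a short sketch (our own example, not code from [22]): two workers acquire the same pair of locks in opposite order, so the expected output may never appear, which is the "unreachable code" outcome described in the text.

```python
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:
        time.sleep(0.1)            # widen the window so both workers hold one lock
        with second:               # each now waits forever for the other's lock
            print(f"{name} finished")   # under deadlock this line is never reached

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"), daemon=True)
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"), daemon=True)
t1.start(); t2.start()
t1.join(timeout=1); t2.join(timeout=1)   # give up rather than hang the demonstration
print("demo finished; the workers remain deadlocked")
```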
While designed for systems, the proposed typology is general enough to apply to philosophy in some
situations. Consider the Greek philosopher Socrates who is credited with a group of paradoxes called
“Socratic paradoxes.” Among the group is that Socrates knew that he knew nothing [44]. Based in
philosophy, one could argue that this paradox is another example of a perception paradox. However, for
the sake of argument let us assume that the statement is true. Then Socrates can be characterized as both
possessing and lacking knowledge simultaneously, meaning this paradox fits the equivalence systems
paradox.
The scope of paradox expands also into the realm of popular science fiction systems. In the novel by
Arthur C. Clarke, 2001: A Space Odyssey [20], the spaceship’s onboard computer, named HAL 9000, has
a paradox programmed into its software. The advanced computer has the conflicting directives to fully
disclose all its information while at the same time it must not disclose the true purpose of the flight. This
paradox produces homicidal tendencies in HAL as it attempts to eliminate the contradiction. Several episodes of the
original Star Trek series included software paradoxes that caused the offending machine to self-destruct.
In these cases, logic was shown to conclude in coexisting, conflicting results, or conjunction systems
paradoxes.
6. Conclusion
The idea of systems paradox requires one to maintain focus on the system rather than on the components.
We defined the term as a contradiction or some form of absurd perception related to a set of purposefully
interacting elements. The impact of the paradox is a novel problem of perspective that requires the
systems engineer to elevate his or her thinking to a level where the seemingly impossible is accepted and
perhaps inevitable.
Described mathematically, sets of systems paradoxes are identified, and these sets are assembled
into a typology. This typology expands on the research of John et al. [32] in an attempt to gain a
better understanding of systems paradoxes, which often cause confusion in the real world. Furthermore
this typology attempts to strengthen the case for paradoxical thinking in the executive’s professional
development by understanding the contradiction element within the learning process (see [35,36]). The
proposed typology classifies systems paradoxes as members of six categories. The first four categories
involve contradictions that manifest in different ways, and the disjunction systems paradox describes
elements which do not exist in the components of a system yet emerge in the system itself. The remaining
category is clearly different in that the perception systems paradox does not involve any real contradiction
within the system. Rather it involves realities of a system that defy common beliefs.
Although we hope this attempt will be of value, there is no guarantee that this typology is the best
one. More importantly, the attempt at a typology may inspire others to improve upon the idea. It is our
belief that we need to understand the situation of systems paradoxes before we can actively address them.
To this end, a typology helps us decompose the situation in order to tackle it by parts. For example,
the proposed typology allows us to reduce the problem set by isolating the set of perception paradoxes.
In future papers, we hope to explore the connection between paradoxical thinking and management
theory more deeply. Other scholars interested in this topic are encouraged to explore the systems
paradoxes in any category to determine what if any unique qualities apply. The resulting improvement in
understanding may enable both the management and systems engineering communities to take advantage
of the opportunities currently impeded by these paradoxes.
As a work in systems thinking, this paper considers multiple perspectives of the subject in question.
Adhering to the essence of paradox, a contradictory view might argue for no future research. This paper
attempts to increase an understanding for the eventual development of strategies. Will a systems paradox
typology help determine strategies for systems paradoxes? Does a systems paradox typology add value
to understanding systems paradoxes? The implication is an affirmative response, but in any case this
study is a step in improving the knowledge base. However, it is possible that the identification of systems
paradoxes is sufficient to provide opportunities that they otherwise impede. Perhaps this situation creates
the ultimate paradox, a paradox worthy of Socrates, a paradox of paradoxes. Equipping managers and
systems engineers with a means to identify paradox opens the door to new opportunities that otherwise
would go unnoticed.
References
[1] R.L. Ackoff, Towards a system of systems concept, Management Science 17(11) (1971), 661–671.
[2] A.R. Angel, C.D. Abbott and D.C. Runde, A Survey of Mathematics with Applications, (7th ed.), Reading, MA: Addison Wesley.
[3] Aristotle, Metaphysics, Book VIII, W.D. Ross, trans., Adelaide, Australia: eBooks@Adelaide. Retrieved June 16, 2008, from http://etext.library.adelaide.edu.au/a/aristotle/metaphysics/metaphysics.zip, 2007.
[4] W.R. Ashby, Introduction to Cybernetics, (2nd ed.), Great Britain: Chapman & Hall, Ltd, 1957.
[5] W.R. Ashby, Principles of the self-organizing system, in: Principles of Self-Organization: Transactions of the University of Illinois Symposium, H.V. Foerster and G.W.J. Zopf, eds, London, UK: Pergamon Press, 1962, pp. 255–278.
[6] A. Avron, Combining classical logic, paraconsistency and relevance, Journal of Applied Logic 3(1) (2005), 133–160. doi: 10.1016/j.jal.2004.07.015.
[7] K.D. Bailey, Typologies and Taxonomies: An Introduction to Classification Techniques, Quantitative Applications in the Social Sciences (1st ed.), Thousand Oaks, CA: Sage Publications, Inc, 1994.
[8] W.C. Baldwin, Modeling paradox: Straddling a fine line between research and conjecture, in: 2008 IEEE International Conference on System of Systems Engineering, presented at the 2008 SoSE, Monterey Bay, CA: IEEE, 2008.
[9] L. von Bertalanffy, The history and status of general systems theory, Academy of Management Journal 15(4) (1972), 407–426.
[10] L. von Bertalanffy, An outline of general systems theory, British Journal for the Philosophy of Science 1(2) (1950), 139–164.
[11] J. Bèziau, The future of paraconsistent logic, Logical Studies 2 (1999), 1–28.
[12] J. Bèziau, What is paraconsistent logic? in: Frontiers of Paraconsistent Logic, D. Batens, ed., Baldock: Research Studies Press. Retrieved April 27, 2009, from http://www.jyb-logic.org/wplb.pdf, 2000, pp. 105–117.
[13] B.S. Blanchard and W.J. Fabrycky, Systems Engineering and Analysis, (3rd ed.), Upper Saddle River, NJ: Prentice-Hall Inc, 1998.
[14] J. Boardman and B. Sauser, Systems Thinking: Coping with 21st Century Problems, Boca Raton, FL: CRC Press, 2008.
[15] R. Brandom, Semantic paradox of material implication, Notre Dame Journal of Formal Logic 22(2) (1981), 129–132.
[16] W.B. Brown, Systems, boundaries, and information flow, Academy of Management Journal 9(4) (1966), 318–327.
[17] A. Cantini, Paradoxes and contemporary logic, in: Stanford Encyclopedia of Philosophy, E.N. Zalta, ed., (Fall). Retrieved April 27, 2009, from http://plato.stanford.edu/entries/paradoxes-contemporary-logic/, 2008.
[18] J. Cargile, The sorites paradox, The British Journal for the Philosophy of Science 20(3) (1969), 193–202.
[19] M. Clark, Paradoxes from A to Z, (2nd ed.), New York: Routledge, 2007.
[20] A.C. Clarke, 2001: A Space Odyssey, New York: New American Library, 1968.
[21] W. Exton, Jr., The Age of Systems: The Human Dilemma, United States of America: American Management Association, Inc, 1972.
[22] E. Farchi, Y. Nir and S. Ur, Concurrent bug patterns and how to test them, in: Proceedings of the 17th International Symposium on Parallel and Distributed Processing (p. 286.2), IEEE Computer Society.
[23] R. Farson, Management of the Absurd: Paradoxes in Leadership, New York: Free Press, 1997.
[24] C. Francois, Systemics and cybernetics in a historical perspective, Systems Research and Behavioral Science 16(3) (1999), 203–219.
[25] M.F. Goodman, First Logic (illustrated edition), Lanham, MD: University Press of America, 1993.
[26] J.O. Grady, System Requirements Analysis, Burlington, MA: Elsevier Inc, 2006.
[27] C. Handy, The Age of Paradox, Boston: Harvard Business School Press, 1995.
[28] O. Hanfling, What is wrong with sorites arguments? Analysis 61(269) (2001), 29–35.
[29] F. Heylighen, Principles of systems and cybernetics: Evolutionary perspective, in: World Science: Cybernetics and Systems '92, presented at Cybernetics and Systems '92, Singapore, 1992.
[30] A.D. Irvine, Thought experiments in scientific reasoning, in: Thought Experiments in Science and Philosophy, T. Horowitz and G. Massey, eds, Lanham, MD: Roman and Littlefield, 1991, pp. 149–165.
[31] A.D. Irvine, Russell's paradox, in: Stanford Encyclopedia of Philosophy (Winter), E.N. Zalta, ed., Retrieved April 27, 2009, from http://plato.stanford.edu/entries/russell-paradox/, 2003.
[32] L. John, J. Boardman and B. Sauser, Leveraging paradox in systems engineering: Discovering wisdom, Information Knowledge Systems Management 8 (2009), 1–20.
[33] K.C. Klement, Propositional logic, in: Internet Encyclopedia of Philosophy, Retrieved April 27, 2009, from http://www.iep.utm.edu/p/prop-log.htm, 2006.
[34] A. Levy, Basic Set Theory (Revised), New York: Dover Publications, 2002.
[35] M.W. Lewis, Exploring paradox: Toward a more comprehensive guide, Academy of Management Review 25(4) (2000), 760–776.
[36] M.W. Lewis and G.E. Dehler, Learning through paradox: A pedagogical strategy for exploring contradictions and complexity, Journal of Management Education 24(6) (2000), 708–725.
[37] L.S. Luscher, M. Lewis and A. Ingram, The social construction of organizational change paradoxes, Journal of Organizational Change Management 19(4) (2006), 491–502. doi: 10.1108/09534810610676680.
[38] P. Mott, Margins for error and the sorites paradox, The Philosophical Quarterly 48(193) (1998), 494–504.
[39] D.C. Phillips, The methodological basis of systems theory, Academy of Management Journal 15(4) (1972), 469–477.
[40] W. Poundstone, Labyrinths of Reason: Paradox, Puzzles, and the Frailty of Knowledge, New York: Anchor, 1989.
[41] G. Priest, An Introduction to Non-Classical Logic, (1st ed.), Cambridge, UK: Cambridge University Press, 2001.
[42] G. Priest, Truth and contradiction, The Philosophical Quarterly 50(200) (2000), 305–319.
[43] W.V.O. Quine, Paradox, Scientific American 206(4) (1962), 84–96.
[44] H.D. Rankin, Sophists, Socratics, and Cynics, Totowa, NJ: Barnes & Noble Books, 1983.
[45] M.C. Rea, The problem of material constitution, The Philosophical Review 104(4) (1995), 525–552. doi: 10.2307/2185816.
[46] N. Rescher, Paradoxes: Their Roots, Range, and Resolution, Peru, IL: Carus Publishing Company, 2001.
[47] R.M. Sainsbury, Paradoxes, (2nd ed.), Cambridge, UK: Cambridge University Press, 1995.
[48] B. Sauser and J. Boardman, Taking hold of system of systems management, Engineering Management Journal 20(4) (2008), 44–49.
[49] M. Shaw-Kwei, Logical paradoxes for many-valued systems, The Journal of Symbolic Logic 19(1) (1954), 37–40. doi: 10.2307/2267648.
[50] H. Slater, Choice and logic, Journal of Philosophical Logic 34(2) (2005), 207–216. doi: 10.1007/s10992-004-6371-6.
[51] R. Sorensen, A Brief History of the Paradox: Philosophy and the Labyrinths of the Mind, New York: Oxford University Press, USA, 2005.
[52] M. Sprankle, Problem Solving and Programming Concepts, (4th ed.), Upper Saddle River, NJ: Prentice Hall, 1998.
[53] R.L. Stanley, Note on a paradox, The Journal of Symbolic Logic 18(3) (1953), 233. doi: 10.2307/2267406.
[54] P.G.J. Vredenduin, A system of strict implication, The Journal of Symbolic Logic 4(2) (1939), 73–76. doi: 10.2307/2269062.
[55] A. Westenholz, Paradoxical thinking and change in the frames of reference, Organization Studies 14(1) (1993), 37–58. doi: 10.1177/017084069301400104.
W. Clifton Baldwin is a senior systems engineer with the Federal Aviation Administration. Currently
he is a PhD candidate in Systems Engineering studying system of systems in the School of Systems and
Enterprises at Stevens Institute of Technology. He is a certified Project Management Professional (PMP)
from the Project Management Institute (PMI) and a Certified Systems Engineering Professional (CSEP)
from the International Council on Systems Engineering (INCOSE). Furthermore he is doing research for
the Systomics Laboratory at Stevens Institute of Technology (http://www.SystomicsLab.com).
Brian Sauser is an Assistant Professor in the School of Systems and Enterprises at Stevens Institute of
Technology. His research interests are in theories, tools, and methods for bridging the gap between systems
engineering and project management for managing complex systems. This includes the advancement of
systems theory in the pursuit of a biology of systems, system and enterprise maturity assessment for system
and enterprise management, and systems engineering capability assessment. He currently serves as the
Director of the Systomics Laboratory at Stevens Institute of Technology (http://www.SystomicsLab.com)
John Boardman is a Distinguished Service Professor in the School of Systems and Enterprises at Stevens
Institute of Technology. His research interests are in systems thinking and enterprise architecting. Prior
to this role he served as Professor of Systems Engineering at De Montfort University, Leicester. Dr.
Boardman’s previous academic appointments have taken him from the University of Liverpool, through
Brighton Polytechnic and Georgia Institute of Technology to the University of Portsmouth, which he
joined in 1990 as GEC Marconi Professor of Systems Engineering and founding Director of the School of
Systems Engineering. He is a Fellow of the Institute of Engineering and Technology and the International
Council on Systems Engineering (INCOSE).
Lawrence John is a Principal Analyst with Analytic Services, Inc. (ANSER), a not-for-profit public
institute in Arlington, VA and, through ANSER’s Applied Systems Thinking Institute, a graduate student
in the Systems Engineering program in the School of Systems and Enterprises at Stevens Institute of
Technology. A retired US Air Force officer, he has over 20 years of experience as a practicing strategic
business consultant, enterprise architect and operations analyst for decision makers at all levels within
US and international governments and military organizations. His current research program centers on
the principles of resilience in the extended enterprise, with special emphasis on “enterprise physics”
and the role of paradox. Mr. John holds a BA in Political Science from Penn State and a Masters in
Administrative Services (Public Administration) from Northern Michigan University. He is a member of
AFCEA, IEEE and INCOSE, and a senior member of the American Society for Quality.
Information Knowledge Systems Management 9 (2010) 47–74
DOI 10.3233/IKS-2010-0159
IOS Press
The role of service oriented architectures in
systems engineering
James F. Andary and Andrew P. Sage
Department of Systems Engineering and Operations Research, George Mason University, Fairfax, VA,
USA
E-mail: [email protected]
Abstract: Notions of Service Oriented Architectures (SOA) have recently become popular and potentially very useful in the
management, business and engineering worlds as the enterprise-focused Information Technology (IT) architecture of choice.
SOA is an approach to defining integration architectures based on the concept of services, where a service is defined as a
mechanism that enables access to one or more capabilities using a prescribed interface. A Service Oriented Architecture (SOA)
is a way of organizing services and associated hardware and software so that it is potentially possible to respond quickly to the
changing requirements of the marketplace. Recognizing the significant advantages of service oriented architecture applied to a
business enterprise, we ask: Can an engineering organization, specifically a systems engineering organization, realize similar
benefits? This paper will attempt to answer that question by exploring the possibility and advantages of applying SOA to the
systems engineering design process, specifically as a key enabler of the Model Based Systems Engineering (MBSE) paradigm.
1. Introduction
The term “SOA” was perhaps first used in a 1996 research paper by Roy Schulte and Yefim Natis [40].
SOA principles arose out of work with distributed objects and it became popular to refer to these
distributed objects as “services”. But even before the acronym SOA was applied, people were applying
object-oriented principles to client-server architectures, where design principles, guidelines and best
practices were derived from the experience with distributed object technologies. The first service-oriented architecture for
many people in the past was the use of the Distributed Component Object Model (DCOM) or Object
Request Brokers (ORBs) as based on the Common Object Requesting Broker Architecture (CORBA)
specification. DCOM was an application for communication among software components distributed
across networked computers. It was introduced in 1996 [40] as an extension of the Component Object
Model (COM) and it is designed for use across multiple network transports, including such Internet
protocols as the Hypertext Transfer Protocol (HTTP).
Today, improvements in technology continue to accelerate, feeding an ever-increasing pace of change
in customer requirements. Businesses must adapt rapidly in order to survive, let alone succeed, in
today’s dynamic competitive environment, and the IT infrastructure must enable the organization to
adapt. An SOA infrastructure provides just such an enabler. With SOA
an enterprise can integrate large distributed systems that offer services across multiple organizations.
Integration approaches have often failed in the past because the connected applications are separately
managed and over time they tend to change with respect to their specifications, interfaces, protocols,
processes and content, with no commitment to the overall enterprise. This considerably hinders
the ability of the enterprise to succeed.
The literature is full of articles expounding the virtues of SOA as applied at the level of the information
technology infrastructure. While some of these advantages are realizable, others tend to be exaggerated.
However, some of the more realistic claims could prove beneficial in improving the way information
technology is utilized in the systems engineering design process.
Our intent in this paper is to show, at the architectural level, how an SOA can be applied to systems
engineering organizations. We will show that SOA has a major impact when supporting Model Based
Systems Engineering (MBSE) and the systems engineering design process, particularly when working
in a collaborative team environment. We will show that the following three significant advantages can
be expected through the application of SOA in the MBSE environment:
1. Consolidation of functions and elimination of duplication;
2. Increased organizational agility; and
3. Enhanced sharing of information.
In the next section we take a look at SOAs and their fundamental structure. We then examine Model
Based Systems Engineering (MBSE) and some of the issues associated with implementing MBSE within
a systems engineering organization. Next we investigate the meaning of services in a systems engineering
environment and then define how an SOA can be integrated into the MBSE processes to resolve some of
the resulting issues. This is followed by a discussion of SOA implementation strategies and what they
mean to the systems engineering organization.
2. Service oriented architectures
Some major trends in system creation and evolution over the last decade have contributed to the
increasing popularity of SOA. First, systems and the organizations creating these systems have increased
in size, scope and complexity. Second, system dynamics and integration are increasing, with demands
for more interoperability. This forces rapid changes and adaptations as evolutionary changes occur
during (service) system operation [9].
A major result of these trends is a transition from “simple” closed system creation to distributed open
system creation and evolution. In the IT world, a closed system is a system in which specifications are
kept proprietary to prevent third-party hardware or software from being used. An open system, on the
other hand, is a system that allows third parties to make products that plug into or interoperate with it.
Another result of increasing system dynamics, complexity, and interoperability is that organizations
are transitioning their focus from capabilities to services. It is important to distinguish between these
two terms. By a capability we mean a resource that may be used by a service provider to achieve a real
world effect on behalf of a consumer of these services. A service, on the other hand, is a mechanism
that enables access to one or more capabilities using prescribed interfaces that are consistent with
constraints and policies as specified by the service description. This makes SOA an ideal architectural
approach where systems consist of service users and service providers. With an SOA an organization
can construct large distributed systems, integrating several systems that offer services that span across
multiple organizations [6].
But what is the structure of an SOA? At the most elementary level an SOA embodies the functionality
shown in Fig. 1. A service provider registers their service in a public registry. This registry is then used
by consumers to find services that match certain criteria. If the registry has such a service, it provides
the consumer with a contract and an endpoint address for that service [45].

Fig. 1. The Basic SOA Functionality (a service provider publishes its service to a registry; a consumer finds the service in the registry and then binds to and invokes it under the returned contract).
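To make the publish/find/bind cycle of Fig. 1 concrete, the following minimal Java sketch models the registry as an in-memory lookup. The class and method names (ServiceRegistry, publish, find) are ours for illustration only and assume a recent JDK; a production SOA would of course use a networked, standards-based registry rather than local objects.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Minimal sketch of the registry/find/bind pattern of Fig. 1.
// All names are illustrative; a real SOA would use a standards-based
// registry and network endpoints rather than local objects.
public class ServiceRegistry {

    // A "contract": what the service does and where to reach it.
    public record ServiceContract(String description, String endpoint) {}

    private final Map<String, ServiceContract> contracts = new HashMap<>();

    // Provider side: publish a service description into the registry.
    public void publish(String serviceName, ServiceContract contract) {
        contracts.put(serviceName, contract);
    }

    // Consumer side: find a service matching the requested name.
    public Optional<ServiceContract> find(String serviceName) {
        return Optional.ofNullable(contracts.get(serviceName));
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();

        // Provider publishes; consumer finds, then binds to the endpoint.
        registry.publish("orderTracking",
                new ServiceContract("Returns order status by order id",
                        "https://provider.example.com/orders"));

        registry.find("orderTracking").ifPresentOrElse(
                c -> System.out.println("Bind and invoke at " + c.endpoint()),
                () -> System.out.println("No matching service registered"));
    }
}
```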
In reality, an SOA for a large global enterprise can be quite complex and require continuous, real-time
access to thousands of services in the course of doing business. There are services that provide access
to data from supply chains, such as delivery dates, order tracking, accounts payable, etc. Services
keep track of customers and their orders. There are services to maintain the company product catalog,
adjust inventory, set competitive pricing and even answer customer inquiries. In 2005, Starwood Hotels
migrated their business to a services-oriented architecture by implementing services that took over the
work of the last of their mainframe applications [2]. Finding the right tools to manage and monitor as
many as 150 SOA applications was critical to this project’s success. We discuss managing and monitoring
services within a SOA later, when we take a look at the “backplane” of the SOA’s highly distributed
integration network: the Enterprise Service Bus (ESB).
2.1. Advantages of a SOA
Fundamentally, a SOA supports an information environment that is built upon loosely coupled,
reusable, standards-based services. It promotes primarily data interoperability rather than application
interoperability because the focus is on the data that is shared among the applications and not on the applications themselves. But the data and the applications cannot really be separated. Data interoperability
actually results in application interoperability. By resolving data inconsistencies between applications
and by managing diverse sets of data models a SOA improves the quality of data sent between disparate
applications and hence enables the applications to interoperate successfully. By using a SOA, capability providers can reuse what already exists rather than recreating it every time. New capabilities can
therefore be fielded much more quickly, potentially increasing an organization’s agility greatly.
Ultimately, the advantage of an SOA is that it provides the services that allow the people who need data
to discover, access, and use it when they need it.
Another advantage of a SOA is the fact that each service component in a SOA is a stand-alone unit.
This means that the service software is independent of the requester systems and so changes made to
the service software are transparent to the requesters of the service. Changes internal to the service will
not impact the requesters of the service as long as the expected service capabilities remain unchanged.
The software which implements a service can be treated as a black box. The requester of the service
easily understands the purpose of the service without knowing anything about the underlying software.
Agility is another advantage. A SOA potentially enables the enterprise to respond quickly to changes in
the business environment by changing services. Services are also reusable, which avoids the expense of
developing new software. Reusability also translates into increased reliability of the entire system.
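This black-box character can be sketched in a few lines of Java: because the requester depends only on the published interface, the provider can replace the implementation without the requester noticing. The interface and class names below are hypothetical.

```java
// Hedged sketch: the requester depends only on the service interface,
// so the provider can swap implementations without impacting requesters.
interface PricingService {
    double quote(String productId);
}

// Original implementation (e.g. backed by a legacy mainframe).
class LegacyPricingService implements PricingService {
    public double quote(String productId) { return 100.0; }
}

// Replacement implementation (e.g. a new rules engine); the interface,
// and therefore the requester's code, is unchanged.
class RulesBasedPricingService implements PricingService {
    public double quote(String productId) { return 95.0; }
}

public class Requester {
    public static void main(String[] args) {
        // The requester treats the service as a black box.
        PricingService service = new RulesBasedPricingService();
        System.out.println("Quote: " + service.quote("part-42"));
    }
}
```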
Overall, SOAs have proven effective in integrating historically separate service systems, eliminating
duplicate systems, driving down costs, reducing risk and exposure, and providing flexibility to react to
changes more quickly. So far the primary application of SOA has been in the realms of business and
finance. But the benefits of SOA are desirable for any enterprise. This leads to the question:
can the benefits of SOA be applied to an engineering design and product development enterprise? It is
a primary focus of this paper to explore how SOA can be used to improve the effectiveness of systems
engineering in an engineering design and product development enterprise.
2.2. DoD emphasizes SOA
The Department of Defense (DoD) has recently turned to SOA to promote interoperability among its far-flung service units, allies and partners. The DoD Architectural Framework (DODAF) has gone through
an evolutionary change over the past several years. The earlier version 1.5 recognized the importance
of SOA [44] but the recently released version 2.0 goes further by providing extensive guidance on the
development of architectures, thereby supporting the adoption and execution of net-centric services and
the development and use of SOAs in support of net-centric operations. DODAF version 2.0 also facilitates
creation of SOA-based architectures that define solutions specifically in terms of services that can be
discovered, subscribed to, and utilized, as appropriate, in executing departmental or joint functions and
requirements. DODAF version 2.0 is a marked change from earlier versions of DODAF. For one thing the
major emphasis on architecture development has changed from a product-centric process to a data-centric
process designed to provide decision-making data organized as information for the manager. Version
2.0 focuses on architectural data, rather than on developing individual products as described in previous
versions. This latest version supports the concept of SOA development by describing and discussing
approaches to SOA. Volume 1 provides management guidance on development of architectural views
and viewpoints, based on service requirements and Volume 2 provides the technical information needed,
data views, and other supporting resources for development of services-based architectures [14].
Margaret Myers, the Principal Director for the DoD Deputy Chief Information Officer, describes how
the DoD is embracing Web-based services and SOAs as a way of breaking down the traditional and
ineffective information stovepipes. She states that “We must enable information sharing. Uncertainty
demands agility. To confront uncertainty with agility we are leveraging the power of information. Data
must be visible, accessible, understandable and trusted” [7].
2.3. Organizations promote SOA
A number of organizations are active in SOA applications and standards. The Object Management
Group (OMG)1 has formed a working group that provides a forum for discussion of SOA definition,
methodologies, models, and both business and technical implications. The World Wide Web Consortium (W3C)2 has produced some of the most useful technical descriptions of SOA; the basic SOA
functionality is best described in the W3C Web Services Architecture working draft and in the later
W3C Working Group Note.

1 http://soa.omg.org/.
2 http://www.w3.org/.

Fig. 2. Options for SOA Integration (point-to-point connections among applications versus integration of application services through an Enterprise Service Bus).

The Organization for the Advancement of
Structured Information Standards (OASIS)3 is a not-for-profit consortium founded in 1993 that drives
the development, convergence and adoption of open standards for the global information society. OASIS
has been working on SOA standardization focused on workflows, translation coordination, orchestration,
collaboration, loose coupling, business process modeling, and other SOA concepts that support agile
computing. The SOA Consortium4 is another organization involved in SOA advocacy. It is a relatively
new group composed of end users, service providers, and technology vendors, committed to helping
enterprises successfully adopt SOA by 2010.
With the numerous interoperable technologies and standards being developed by vendors and consortiums such as the OMG and W3C, the very difficult task now is to design and architect solutions
using the right tools, applications and interface standards in order to create an architecture that integrates
and manages the shared services. With SOA, an enterprise can potentially integrate large distributed
systems that offer services across multiple organizations. As noted earlier, integration approaches have
often failed in the past because the connected applications are separately managed and tend to drift in
their specifications, interfaces, protocols, processes and content; hopefully, with SOA this will no longer
be the case.
2.4. Integrating and managing services with an ESB
Bianco et al. [6] describe two significant options for SOA integration: direct point-to-point integration
and Enterprise Service Bus (ESB) integration, as shown in Fig. 2. The ESB approach has clear advantages
over the point-to-point option. If the applications are independently managed, then the point-to-point
option has a tendency to fail for the reasons stated at the end of the last section.
On the other hand, an ESB is a proven, lightweight and scalable SOA integration platform that
delivers standards-based integration in a SOA environment. It connects, mediates, and manages
interactions between heterogeneous services, legacy applications, and packaged applications across an
enterprise-wide service network.

3 http://www.oasis-open.org/committees/tc cat.php?cat=soa.
4 http://www.soa-consortium.org/.

Fig. 3. General view of an ESB implementation of an SOA (the Enterprise Service Bus integrates and orchestrates portal, mediation, business workflow, enterprise information system, information management and enterprise services, together with external business-to-business interactions).

In addition, the ESB can provide built-in management and monitoring
capabilities, which potentially eliminate the dangers of the point-to-point option. It monitors changes in
the specifications, interfaces, protocols, processes and content of the applications thereby enabling the
ESB to adapt to the changing environment and successfully manage the user service requests in a way
that is transparent to the requester of the service [6].
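A drastically simplified sketch of this mediation role follows: requesters address the bus by service name rather than addressing providers directly, so a change of provider is absorbed by the bus. The class names are illustrative and are not drawn from any particular ESB product.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal sketch of ESB-style mediation: requesters talk to the bus,
// and the bus routes to whichever provider currently backs a service name.
public class MiniServiceBus {

    private final Map<String, Function<String, String>> providers = new HashMap<>();

    // A provider (or the bus administrator) registers or replaces a service.
    public void register(String serviceName, Function<String, String> provider) {
        providers.put(serviceName, provider);
    }

    // Requesters invoke by service name; routing is the bus's concern.
    public String invoke(String serviceName, String request) {
        Function<String, String> provider = providers.get(serviceName);
        if (provider == null) {
            return "FAULT: no provider for " + serviceName;
        }
        return provider.apply(request);
    }

    public static void main(String[] args) {
        MiniServiceBus bus = new MiniServiceBus();
        bus.register("orderStatus", id -> "Order " + id + ": shipped");

        // The requester is unaffected when the provider is replaced.
        System.out.println(bus.invoke("orderStatus", "1234"));
        bus.register("orderStatus", id -> "Order " + id + ": in transit (new system)");
        System.out.println(bus.invoke("orderStatus", "1234"));
    }
}
```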
An ESB is a standards-based integration platform that combines messaging, web services, data transformation, and intelligent routing in order to reliably connect and coordinate the interaction of significant
numbers of diverse applications across extended enterprises with transactional integrity. Chappell [8]
points out that one of the important characteristics of an ESB is that its integration capabilities can
be highly distributed and the ESB “service container” permits selective deployment of the integration
services. This is a significant advantage because it means that the ESB functionality can be distributed
across a far-reaching enterprise comprised of geographically separated business units. Here, selective
deployment means that the integration services can be introduced piecemeal, one business unit at a time.
The transition to SOA does not have to be a disruptive, one-time overhaul of the entire IT structure of the
enterprise. An ESB allows incremental adoption, as opposed to being an all-or-nothing proposition [8].
Figure 3, adapted from [24], is a high-level view of an SOA with an ESB at the core supplying connectivity among the services. The ESB is multi-protocol and it supports point-to-point and publish-subscribe
styles of communication, as well as mediation services that process messages in flight. A service resides
in an abstract hosting environment known as a container and provides a specific programming metaphor.
The container loads the implementation code of the service, provides connectivity to the ESB, and
manages service instances. Different types of services reside in different containers.
3. Model based systems engineering
INCOSE defines MBSE as the formalized application of modeling to support system requirements,
design, analysis, verification and validation beginning in the conceptual design phase, and continuing
throughout development and later life cycle phases [23]. Model based systems engineering establishes
models rather than documents as the means of information capture, transfer and collaboration. A model is
a collection of text, graphics, equations, physical renderings, etc. with the underlying semantic linkages
to associate characteristics of each view to the same or related characteristics of the other views [10].
Uniformity is achieved by expressing the models in a common modeling language, such as the Systems
Modeling Language (SysML).
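As a loose illustration of such semantically linked views (and not the data model of any particular MBSE tool), a model element can be represented as a set of view-specific artifacts together with explicit links that tie a characteristic in one view to the corresponding characteristic in another. All names below are invented for the sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Loose sketch of a model as semantically linked views: each element owns
// artifacts per view, and links record that a characteristic in one view
// corresponds to a characteristic in another. Names are illustrative only.
public class ModelElement {

    // e.g. the "requirements" view holds text, the "structure" view a block name.
    private final Map<String, String> viewArtifacts;

    // Semantic link: (sourceView, targetView, sharedCharacteristic).
    public record SemanticLink(String sourceView, String targetView, String characteristic) {}
    private final List<SemanticLink> links = new ArrayList<>();

    public ModelElement(Map<String, String> viewArtifacts) {
        this.viewArtifacts = viewArtifacts;
    }

    public void link(String sourceView, String targetView, String characteristic) {
        links.add(new SemanticLink(sourceView, targetView, characteristic));
    }

    public static void main(String[] args) {
        ModelElement antenna = new ModelElement(Map.of(
                "requirements", "The antenna shall support a 2 Mbps downlink.",
                "structure", "Block: HighGainAntenna",
                "analysis", "Link budget model, margin 3 dB"));

        // The same characteristic (downlink rate) appears in several views.
        antenna.link("requirements", "structure", "downlink data rate");
        antenna.link("structure", "analysis", "downlink data rate");

        System.out.println(antenna.viewArtifacts.get("structure"));
        System.out.println("Linked views recorded: " + antenna.links.size());
    }
}
```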
Long and Buede [29] promote MBSE as the enabler for capability-based architecting by delivering
needed insight during the systems requirements and analysis phase and enhancing communications across
multiple teams. MBSE highlights critical risks and issues, enables effective management of interfaces,
and ensures consistency while supporting change impact analysis and technology insertion [29].
Denno et al. [13] point out that the technical environment supporting systems engineering is still evolving. In that paper the authors analyze the basis of systems engineering decision making in conjunction
with the technical environment in which it may soon be performed. This analysis provides insight into
the requirements that, when met, enable a model-based systems engineering discipline.
3.1. MBSE is not fully realized yet
Although it holds considerable promise for revolutionizing the systems engineering process and
enabling systems engineering to accomplish much more than in a document-centric environment, MBSE
still has a long way to go before it is universally accepted and implemented. The International Council
on Systems Engineering [23] has developed a vision for the future development of MBSE. The roadmap
for that vision extends until the year 2020 at which time it is expected that MBSE will be widely used
throughout both academia and industry. The key enablers of a fully-developed MBSE are the emerging
modeling technologies which facilitate the exchange of information among various system viewpoints
that are developed in domain-specific languages using a variety of tools and platforms.
3.1.1. Lack of tool interoperability
Denno et al. [13] indicate that models should enable a family of more sophisticated tools, and thereby,
new processes. But the application of these new tools is often hampered by the high cost of transforming
information to a form that these tools can use. These authors emphasize the need for a formal specification
of viewpoints as well as a specification of correspondence of information across these viewpoints. They
suggest that models can be paired with information transformation engines to lower the cost of information
transformation. It is indicated that conformance-checking tools serve the important role of ensuring that
software that exchanges information by means of a shared interface specification (an exchange file
specification) will conform to the normative stipulations in that specification. Thus, conformance to
a shared exchange specification is a key enabler of interoperability or the ability of parties to work
jointly toward a shared goal. These authors also point out that information transformation techniques
are relatively immature, resulting in a lack of tool interoperability. This is a significant inhibitor to
widespread deployment of MBSE. What is needed is a set of convergent MBSE standards to remove present
impediments to adoption [23].
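Conformance checking of this kind can be illustrated with facilities in the standard JDK: if the shared exchange specification is expressed as an XML Schema, a simple tool can verify that exported model data conforms before it is passed to another tool. The schema and data below are toy examples, not an actual MBSE exchange standard.

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

// Toy conformance check: validate an exchanged model fragment against
// a shared exchange specification expressed as an XML Schema.
public class ConformanceCheck {

    static final String EXCHANGE_SPEC =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>" +
        "  <xs:element name='requirement'>" +
        "    <xs:complexType>" +
        "      <xs:attribute name='id' type='xs:string' use='required'/>" +
        "      <xs:attribute name='text' type='xs:string' use='required'/>" +
        "    </xs:complexType>" +
        "  </xs:element>" +
        "</xs:schema>";

    static final String EXPORTED_DATA =
        "<requirement id='REQ-001' text='The system shall log all faults.'/>";

    public static void main(String[] args) throws Exception {
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(
            new StreamSource(new StringReader(EXCHANGE_SPEC)));
        Validator validator = schema.newValidator();

        // Throws an exception if the exported data does not conform.
        validator.validate(new StreamSource(new StringReader(EXPORTED_DATA)));
        System.out.println("Exported data conforms to the exchange specification");
    }
}
```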
Feedback from the INCOSE MBSE Challenge provides insight into the difficulties of transforming
information to a form that all tools can use. INCOSE initiated the MBSE Challenge in an attempt to
encourage members to gain experience in applying the MBSE methodology to real-world problems. The
INCOSE Space Systems Working Group (SSWG) took up this challenge and reported on their progress
as well as some of the difficulties they encountered. Delp et al. [12] report that monolithic models are
not necessarily the best approach for all systems, that even collections of models must have a way to share
information of mutual relevance, and that this is particularly difficult when the team includes
people from different companies and countries.
3.1.2. Technology can be a hindrance
Estefan [16] in his survey of MBSE methodologies quotes Martin [31] when he says that the capabilities
and limitations of technology must be considered when constructing a systems engineering development
environment. This argument extends, of course, to an MBSE environment. Technology should not be
used only for the sake of technology as technology can either help or hinder systems engineering efforts.
Estefan [16] points to the so-called “PMTE” elements (Process, Methods, Tools, and Environment) and
says that when choosing a right mix of PMTE elements, one must always consider the knowledge, skills
and abilities (KSA) of people involved and when new PMTE elements are used, the KSAs of people must
be enhanced through special training and assignments. The time and effort required to train people to use
new technology can have an adverse effect on a project. The NASA Systems Engineering Handbook [34]
addresses this issue in a chapter called Selecting Engineering Design Tools. There it is noted that the
cost and time to perform the training for the designer to become proficient can be significant and should
be carefully factored in when making decisions on new design tools. This potentially disruptive aspect
of training is an important consideration in adapting to a different tool. Organizations must weigh the
schedule of deliverables to major programs and projects when considering a switch to a new and possibly more
effective and/or efficient tool.
Based on this discussion, we see that the path to interoperability is not achieved by requiring all
participants of a design team to use the same modeling tools. This is particularly true when there are
many participating organizations with each having experience and proficiency in the use of their particular
tool suite. Tool interoperability cannot be achieved by forcing uniformity in tool usage. As stated above,
what is needed is tool interoperability through conformance to a shared exchange specification. This is
the key to enabling a MBSE environment.
3.1.3. Models have not fully replaced documents
In the transition to MBSE it is anticipated that models will replace documents as the primary product
or artifact of systems engineering processes [23]. But engineering teams continue to generate and use
documents, and this will remain the case for the immediate future, as Estefan [16] points out
in his comprehensive survey of MBSE methodologies.
4. SOA as an enabler of MBSE
In the last section, we discovered several areas which are impediments to MBSE achieving its full
potential. In this section, we will explore ways that web services can address these same issues and take
a high-level look at an implementation of a SOA for systems engineering. To take just one example,
web services can be used to manage many documentation-related aspects of an engineering
design activity. There can be services that provide document search capability, services that maintain
document configuration control, and services that generate documents in much the same
way that the Vitech5 tool CORE generates documents automatically. However, services would not be
constrained to any single tool environment but can draw on data from multiple tools and services [16].
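As a sketch of what such tool-independent documentation services might look like, the interfaces below expose document search and document generation as services that any modeling tool could invoke; the interface names and example data are hypothetical.

```java
import java.util.List;

// Hypothetical, tool-independent documentation services. Any modeling
// tool could call these rather than relying on a built-in generator.
interface DocumentSearchService {
    // Return identifiers of library documents matching a query.
    List<String> search(String query);
}

interface DocumentGenerationService {
    // Assemble a document from model data held by other services.
    String generate(String templateId, String modelId);
}

public class DocumentationServicesDemo {
    public static void main(String[] args) {
        // In-memory stand-ins for services that would live on the bus.
        DocumentSearchService search = q -> List.of("SPEC-12", "ICD-7");
        DocumentGenerationService generator =
                (template, model) -> "Generated " + template + " for " + model;

        System.out.println(search.search("interface control"));
        System.out.println(generator.generate("design-description", "satellite-bus"));
    }
}
```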
Before we delve into the other issues with MBSE we will first take a closer look at what we mean by
systems engineering services.
4.1. Systems engineering services
Earlier we explored the benefits that a SOA brings to a business enterprise. We discussed how
organizations experienced improved communications among their many business units leading to an
increase in data sharing and interoperability. These are desirable benefits for any organization. Then we
asked ourselves the question: can these same benefits be realized in a systems engineering environment?
Clearly there must be some differences in applying SOA to different engineering organizations. To
start with, engineering is not heavily focused on customer services in the same way as is, say, a banking
enterprise. Banks have thousands of customers to whom they offer hundreds of different services
requiring all types of transactions, forms, record retention, reporting, etc., not to mention security. Also,
bank transactions must be widely available to customers through ATMs and home computers,
providing results almost instantaneously. On top of this, banks must be agile and flexible in order to
change elements of their business quickly. Rates and international monetary exchange fluctuate minute-to-minute; new product lines are offered weekly or monthly; new branches and/or ATMs are added
to the system every year. For these reasons, an appropriate SOA represents a clear advantage for the
banking enterprise.
But does a SOA make sense for an engineering enterprise? Engineering is typically focused on
designing, building, testing and delivering quality products to customers. The interaction with the
customer is not as intense as with the banking example; there are very few transactions directly with
the customer. Once the customer requirements and expectations are captured, the engineer may not
see the customer again until the final validation and delivery of the product. For some systems there is
no physical customer until the product is put on the market. The marketing department anticipates the
customer expectations and defines the requirements. What are the services that would be enabled with
an SOA in this scenario?
If we take a step back and examine how an engineering organization functions, we find that indeed
engineering makes extensive use of services and, in fact, for these services engineering is its own
customer. In this context we can loosely define a service to be any work done by one person or group
that benefits another. As such we find that services are heavily used by systems engineering, particularly
in the design phase of the product life cycle. Figure 4 shows some typical services that are accessed
during each of the phases of the life cycle.
We will focus on the design phase of the life cycle because there seems to be a greater demand for
information services during the early phases of product development, as shown in Fig. 4. Some of the
services needed during the design phase of the life cycle for a new product are: managing requirements,
checking parts lists, searching for suppliers, maintaining system models, maintaining databases, verifying
models, and searching the “lessons learned” database, etc.
5 http://www.vitechcorp.com/index.html.
Fig. 4. Segment of Product Lifecycle Showing Processes and Services (a design-phase process flow from requirements definition and operations concept development through functional and logical decomposition to design solution definition, supported by services such as requirements management, configuration management, document, parts, scheduling, verification planning, risk management, information transformation, product data management and search agent services, together with the document library, lessons learned, vendor/supplier and long term data retention databases).
4.2. Services in the MBSE literature
Friedenthal et al. [18], in describing the INCOSE 2020 Vision, talk about the elements necessary
to achieve cross domain model integration. They list such things as integrated model management; a
distributed model repository; secure, reliable data exchange; and a “publish and subscribe registry”.
Although this list does not mention SOA specifically, the mention of a “publish and subscribe registry”
certainly touches on the mechanism of an SOA implementation.
A paper by Muth et al. [33] describes “service objects” in the context of modeling. The paper identifies
the systems of concern as processing systems, i.e., systems that consist of a process control system and
a resource system, and presents a model that can act as a unifying framework for modeling them. A
service network is the primary structural model, with which basic interface and behavior specifications
are associated. All entities are service objects, or abstract objects. A service object provides a single
primitive service, but it can have any number of interfaces through which it uses primitive services
provided by other service objects. In this work, the extent of a primitive service is defined by the
corresponding service resource model. The model is presented from two major views: a functional view
and a solution view.
There is therefore a need to define an abstract machine as an entity that can represent such an
aggregation of service objects, and that is equipped with a special function, object handling, that allows
a service object in one abstract machine to access a service object in another abstract machine. An
example of standardized object handling is the CORBA platform, as described in Orfali et al. [35].
The European Space Agency conducted a study [26] of an “integrated platform for engineering data
management” that would integrate all databases used in the space system life cycle. The study envisioned
using a SOA in which the process logic would be encapsulated by services. The conceptual data model
would be used within an MDA (Model-Driven Architecture) approach to generate database schemas
for new databases or data exchange adapters to wrap existing legacy applications. But the architectural
details of the services and their access via the bus were not presented in this initial study.
Abusharekh et al. [1] discuss the specifications, methods, and constructs to implement end-to-end
SOA-based systems engineering across a federation of information domains. Their paper focuses on
SOA performance evaluation by defining a service design that will consistently yield quantifiable results.
There is no discussion in the paper on how engineering services would be implemented. The example
the authors chose involves business services in a lease approval application.
4.3. Requirements for systems engineering services
What type of services should be made available to the engineering team by way of a SOA? To answer
this question, we start by examining some of the requirements for something to be a service.
– Services should consist of systems engineering processes and the functions and activities that support
these processes.
– Services should provide the engineering team the capability to discover, access and use data where
they need it and when they need it.
– Services should allow users to reuse what already exists rather than recreating it every time.
– Services should promote data and application integration and interoperability. A single service is
not as important as how a network of loosely coupled services work together to solve the problems
of the engineering design teams.
– Services should be standards-based whenever possible.
Based on these requirements we can identify many candidate services that support the systems engineering activities during the design phase of the product life cycle. Many of these are understandably
database searches, since one of the requirements is to provide the capability to discover, access and use
data. The following is not an exhaustive list; services can easily be added or removed at any time,
and services are often unique to a particular application. But a preliminary list may look something like this:
1. Requirements management services.
2. Configuration management services.
3. Services that search for documentation in the library database (e.g. engineering standards, forms,
etc.).
4. Services that search for acceptable vendors and suppliers.
5. Services that search “lessons learned” databases. (In the past, manual searches of lessons learned
databases tended to be fruitless; the engineer often does not know where to look and the
search engines are inadequate.)
6. Services that perform information transformation.
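One lightweight way to make such a list concrete is to describe each candidate service with a small descriptor that could be published to the service registry. The descriptor fields and endpoints below are invented for illustration.

```java
import java.util.List;

// Illustrative descriptors for the candidate design-phase services listed
// above; in practice these would be published to the service registry.
public class CandidateServices {

    public record ServiceDescriptor(String name, String description, String endpoint) {}

    public static void main(String[] args) {
        List<ServiceDescriptor> candidates = List.of(
            new ServiceDescriptor("requirements-management",
                "Create, trace and baseline requirements", "http://eng.example/req"),
            new ServiceDescriptor("configuration-management",
                "Control versions of models and documents", "http://eng.example/cm"),
            new ServiceDescriptor("lessons-learned-search",
                "Search the lessons learned database", "http://eng.example/lessons"),
            new ServiceDescriptor("information-transformation",
                "Convert model data between tool formats", "http://eng.example/transform"));

        candidates.forEach(c -> System.out.println(c.name() + ": " + c.description()));
    }
}
```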
Not all services are visible to the user. Services can call other services as they integrate data and
information. Services can be embedded in the models so that they are easily accessible by the designers.
This is similar to how requirements are already embedded in SysML models. Bernard [5] shows how
to integrate requirements engineering in a model-based systems engineering (MBSE) methodology using
UML or SysML as the modeling language. From an architectural standpoint there is a layering or
hierarchy of services in which the lower level services support those at the higher level.

Fig. 5. Human Services Bus (organizational human services such as business unit, divisional, departmental, group and team services connected through a Collaboration Orchestration Bus (COB), together with collaboration and teaming tools, service orchestration design and monitoring, directory tools, and external tools for customers and partners).
4.4. Human services
The services mentioned above are services which are automated, that is, once the service request is
made, the handling of the service is performed by software agents with no human intervention. But
not all services in a systems engineering design environment can be automated. Those non-automated
engineering services are those functions that require human intervention, intelligence and decision
making. We need to take a different approach in converting these functions into services so that they
can be aligned in an SOA environment. Traditionally, in the development of a product, the various
disciplinary engineering teams are engaged at appropriate points in the product lifecycle. Systems
engineering orchestrates and coordinates the assignments and contributions of these disciplines. It
is this orchestration and coordination that we need to transform into a services structure so that the
numerous teams and their virtualized services are managed with an effective orchestration, collaboration
and coordination facility.
Bieberstein [4] describes such a facility as the logical equivalent to the Enterprise Service Bus (ESB)
for IT systems that we saw earlier in Fig. 3. He calls this coordination facility the collaboration-and-orchestration bus (COB), as shown in Fig. 5. This bus is a conceptual artifact that connects the
organizational human services by providing mechanisms for communication, coordination, and collaboration.
The collaboration tools and the collaboration-and-orchestration bus together comprise what Bieberstein
calls the Human Services Bus (HSB) [4].

Fig. 6. An Engineering Design Center.
Applying SOA to human services within an organizational structure requires viewing core tasks
and activities as units of service. Services are the building blocks of SOA applications. While any
functionality can be made into a service, the challenge is to define a service interface that is at the right
level of abstraction. Services should provide coarse-grained functionality [30]. Each team within the
organization provides a service and is specialized in delivering a particular activity or task. A chain
of services from various teams can be orchestrated to execute higher-level tasks or business objectives.
Bieberstein [4] suggests that the services teams and their core competency be publicized on an internal
electronic bulletin board. This is equivalent to the service registry for a SOA. Also published are the
governing guidelines and policies which specify how to engage their services. This is the equivalent of
the SOA governance rules.
At this point it will be helpful to take a look at an example where team services are already being
used. Here we are referring to what are called collaborative engineering environments, which are the
core of many engineering design centers. These environments have proven to be an effective means of
managing, orchestrating and coordinating the contributions of multiple disciplines in the engineering
of a product. In design centers, for example, the engineers are typically located in one room and each
engineering discipline is provided a workstation that ties it to all the other team members over a local
bus as shown in Fig. 6. The workstations can then communicate with one another and share design
information with one another. The local bus also allows the team members to access a central database
which stores the latest design along with companion information such as cost estimates and analysis
details.
But collaborative engineering teams are not always located in one room or adjacent rooms. Team
members can be geographically dispersed while remaining connected electronically. It is not uncommon
for such “virtual teams” to use distributed databases, collaborative notebooks, and other more sophisticated collaborative-work software for pooled work. But the use of such technologies requires some
preparation and training as well as integration with existing systems. How team members communicate
with one another and how they access important information are two critical factors that contribute to the
success of the team. Horizontal interfaces and dependencies with other teams, organizational functions
and external partners must be carefully managed and orchestrated [15].

Fig. 7. Collaborative Engineering Interaction (a systems engineer negotiates a task with an electrical engineer, acquires additional data from a supplier web site, receives the completed data from the electrical engineer, and stores the final data package in a shared data repository).
Figure 7, after Tom Thurman [36–38], is an example of a typical collaborative interaction within a
virtual team. Here we see a systems engineer requesting a “service” from an electrical engineer (EE),
who may actually be working in a different location. The systems engineer also receives more supplier
information from the vendor website. The service request involves a task which is negotiated with the
EE who then performs the required task and transmits the requested data to the systems engineer. The
data is also stored in a repository to preserve it and make it available to the other members of the team.
If we compare this to the basic SOA functionality discussed earlier we see many similarities. Each
workstation in a collaborative environment can be viewed as a service. Each of the discipline engineers
provides a service as part of the engineering design team. As shown in Fig. 7, the electrical engineer
provides the electrical design “service”. Similarly the thermal engineer, the controls engineer and
the mechanical engineer all provide key services as part of the design effort. But there are some
differences. The biggest issue with design center “services” invoked over the collaborative bus is that
digital communication is a necessary but not sufficient component of a successful product design exercise.
Strictly digital communication is not always interpreted the way it should be when humans are involved.
Design teams rely on verbal communication – people talking to each other – either over a two-way voice
link or face-to-face. We find that much of the managing, orchestrating and coordinating is done verbally.
For example, the systems engineer sends a request over the collaborative bus to the thermal engineer asking that
he/she provide the maximum operating temperature for a specific component. If the systems engineering
team does not understand a term or a result that they receive back from the thermal engineer, they can
discuss it verbally with the thermal engineer until they mutually resolve the misunderstanding. This
type of interaction would not be possible in a purely digital implementation. As we saw earlier, service
requests are made and responses are returned. If the service request is structured properly and follows
the appropriate protocol and syntax then the service will return an unambiguous response with no need
for any verbal communication such as, “what did you mean by this term?” or “how did you get this
result?”
This requirement for unambiguous requester/provider interaction is a challenge when working with a
diversified and dispersed design team. How can this requirement be satisfied in a SOA environment? A
partial answer would be to standardize the ontology and the semantics so that there is no ambiguity in
the human-to-human interaction, just as a standardized generic data format such as XML removes the
ambiguity in digital data. Much progress is being made to bridge the gap on ontologies and semantics
but for now we must insist that some form of voice and perhaps video be part of the architecture to
accompany the human services bus.
5. Communications link for human services
The goal in defining the SOA for systems engineering services is to construct an integrated architecture
that combines the automated, agent-based services with the human services. Since voice and perhaps
video communications are part of the human services we must look for ways to integrate voice and video
in a web services environment. There are several candidate technologies that are available for sharing
voice and video across the internet.
On the high end of the technology scale there are the webcasts and video-conferencing packages
which are popular with virtual teams and collaborative environments. Cisco WebEx6 is a particularly
popular tool that enables the sharing of computer information and applications, as well as voice
and multi-point video, in real time among members of a team, with technology that transcends firewalls
and requires very little set-up. There is no hardware to be installed; you simply need a phone and an
internet connection. For Voice over Internet Protocol (VoIP) or Internet calling participants can join
from their computer, using a computer headset with a microphone and speaker. Because WebEx is a
web-based service it can be accessed from any computer (Windows, Mac, Linux, or Solaris) as well as
an iPhone, Blackberry, or any other WiFi or 3G-enabled mobile device with no complicated installation.
One problem with a WebEx-type implementation is that users must first log into their WebEx account
to start or schedule a conference session. Participants use a call-in number to join the conference, which
may be cumbersome (and expensive) for extended engineering product design environments.
6 http://www.webex.com/.
Beyond webcasts and video-conferencing, there is an emerging technology that is probably better
suited for integrating voice and video into a real-time SOA environment. That technology is called
Unified Communications (UC). Unified communications has been the subject of thousands of press
articles, and it is constantly being promoted by vendors and analysts as the next great communications
breakthrough that every company must adopt right now in order to remain competitive [25].
UC is a communications system that encompasses a broad range of technologies and applications that
have been designed as a single communications platform. Unified communications systems generally
enable companies to use integrated data, video, and voice in one supported product. Unified communications systems typically include the means to integrate real-time or near-real time unified messaging,
collaboration and interactive systems. For example, a single user can access a variety of communication
applications, such as e-mail, Short Message Service (SMS), video, fax, voice, and others through a single
user mailbox. Additionally, unified communications has expanded to incorporate collaboration and other
interactive systems such as scheduling, workflow, instant messaging and voice response systems. Many
of the service features of UC are readily available from a variety of devices, such as PDAs, laptop
computers and other wireless devices.
An example of a Unified Communications product is the Microsoft Office Communications Server
2007 [32] that delivers a software-powered communications capability without expensive infrastructure
and network upgrades. This product integrates VoIP with email, calendaring, voice mail, unified
messaging, instant messaging, and Web conferencing to provide a streamlined experience right at the
user’s workstation rather than the disconnected experience provided by legacy systems today. A “presence
capability” displays the real-time status of employees, enabling a user to reach someone on the
first attempt using the best communications method. The Web conferencing capability provides real-time
communication and collaboration for a total virtual meeting experience that integrates data, content,
video, voice, media, and text.
Rybczynski [39] claims that UC brings together telecom and IT technologies and represents an industry
inflection point. This inflection point creates increased choice, openness and flexibility in leveraging
technology for business advantage. “Central to the transformational opportunities associated with IT
convergence is unified communications (UC), delivered as an application within a standards-based
software application framework. In this new world, business processes can be accelerated through
embedded real-time communications”.
The problem with UC is that there are presently no standards developed for the integration of the
technologies. If an organization were to implement UC they would be at the mercy of vendors, many of
whom have strengths in only one or two of the technologies involved.
For the architecture we are proposing here, an adequate solution would be to implement VoIP. This is
a mature technology that can be conveniently tied into the architecture as a service. VoIP systems use
session control protocols as well as audio and video codecs, which encode voice and video as audio or
video streams for transmission over an IP network [43]. It is important to note that video over IP networks
must satisfy much more stringent timing requirements than those encountered in voice over
IP. But present-day routers adhere to strict quality of service (QoS) requirements to ensure more than
adequate performance [42].
Figure 8 shows a typical implementation of a VoIP system. There are hardware components such as
VoIP servers, a public switched telephone network (PSTN) gateway with perhaps an Integrated Services
Digital Network (ISDN)-to-IP gateway, multi-point control units as well as a number of internet routers,
hubs and switches. (ISDN refers to a set of communications standards that enable telephone lines to
transmit voice, digital network services, and video).
Fig. 8. VoIP Implementation (controllers and data switches at each site connected through VoIP gateways, IP/PSTN gateways, and internet and wireless routers).
VoIP provides the critical “missing link” in the architecture which will connect the engineering team
members across the human services bus by way of digital data, voice and video transfer. VoIP must also
interface to the Engineering Services Bus in order to complete the SOA. We will show later how VoIP will
be integrated into the data services portion of the architecture, but first we discuss the implementation of
the data services architecture.
6. Description of the data services architecture
We have identified a number of engineering capabilities that can be supplied by services. We saw that
these services can be both agent-based and human-based. We have also seen the advantages of using an
SOA to make these engineering capabilities available to the members of the engineering design team.
In this section we will examine the architecture of the automated services portion of the SOA, i.e. the
agent-based service structure. The integration of these loosely coupled, shared services with the human
services will be discussed later in Section 7.
We will refer to the integration backbone of the services architecture as the Engineering Service Bus
(EngSB). This bus will have all the advantages of the Enterprise Service Bus mentioned earlier but
with modifications to accommodate the unique engineering services. As we will see, this gives us a
distributed, loosely coupled integration infrastructure that will be based on standards, common data
representation, web services, reliable asynchronous messaging and distributed integration components.
Fig. 9. Service Containers (applications connect to the Engineering Service Bus (EngSB) through service containers over bindings such as JMS, HTTP and C++, alongside bus services for routing, orchestration, logging and transformation, each hosted in its own service container).
The fundamental components of the EngSB architecture are called EngSB “service containers” after
Chappell [8] and are illustrated in Fig. 9. “The container is an intrinsic part of the [EngSB] distributed
processing framework. In some sense the container is the [EngSB] – more so than the underlying
middleware that connects the containers together. The container provides the care and feeding of the
service” [8]. The EngSB container provides a variety of facilities, such as location, routing, service
invocation, Quality of Service (QoS), security and management. In this way the EngSB can integrate
nicely with any of the many different standards-based interface technologies. Some of the more common Application Programming Interfaces (APIs) include the Java Message Service (JMS) API client,
C/C++/C# API client, File Transfer Protocol (FTP) client, and the Hypertext Transfer Protocol (HTTP)
API client.
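The role of the service container can be sketched as a wrapper that hosts the service implementation and adds cross-cutting facilities (here only logging) around every invocation, regardless of the binding used. This is a plain-Java illustration of the idea, not the API of any actual ESB or EngSB product.

```java
import java.util.function.Function;

// Sketch of an EngSB-style service container: it hosts the service
// implementation and adds cross-cutting facilities (here, just logging)
// around every invocation, independently of the transport used.
public class ServiceContainer {

    private final String serviceName;
    private final Function<String, String> implementation;

    public ServiceContainer(String serviceName, Function<String, String> implementation) {
        this.serviceName = serviceName;
        this.implementation = implementation;
    }

    // All bindings (JMS, HTTP, ...) would funnel requests through here,
    // so management, security and QoS hooks live in one place.
    public String handle(String request) {
        System.out.println("[container] " + serviceName + " invoked");
        String response = implementation.apply(request);
        System.out.println("[container] " + serviceName + " responded");
        return response;
    }

    public static void main(String[] args) {
        ServiceContainer partsService =
            new ServiceContainer("parts-lookup", part -> part + ": 12 in stock");
        System.out.println(partsService.handle("connector-9p"));
    }
}
```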
Regardless of which technology is used, it is the function of the EngSB to ensure that the data generated
in response to a request is transformed to a format that is readable by the requester of the service. The
important point here is that the service implementation with its interface definition is separate from the
EngSB processes and routing logic. The bus architecture of the EngSB separates the high-level SOA and
integration, including transformation, routing and the management of physical destinations as abstract
endpoints, from the details of the underlying protocols. This means that with an EngSB the underlying
protocols can change without affecting the higher-level integration functions of the bus. This gives
greater flexibility to the architecture by allowing applications using a variety of different technologies
to be plugged into the bus. The bus transformation services solve “the impedance mismatch between
applications” [8]. Packaging the interface logic as part of the service container separates the interface
definitions and service implementations from the process routing logic of the bus.
The primary function of the bus is to connect the many application services which are treated as abstract
endpoints on the bus. Two of the fundamental capabilities that form the core of the bus are messaging
and integration, which include routing and data transformation. These capabilities are implemented as
separate services so they can be independently deployed anywhere within the network. These services
are distinct from the application services, but like the application services they too are wrapped in a
service container which serves as a managed environment for the bus services.
6.1. Messaging
The messaging layer provides loosely coupled, asynchronous, reliable communications. This is
accomplished with Message Oriented Middleware (MOM) which is the backbone communications
channel that manages the passing of data between applications using self-contained units of information
called messages. The EngSB removes the low-level complexities of using a MOM by delegating that
responsibility to the service container. An application uses an API to communicate through a messaging
client that is provided by the MOM vendor. The data exchange architecture approach uses a canonical
data format so that applications that are plugged into the bus do not need to know how other applications
represent data. Each application needs to be concerned only with how it converts to and from the
canonical format. The service container then manages such things as establishing a connection with the
messaging server; creating publishers, subscribers, queue senders, and queue receivers; and managing
the transactional demarcation and recovery from failure. Each participant in a message exchange needs
to be able to rely on transactional integrity in its interaction with the bus, not with the other
applications that are plugged into the system.
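The canonical-format approach can be sketched as follows: each application supplies only its own converters to and from the shared representation, and nothing but the canonical form travels on the bus. The CanonicalPart record and the two native formats are invented for the illustration.

```java
// Sketch of canonical-format exchange: each application converts to and
// from the shared representation; applications never see each other's
// native formats. The formats and names here are invented.
public class CanonicalExchange {

    // The canonical representation carried on the bus.
    public record CanonicalPart(String partNumber, double massKg) {}

    // Application A's native format: "P-100;2.5kg"
    static CanonicalPart fromAppA(String nativeA) {
        String[] fields = nativeA.split(";");
        double mass = Double.parseDouble(fields[1].replace("kg", ""));
        return new CanonicalPart(fields[0], mass);
    }

    // Application B's native format: "partNumber=P-100, massGrams=2500"
    static String toAppB(CanonicalPart part) {
        return "partNumber=" + part.partNumber()
             + ", massGrams=" + Math.round(part.massKg() * 1000);
    }

    public static void main(String[] args) {
        // A publishes in its own format; B receives in its own format.
        CanonicalPart onTheBus = fromAppA("P-100;2.5kg");
        System.out.println(toAppB(onTheBus));
    }
}
```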
6.2. Bus integration
Integration capabilities of the bus are implemented as separate services, such as transformation services,
content-based routing services and a service that logs messages for tracking purposes. Other services
include a SOA supervisor, an orchestration service and a registry service. Perhaps the most distinguishing
characteristic of the EngSB is its ability to be highly distributed. Many things contribute to making the
EngSB highly distributed, but the three components that stand out the most are the use of abstract
endpoints for representing remote services, Internet-capable MOM, and a distributed lightweight service
container.
If XML is the generic data format, the transformation service would apply an XSLT (eXtensible Stylesheet Language Transformations) style sheet to convert the XML messages.
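Because XSLT support is part of the standard JDK, a transformation service of this kind can be sketched directly with javax.xml.transform; the style sheet and message below are toy examples.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

// Toy transformation service: apply an XSLT style sheet to an XML message,
// as an ESB/EngSB transformation service would do for messages in flight.
public class TransformationService {

    static final String STYLESHEET =
        "<xsl:stylesheet version='1.0' " +
        "    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
        "  <xsl:template match='/part'>" +
        "    <component><id><xsl:value-of select='@number'/></id></component>" +
        "  </xsl:template>" +
        "</xsl:stylesheet>";

    static final String MESSAGE = "<part number='P-100'/>";

    public static void main(String[] args) throws Exception {
        Transformer transformer = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(STYLESHEET)));

        StringWriter converted = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(MESSAGE)),
                              new StreamResult(converted));
        System.out.println(converted); // the message in the requester's format
    }
}
```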
The structure of the SOA environment is shown in Fig. 10, adapted from [44], and is comprised of
three layers. The process layer contains the flow of the systems engineering process, with service calls
as needed. The service layer contains the engineering application services and the EngSB services. The
EngSB is composed of the Message Oriented Middleware (MOM), the SOA supervisor, the orchestration
service, the registry service, and the service containers. The physical layer contains workstations and
servers which represent the physical nodes of the network and communication networks needed for this
SOA environment.
Fig. 10. Structure of the SOA Environment.
7. The integrated architecture
We have now seen two different bus architectures that serve to streamline the access to systems
engineering services. There is the Human Services Bus (HSB) for the human services and the Engineering
Service Bus (EngSB) for the agent-based services. The goal now is to integrate the two into one global
solution. In Section 5 we saw how Voice and Video over IP (VoIP) was the technology chosen to link
the human services together. This enables the communication among the engineering team members,
with real-time access to voice and video at each of their workstations. But the team members also
require access to all the agent-based services of the EngSB at their workstation. This is accomplished
by co-locating sections of the EngSB at each remote site. An important advantage of the EngSB service
containers is that they allow selective deployment of integration capabilities when and where they are
needed, with no integration brokers or application servers required [8].
There are actually two areas where the HSB and EngSB intersect. First and foremost there is the
electronic messaging and data handling which links all of the services of the EngSB to the engineering
team members who are viewed as “services” on the HSB. The EngSB is the backbone of the entire
development team and fulfills the electronic messaging and data handling part of the Collaborative
Orchestration Bus (COB) that was discussed in Section 4. But there is a second link between the
HSB and the EngSB and that is the services which the EngSB hosts to manage and monitor the VoIP
communications system for the HSB. The HSB is tied into the EngSB by way of a dedicated network
service which manages and partitions the VoIP network of routers. The management service provides
greater Quality of Service by assigning IP addresses, and monitoring and controlling throughput levels,
latency and transmission speeds.
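As a rough sketch only, the kind of management interface such a dedicated network service might expose could look like the following; the interface, its operations, and their parameters are hypothetical illustrations, not taken from the paper.

// Hypothetical sketch of the network management service the EngSB could host for the HSB's
// VoIP network. Names and operations are illustrative only.
public interface VoipNetworkManagementService {

    // Assigns an IP address to a VoIP endpoint (e.g. an engineer's workstation).
    String assignIpAddress(String endpointId);

    // Returns the current throughput, in kilobits per second, observed on a network partition.
    double currentThroughputKbps(String partitionId);

    // Returns the current one-way latency, in milliseconds, observed on a network partition.
    double currentLatencyMillis(String partitionId);

    // Caps the transmission rate of a partition to protect voice and video quality of service.
    void limitTransmissionRate(String partitionId, double maxKbps);
}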
Shinder [40] points out that the nature of VoIP introduces some security issues. VoIP calls travel over
a packet-switched network, which means that VoIP is inherently more vulnerable to attack than the PSTN because of the public nature of the IP network and its protocols. Intruders who have access to
the network can use “sniffer” software to capture the packets containing the voice data and use readily
available tools to reassemble them and eavesdrop on conversations. However, Shinder [40] shows that
by taking a carefully planned, multi-layered approach to securing VoIP networks, companies can make
VoIP as secure as – or even more secure than – traditional phone systems. The software needed to
provide this security can be readily implemented in services that plug into the EngSB.
Making the VoIP network software-enabled allows it to be seamlessly integrated with the Engineering Service Bus (EngSB) in a way that connects people with both voice and data wherever they are and on any device that they are using.
Fig. 11. Integrated Bus Architecture.
Figure 11 illustrates the connection between the HSB and the EngSB where messaging and data traverse
the bus allowing the engineering team members access to all the engineering services and applications
on the EngSB. There is also the voice and video traffic which traverses the same bus but primarily
connects the human elements across the human services bus. This voice and video traffic is monitored
and controlled by a service which resides on the EngSB. By means of this integrated architecture each
team member can access engineering services at his/her workstation while staying in voice and video
contact with other members of the engineering team. Video will be displayed as a window on the
engineer’s computer monitor and voice will be available through the computer microphone and speakers.
An available service maintains the phone directory and allows one to easily initiate either a single person-to-person call or a multi-person conference call.
The environment established by this two-bus SOA architecture is essentially no different from the one-room engineering design center shown in Fig. 6, yet the participants can be geographically separated anywhere in the world, with everyone working to the same processes, sharing the same models and information, and having access to the same data. It is this type of information-sharing environment that enables the full potential of model-based systems engineering.
8. How does this SOA benefit MBSE?
Earlier, in Section 3, we identified several areas where MBSE has not realized its full potential. These
areas involved the difficulties in sharing information and models among disparate engineering tools –
we termed this tool interoperability – and the continued use of documents instead of models in the
engineering of systems. The SOA that has been proposed here offers a solution to these issues.
Addressing the interoperability issue, we have seen how the SOA will tie the engineering team members
together in an environment that provides transformation services for data, information and models. These
services will transform the information into a neutral format at its point of origin within the SOA. Based
on the metadata attached to the information the EngSB will route the information to its destination and,
through the services of the local service container, the information will be translated back into a format
that is accepted by the recipient application. The tools at either end do not need to know the acceptable
format at the other end nor do they have to be concerned with implementing software to make any
translations. The EngSB, through its resident services, will identify and select the appropriate translator
of the information. This will be completely invisible to the engineers at either end of the transmission.
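A minimal sketch of this metadata-driven routing and translation step; the metadata key, registry maps, and translator interface are illustrative assumptions, not the EngSB's actual design.

import java.util.Map;

// Illustrative sketch of metadata-driven routing on the bus: the router reads metadata attached
// to a neutral-format message, picks the destination endpoint, and hands the payload to the
// translator registered for that endpoint's native format. All names here are hypothetical.
public class ContentBasedRouter {

    // Translates a payload from the neutral format into an application's native format.
    public interface Translator {
        String fromNeutralFormat(String neutralPayload);
    }

    private final Map<String, String> destinationByArtifactType;  // e.g. "requirement" -> "reqToolEndpoint"
    private final Map<String, Translator> translatorByEndpoint;   // endpoint -> its format translator

    public ContentBasedRouter(Map<String, String> destinationByArtifactType,
                              Map<String, Translator> translatorByEndpoint) {
        this.destinationByArtifactType = destinationByArtifactType;
        this.translatorByEndpoint = translatorByEndpoint;
    }

    // Routes a neutral-format payload according to its metadata and returns the translated
    // result together with the endpoint it should be delivered to.
    public Delivery route(Map<String, String> metadata, String neutralPayload) {
        String endpoint = destinationByArtifactType.get(metadata.get("artifactType"));
        Translator translator = translatorByEndpoint.get(endpoint);
        return new Delivery(endpoint, translator.fromNeutralFormat(neutralPayload));
    }

    // The routed, translated message ready for the recipient application.
    public record Delivery(String endpoint, String payload) {}
}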
Peak [36–38] claims that what is required for realizing MBSE is a foundational shift in engineering thinking from the traditional computing viewpoint to an information/knowledge-based modeling
viewpoint. The traditional view uses tools, data files, drawings and documents in the engineering of a
system. The paradigm shift is toward model creation and interaction; model connection, associativity
and interoperability; knowledge capture and knowledge representation.
SOA is the framework for hosting such a paradigm shift. The SOA environment makes the engineering
processes, methods and tools easily accessible to the engineer. Through its services the SOA provides
requirements analysis, system functional analysis, architectural design, and the execution of design
alternatives for rapid evolution of a design through modeling activities which are integrated through
logical interoperability. In addition, SOA invokes standard services for implementing reusable data
archives and data exchanges.
Achieving the full information/knowledge-based modeling viewpoint within the systems engineering SOA will require standards for the exchange of product model data. The evolving technologies of the
ISO 10303 application protocols (APs) are a promising means for accomplishing the services that will perform the transformations of the engineering information. ISO 10303, commonly referred to as STEP (STandard for the Exchange of Product model data), is an International Standard for the computer-interpretable representation and exchange of product data. The objective is to provide a mechanism that is capable of describing product data throughout the life cycle of a product, independent of any particular system. The nature of this description makes it suitable not only for neutral file exchange, but also as a basis for implementing and sharing product databases and archiving. There are
over forty APs associated with ISO 10303, with the majority dedicated to the exchange of mechanical
and electrical engineering model data. AP233 (see http://www.ap233.org/) is a more recent AP dedicated to the systems engineering
domain. It is central to all the APs within the ISO standard. AP233 provides the necessary rigor and
formalism to guarantee the unambiguous interaction that is needed to implement a full SOA for systems
engineering. Frisch [19] indicates that AP233 can be implemented as an API in the envelope content of a web service. In other words, the data transformation service of AP233 would reside as an API in the service container of the application that requires it. In this way the systems engineering modeling tools
can exchange files in an unambiguous way, thereby enabling the full potential of MBSE.
With regard to the MBSE document vs. model issue, the SOA can make the use of models extremely
appealing through the use of services. With universal access to a common modeling language, common
semantics and ontology, unambiguous data exchange, and data archiving in neutral formats, the use of
models will become the knowledge capture and knowledge representation method of choice. The abandonment of documents in favor of models is a cultural change, however, so it may take time even for a SOA environment to enable the transition to a full model-based approach. All things considered, it is easy to see that the SOA will be an enabler of model-based systems engineering.
9. Implementation strategies
Implementing any major new IT system within an enterprise can be disruptive to the enterprise and
it can even be catastrophic if not done correctly and efficiently. There are examples where massive,
ill-planned, enterprise-wide IT revisions have disrupted service, resulting in lost business and long, expensive recovery times. Such an undertaking should not use the "big-bang" approach
in which everything is done all at once. Implementation efforts must be paced to align with the
organization’s overall ability to absorb change. It is easy to lay out an aggressive plan on paper. However,
to be successful, consideration must be given to the capabilities, readiness, and past experiences of the
organization to develop a realistic, achievable plan. This includes sequencing the deployment of IT
architecture and engineering capabilities into phases of progressive capability.
A big advantage of selecting an EngSB backbone for the SOA is that it can be incrementally implemented. Distributed service containers provide selective deployment of integration services that are
independently scalable [8]. This means that there is no major, one-time overhaul of the IT infrastructure
of the enterprise. The EngSB allows incremental adoption, as opposed to being an all-or-nothing proposition. The federated/autonomous capabilities of the EngSB also contribute to the ability to adopt an EngSB on a project-by-project basis, as shown in Fig. 12. Incrementally staged deployments of EngSB integration projects can provide immediate value while working toward the broader organizational initiatives [8].
“Because integration capabilities are themselves implemented as services, they can be independently
deployed anywhere within the network. The result is precise deployment of integration capabilities at
specific locations, which can then be scaled independently as required. It also means that services can
easily be upgraded, moved, replaced, or replicated without affecting the applications that they cooperate
with” [8].
Fig. 12. Co-Located Deployment of EngSB (EngSB instances deployed at Projects A, B, and C, interconnected by standards-based messaging with single-point bus management).
Standards-based messaging forms the core of the bus deployment, interconnecting the various remote
integration services. Chappell [8] points out that another important feature of the EngSB is that the bus
management traffic can share the bus with the application traffic and therefore does not require direct
connection to each container. The EngSB service container handles the inflow and outflow of management
data such as configuration, auditing, and fault handling. Sharing the bus eliminates the need for any
additional holes in firewalls. An example of a management interface implementation would be the Java
Management eXtensions (JMX) if the EngSB implementation supports Java.
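For example, a bus service container could expose its management data through a standard MBean. This is a minimal sketch with hypothetical attribute names, assuming a Java-based EngSB implementation; it is not the paper's design.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Illustrative JMX instrumentation for a bus service container; attribute names are hypothetical.
public class ServiceContainerManagement {

    // Standard MBean interface: getters become read-only management attributes.
    public interface ServiceContainerStatusMBean {
        long getMessagesProcessed();
        int getFaultCount();
    }

    // Simple implementation that a service container could update as it handles traffic.
    public static class ServiceContainerStatus implements ServiceContainerStatusMBean {
        private volatile long messagesProcessed;
        private volatile int faultCount;

        public long getMessagesProcessed() { return messagesProcessed; }
        public int getFaultCount() { return faultCount; }

        public void recordMessage() { messagesProcessed++; }
        public void recordFault() { faultCount++; }
    }

    public static void main(String[] args) throws Exception {
        // Register the MBean so a JMX console can monitor the container remotely.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ServiceContainerStatus status = new ServiceContainerStatus();
        server.registerMBean(status, new ObjectName("engsb:type=ServiceContainerStatus"));
        status.recordMessage();  // the container would call this as messages flow through
        System.out.println("Messages processed: " + status.getMessagesProcessed());
    }
}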
Another extremely important consideration when implementing an SOA is the need for standards-based interfaces. Vendor integration broker projects can be expensive due to the proprietary nature of
their technology. Using integration brokers often results in vendor lock-in and makes any modifications
expensive, with prohibitive integration costs for software licensing and consulting services.
Even with all the integration advantages of the EngSB, Halley [22] points out that transitioning to a
SOA is a difficult task and the system engineering organization takes the central role in the transition.
“Despite the benefits of the new architecture for solving the complexities of large scale integration, little
is known about the most effective way of transitioning IT architecture to a Service Oriented Architecture.
The transition involves uncoupling pre-existing systems, creating modular components, and loosely recoupling these components via common standards. This IT transformation must also be accomplished
without disrupting current IT mission critical systems and operations”.
Halley [22] developed an “issues framework” to guide the transformation based upon what he observed
as successful and unsuccessful transformations. By focusing on these issues, he claims that systems
engineers were better able to successfully implement an SOA initiative.
10. Conclusion
In this paper we have examined service oriented architectures (SOA), which are architectural designs
for linking business and computational resources on demand to achieve the desired results for service
consumers, which can be end users or other services.
We have shown that from a systems engineering perspective there appear to be significant advantages to
structuring systems engineering functions along the lines of a SOA, particularly in the product formulation
and design phases of the product lifecycle.
The implementation of a SOA presented here is indeed a new and viable approach to realizing a more
capable structure for the engineering of systems. We have seen how SOA can be an enabler of MBSE.
We also believe that this is the first study to address human contributions as services within the context
of a SOA.
This paper is an architecture study; we have focused on the high-level architecture with only passing mention of the IT implications. Also, we have only addressed the early design and formulation phases of the product lifecycle when, in reality, a SOA can apply equally well to the later phases of the lifecycle.
There are certainly topics for future work that need to be addressed. One important topic is the trust
and verification of services. If an engineering team is using services that were hosted by an outside
organization they need to trust the data that they receive from that service. Somehow they need to
verify that the service is providing correct information. Denno [13] suggests that the integrity of models
can be verified by tools mechanizing the constraints on usage found in the formal description of the
models. Probabilistic techniques such as [28] can be used to express confidence in a relationship that
spans viewpoints. More explicit expression of the relation may be possible, but supporting technology
in this area is less mature. Bayesian knowledge bases provide a characterization of validity across
viewpoints [28].
Of course, if all the services originate within the engineering organization then the task of verification
is an easier one. But even in that case there needs to be a process for keeping track of all the models
and integrating them. Team members need a way to search for specific models and to check a model’s
heritage – its revision history. The solution, of course, will be in the way meta-models and metadata are
created and managed.
Security is another issue which must be addressed by future research. Voice, video and data need to
be encrypted if they are sent over public networks. But the downside of encryption is that it has Quality
of Service and latency issues. If voice, video and data travel over a secure local area network (LAN)
within the confines of the engineering organization then the security issue is not as significant.
Finally, the impact of an SOA implementation on the engineering organization must be addressed.
How do we get the engineers to embrace the new technology and to work in this new environment? More
importantly, how should we structure the organization to make efficient use of the new technology?
Similar to the transformation of IT systems to SOA there is a parallel transformation required for organizational structures [20,21]. IT architectures often parallel the organization structure of the enterprise.
This relationship between organization and architecture is already a heated subject of discussion. Should
the architecture follow the organization structure, or vice versa? Should we compromise architectural
integrity to align better with the organization? Or should we adapt the organization to serve the desired
architecture? [9]. These are important questions to be examined in future research.
11. Acronyms
AP – Application Protocol
API – Application Programming Interface
CBR – Content-Based Routing
DODAF – Department of Defense Architectural Framework
DOM – Document Object Model
EDI – Electronic Data Interchange
EngSB – Engineering Service Bus
ESB – Enterprise Service Bus
IB – Integration Broker
ISDN – Integrated Services Digital Network
J2EE – Java 2 Enterprise Edition
JBI – Java Business Integration
JCA – J2EE Connector Architecture
JCP – Java Community Process
JMS – Java Message Service
JMX – Java Management eXtensions
LAN – Local Area Network
MDA – Model-Driven Architecture
PSTN – Public Switched Telephone Network
PBX – Private Branch eXchange
QoP – Quality of Protection
QoS – Quality of Service
SAX – Simple API for XML
SMS – Short Message Service
SMTP – Simple Mail Transfer Protocol
SOAP – Simple Object Access Protocol
STEP – STandard for the Exchange of Product model data
UC – Unified Communications
VoIP – Voice (and/or Video) over Internet Protocol
XML – eXtensible Markup Language
XSLT – eXtensible Stylesheet Language Transformation
References
[1] A. Abusharekh, L. Gloss and A.H. Levis, Evaluation of SOA-Based Federated Architectures, Systems Engineering, 2010 (to be published); currently in INCOSE/Wiley Early View at http://www3.interscience.wiley.com/journal/116837515/issue.
[2] C. Babcock, Starwood Hotels Continues Its Migration from Mainframe to Services-Oriented Architecture, InformationWeek, July 21, 2005. http://www.informationweek.com/news/showArticle.jhtml?articleID=166401801#.
[3] L. Baker, P. Clemente, R. Cohen, L. Permenter, B. Purves and P. Salmon, Foundational Concepts for Model Driven System Design, white paper, INCOSE Model Driven System Design Interest Group, International Council on Systems Engineering, July 15, 2000. http://www.ap233.org/ap233-public-information/reference/PAPER MDDE-INCOSE.pdf.
[4] N. Bieberstein, S. Bose, L. Walker and A. Lynch, Impact of service-oriented architecture on enterprise systems, organizational structures, and individuals, IBM Systems Journal 44(4) (2005).
[5] Y. Bernard, Requirement management in a full model-based engineering approach, submitted to Systems Engineering, 2009.
[6] P. Bianco, R. Kotermanski and P. Merson, Evaluating a Service-Oriented Architecture, Software Engineering Institute, Technical Report CMU/SEI-2007-TR-015, 2007.
[7] B. Bradley, SOAs Enable Collaboration Within the Department of Defense, CIO Magazine, IT Drilldown, 2007. http://www.cio.com/article/110100/SOAs Enable Collaboration Within the Department of Defense.
[8] D. Chappell, Enterprise Service Bus, O'Reilly Publishers, Sebastopol, CA, 2004.
[9] R. Cloutier, Model Driven Architecture for Systems Engineering, Paper No. 220, Proceedings of the Conference on Systems Engineering Research (CSER), University of Southern California, Los Angeles, CA, 2008.
[10] D. Cocks, M. Dickerson, D. Oliver and J. Skipper, Model Driven Design, INCOSE INSIGHT Magazine 7(2) (July 2004), 5–8.
[11] M. Daconta, L. Obrst and K. Smith, The Semantic Web: A Guide to the Future of XML, Web Services, and Knowledge Management, Wiley Publishing, Indianapolis, IN, 2003.
[12] C. Delp, C. Lee, O. de Weck, C. Bishop, E. Analzone, R. Gostelow and C. Dutenhoffer, The Challenge of Model-based Systems Engineering for Space Systems, INCOSE INSIGHT 11(5) (2008).
[13] P. Denno, T. Thurman, J. Mettenberg and D. Hardy, On enabling a model-based systems engineering discipline, Proceedings of the 18th Annual INCOSE International Symposium, Amsterdam, 2008. http://www.mel.nist.gov/publications/view pub.cgi?pub id=824653.
[14] DoD Architecture Framework (DODAF), Version 2.0, May 2009. Available at http://cio-nii.defense.gov/sites/dodaf20/.
[15] D. Duarte and N. Snyder, Mastering Virtual Teams, Third Edition, Jossey-Bass Publishers, 2006.
[16] J. Estefan, Survey of Model-Based Systems Engineering (MBSE) Methodologies, Rev. B, 2008. http://www.omgsysml.org/MBSE Methodology Survey RevB.pdf.
[17] J. Fisher, Model-Based Systems Engineering: A New Paradigm, INCOSE Insight 1(3) (1998).
[18] S. Friedenthal, R. Griego and M. Sampson, INCOSE Model Based Systems Engineering (MBSE) Workshop Outbrief, presentation, January 26, 2008.
[19] H. Frisch and C. Stirk, ISO STEP-AP233 Transition Development to Enablement, Presentation to the National Defense Industrial Association (NDIA) Modeling and Simulation Committee, June 2007. http://www.ap233.org/ap233-public-information/presentations/ap%20233%20ndia%20v3.ppt/view.
[20] J. Galbraith, Competing with Flexible Lateral Organizations, Addison Wesley, Reading, MA, 1994.
[21] J. Galbraith, Designing the Global Corporation, Jossey-Bass, San Francisco, CA, 2000.
[22] M. Halley, System Engineering Issues in the Transformation to Service Oriented Architecture, The Center for Enterprise Modernization, The MITRE Corporation, presented at INCOSE International Symposium, 2005.
[23] INCOSE, Systems Engineering Vision 2020, INCOSE-TP-2004-004-02, version 2.03, 2007.
[24] IBM, SOA programming model for implementing Web services, Part 1: Introduction to the IBM SOA programming model, 2005. http://www.ibm.com/developerworks/webservices/library/ws-soa-progmodel/.
[25] B. Kelly and J. Neville, A Framework for Deploying Unified Communications, Wainhouse Research, September 2008. http://www.ivci.com/pdf/whitepaper-framework-for-deploying-unified-communications-wainhouse.pdf.
[26] H.P. de Koning et al., Integration and Collaboration Platform for the Engineering Domains in the Space Industry, presentation at the NASA/ESA PDE Workshop, Santa Barbara, CA, May 2007. http://step.nasa.gov/pde2007.html.
[27] E. Herzog and A. Törne, AP-233 Architecture, Real-Time Systems Laboratory, Department of Computer and Information Science, Linköpings Universitet, Sweden, 2000. http://www.ap233.org/ap233-public-information/reference/PAPER Herzog-Toerne-SEDRES-AP233-Architecture.pdf/view.
[28] K.B. Laskey, MEBN: A Language for First-Order Bayesian Knowledge Bases, George Mason University, Department of Systems Engineering and Operations Research, 2007. http://volgenau.gmu.edu/~klaskey/papers/Laskey MEBN Logic.pdf.
[29] J. Long, Model-Based System Engineering: The Complete Process [a tutorial], Vitech Corporation with Dr. Dennis M. Buede, Innovative Decisions, Inc., Vienna, Virginia, 2008.
[30] Q.H. Mahmoud, Service-Oriented Architecture (SOA) and Web Services: The Road to Enterprise Application Integration (EAI), Sun Microsystems, April 2005. http://java.sun.com/developer/technicalArticles/WebServices/soa/.
[31] J.N. Martin, Systems Engineering Guidebook: A Process for Developing Systems and Products, CRC Press, Inc., Boca Raton, FL, 1996.
[32] Microsoft, Microsoft Office Communications Server 2007 R2. http://technet.microsoft.com/en-us/library/dd440728(office.13).aspx.
[33] T. Muth, D. Herzberg and J. Larsen, A Fresh View on Model-Based Systems Engineering: The Processing System Paradigm, presented at INCOSE International Symposium, 2001.
[34] NASA, Systems Engineering Handbook, NASA-SP-2007-6105, Rev. 1, 2007. http://education.ksc.nasa.gov/esmdspacegrant/Documents/NASA%20SP-2007-6105%20Rev%201%20Final%2031Dec2007.pdf.
[35] R. Orfali, D. Harkey and J. Edwards, The Essential Distributed Objects Survival Guide, John Wiley & Sons, Inc., 1996.
[36] R. Peak et al., Progress on Standards-Based Engineering Frameworks that include STEP AP210 (Avionics), PDM Schema, and AP233 (Systems), Engineering Framework Interest Group (EFWIG), presented at the 2002 NASA-ESA Workshop on Aerospace Product Data Exchange, ESA/ESTEC, Noordwijk (ZH), The Netherlands, April 9–12, 2002. http://eislab.gatech.edu/pubs/conferences/2002-apde-peak-stds-based-efws/.
[37] R. Peak, R. Burkhart, S. Friedenthal, M. Wilson, M. Bajaj and I. Kim, Simulation-Based Design Using SysML – Part 1: A Parametrics Primer, INCOSE International Symposium, San Diego, 2007. http://eislab.gatech.edu/pubs/conferences/2007-incose-is-1-peak-primer/2007-incose-is-1-peak-primer.pdf.
[38] R. Peak, R. Burkhart, S. Friedenthal, M. Wilson, M. Bajaj and I. Kim, Simulation-Based Design Using SysML – Part 2: Celebrating Diversity by Example, INCOSE International Symposium, San Diego, 2007. http://eislab.gatech.edu/pubs/conferences/2007-incose-is-2-peak-diversity/2007-incose-is-2-peak-diversity.pdf.
[39] T. Rybczynski, UC for All Employees Transforms the Enterprise, Business Communications Review, June 2007. http://www.allbusiness.com/media-telecommunications/8901430-1.html.
[40] D. Shinder, Creating a secure and reliable VoIP solution, TechRepublic, Special to ZDNet Asia, August 2007. http://www.zdnetasia.com/insight/communications/0,39044835,62031189,00.htm.
[41] R. Schulte and Y. Natis, Service Oriented Architectures, Parts 1 and 2, SPA-401-068 and SOA-401-069, Gartner, April 1996.
[42] N. Unuth, Quality of Service – QoS and VoIP, About.com Guide, a part of The New York Times Company. http://voip.about.com/od/voipbasics/a/qos.htm.
[43] N. Unuth, VoIP Codecs, About.com Guide, a part of The New York Times Company. http://voip.about.com/od/voipbasics/a/voipcodecs.htm.
[44] L. Wagenhals and A. Levis, Service Oriented Architectures, the DoD Architecture Framework 1.5, and Executable Architectures, Systems Engineering 12(4) (2008), 312–343.
[45] W3C, Web Services Architecture, W3C Working Draft, 14 November 2002. http://www.w3.org/TR/2002/WD-ws-arch-20021114/.
James Andary is an emeritus systems engineer at the NASA Goddard Space Flight Center with over 40 years of experience in space systems design. Mr. Andary holds a BS degree in
mathematics from Boston College and a MA degree in mathematics from the University of Maryland.
He is presently a PhD candidate in the Department of Systems Engineering and Operations Research
at George Mason University. He is an Associate Fellow of the American Institute for Aeronautics and
Astronautics. He is also a member of the International Council on Systems Engineering and the American
Astronautical Society.
Andrew P. Sage received the BSEE degree from the Citadel, the SMEE degree from MIT and
the Ph.D. from Purdue, the latter in 1960. He received honorary Doctor of Engineering degrees
from the University of Waterloo in 1987 and from Dalhousie University in 1997. He has been
a faculty member at several universities including holding a named professorship and being the
first chair of the Systems Engineering Department at the University of Virginia. In 1984 he became First American Bank Professor of Information Technology and Engineering at George Mason University and the first Dean of the School of Information Technology and Engineering. In
May 1996, he was elected as Founding Dean Emeritus of the School and also was appointed a
University Professor. He is an elected Fellow of the Institute of Electrical and Electronics Engineers, the American Association for the Advancement of Science, and the International Council on Systems Engineering. He
is editor of the John Wiley textbook series on Systems Engineering and Management, the INCOSE Wiley journal Systems
Engineering and is coeditor of Information, Knowledge, and Systems Management. He edited the IEEE Transactions on
Systems, Man, and Cybernetics from January 1972 through December 1998, and also served a two year period as President of
the IEEE SMC Society. In 1994 he received the Donald G. Fink Prize from the IEEE, and a Superior Public Service Award for
his service on the CNA Corporation Board of Trustees from the US Secretary of the Navy. In 2000, he received the Simon Ramo
Medal from the IEEE in recognition of his contributions to systems engineering and an IEEE Third Millennium Medal. In 2002,
he received an Eta Kappa Nu Eminent Membership Award and the INCOSE Pioneer Award. He was elected to the National
Academy of Engineering in 2004 for contributions to the theory and practice of systems engineering and systems management.
In 2007, he was elected as a Charter Member of the Omega Alpha systems engineering honor society. His interests include
systems engineering and management efforts in a variety of application areas including systems integration and architecting,
reengineering, engineering economic systems, and sustainable development.
Information Knowledge Systems Management 9 (2010) 17–46
DOI 10.3233/IKS-2010-0133
IOS Press
17
Integration maturity metrics: Development
of an integration readiness level
Brian Sauser a, Ryan Gove a, Eric Forbes b and Jose Emmanuel Ramirez-Marquez a
a Stevens Institute of Technology, School of Systems and Enterprises, Systems Development & Maturity Laboratory, Castle Point on Hudson, Hoboken, NJ, USA
Tel.: +1 201 216 8589; E-mail: [email protected]
b Northrop Grumman Corporation, Mission Systems Sector, 300 M Street SE, Washington, DC, USA
Abstract: In order to optimize the process of complex system integration, it is necessary to first improve the management of the
process. This can be accomplished through the use of a generally understood metric. One such metric is Technology Readiness
Level (TRL), which is used to determine technology maturity, but does not address integration maturity. Integration Maturity
Metric (IMM) requirements are developed through review of aerospace and defense related literature. These requirements are
applied to currently existing integration maturity metrics, and the proposed Integration Readiness Level (IRL). IRL is then
refined to fully meet these requirements, and applied to three aerospace case studies, along with the other identified metrics, to
compare and contrast the results obtained.
Keywords: Technology readiness level, integration readiness level, integration
1. Introduction
Buede [5] defines system integration as “the process of assembling the system from its components,
which must be assembled from their configuration items.” By this definition, system integration could
intuitively be interpreted as a simplistic process of “putting together” a system from its components,
which in turn are built from configuration items. However, as Buede later explains, integration is a
complex process containing multiple overlapping and iterative tasks meant to not only “put together” the
system but create a successful system built to user requirements that can function in the environment it
was intended for.
This simple yet effective definition implies a structure to system integration, referred to as simply
integration from this point forward. This structure is often described in the systems engineering (SE)
process as being the “upward slope” of the traditional V-model (see Fig. 1) [5]. It starts with configuration
item integration and ends with verification and validation of the complete system in the operational
environment. Moving from simply integrating configuration items to integrating the system into its
relevant environment is a significant effort that requires not only disciplined engineering, but also
effective management of the entire SE process. While disciplined engineering is something that can be
achieved through the use of mathematics and physics, effective management of the SE process is a much
less structured and quantitative activity. In fact, there is no one standard methodology to follow when
considering the integration of most systems. This issue becomes magnified as the complexity of system
design and scope increases, implying the need for a method to manage the integration process [4].
Fig. 1. Typical systems engineering V-model.
The traditional approach in engineering has been a reductionist and discovery approach to understand what
makes a system function. If we were to take this same approach to our understanding of integration, the
question becomes how do we divide and conquer integration? Moreover, what are the tools and practices
that are involved in determining the integration maturity of an extremely complex system? In SE and
project management a fundamental practice for determining effectiveness, efficiency, and direction is
through the use of metrics.
In order to address the concerns relevant to engineering and managing integration, we are proposing an
Integration Readiness Level (IRL) metric for a systematic measurement of the interfacing of compatible
interactions for various technologies and the consistent comparison of the maturity between integration
points. We will present the theory behind the development of this metric and how it compares to
other metrics for system integration management. We then use IRL and three other well documented
integration metrics to describe the integration failure of three well known aerospace projects. We will
use these case analyses to demonstrate how these integration metrics apply different theories that can
provide richer insights for the analysis of integration. We then expand upon this work with the objective
of presenting a verified and validated IRL and supporting “checklist” based on a survey to assess the
criticality of decision criteria in the “checklist.” We conclude with a discussion of the implications of our
IRL to the practice of systems engineering and aerospace and how this may lead to additional questions
for further investigation.
2. Development of an integration maturity metric
2.1. Why integration maturity metrics?
The use of technology maturity metrics within aerospace has been around since the introduction of
Technology Readiness Level (TRL) in the 1980s, and is a fairly mature practice. Yet, the emergence
of large, complex systems created through the integration of diverse technologies has created the need
for a more modern maturity metric [15]. For example, complex system development and integration has
too often posed significant cost, schedule and technical performance risks to program managers, systems
engineers, and development teams. Many risk factors have played a key role in degrading this process,
but acceptable technology maturity has often been the principal driver, particularly in programs where
innovation is fundamental to stakeholder requirements. The path of least resistance to this would be to
simply use an already existing metric that is able to provide for an effective solution. Initially, TRLs
seem to provide this capability. They are ambiguous, yet descriptive; applied at many different levels of
system development; and start at concept definition and move all the way through mission/flight proven
technology in the intended environment [22]. TRLs were originally developed by the United States (US)
National Aeronautics and Space Administration (NASA) to rate the readiness of technology for possible
use in space flight [23]. Later, the US Department of Defense (DoD) began using TRL to assess new
technology for insertion into a weapon system [13]. With TRL’s widespread use within NASA and the
DoD, other US government agencies and their contractors (e.g. Department of Energy (DoE), Sandia
National Laboratory) have also adopted the TRL scale. Today, TRLs provide critical functionality in the
decision making and developmental control of projects at both NASA and DoD [9,22,28]. In fact, in
some organizations, different labs, departments, and groups have been organized with responsibility for
bringing new technologies through the various TRL levels, handing off to each other as the technology
matures [9,23,32]. Additionally, in the years following the introduction of TRL, a variety of other
maturity metrics have been proposed as decision support tools for acquisitions (e.g. Design Readiness
Level, Manufacturing Readiness Level; Software Readiness Level; Operational Readiness Level; Human
Readiness Levels; Habitation Readiness Level; Capability Readiness Levels [3,6,7]).
Smith [37] identified that TRLs are unable to assess maturity at the system level and that they tend to
distort many different aspects of readiness into one single number, the most problematic being integration.
The solution he proposed involves using orthogonal metrics in combination with a Pair-wise Comparison
Matrix in order to compare equivalent technologies for insertion into a system. His approach is specific
to the domain of Non-Developmental Item (NDI) software for acquisition normally into defense related
systems. While Smith’s solution may be sophisticated and mathematically based, it does not specifically
address the maturity of integration. He views integration as being binary, either a technology is integrated
or it is not, and integration is simply part of what he terms the ‘overall environmental fidelity’. It may be
the case with NDI software that integration is binary; however, as will be demonstrated by the case studies
presented in this paper, integration is not always a binary act and must be matured, just as technology itself must be.
Mankins [23] identified TRL’s inability to measure the uncertainty involved when a technology is
matured and integrated into a larger system. He points out that TRL is simply a tool that provides basic
guideposts to component technology maturation, independent of system specific integration concerns.
Also, TRL does not denote the degree of difficulty in moving from one TRL to the next for a specific
technology. Mankins’ solution to this problem was the Integrated Technology Analysis Method (ITAM)
which was originally developed by NASA in the mid-1990s. The basic concept behind ITAM is the
Integrated Technology Index (ITI) which is formulated from various metrics including delta-TRL, the
difference in actual to desired TRL, research and development effort required, and technology criticality.
While ITAM and ITI attempt to provide an estimate of difficulty in system development from a technology
maturation viewpoint, Mankins points out that the approach is not always appropriate. Graettinger, et
al. [14] state that while TRLs measure technology maturity, there are many other factors to consider
when making the decision to insert a technology into a system. This implies that while two technologies
might have equivalent TRLs, one may more readily integrate into the system environment. In addition, it
is observed that TRL’s greatest strength is to provide an ontology by which stakeholders can commonly
evaluate component technologies. While it is true that in practice TRLs may not be a perfect metric, we
must not lose sight that TRL is a tool, and if a tool is used to do something for which it was not created,
then there will be errors, setbacks, or even failures. TRL was never meant to evaluate the integration of
a given technology with another, and especially not within a large, complex system. Despite TRL’s wide
use and acceptance there exists very little literature analyzing the effectiveness of TRLs in relation to
integration. In addition, the metrics and ontology for the coupling and maturation of multiple technologies
and systems has been shown to be an unresolved issue of strategic relevance [26,39]. Finally, component
level considerations relating to integration, interoperability, and sustainment become equally or more
important from a systems perspective during development [33]. Indeed, Mosher [25] described system
integration as the most difficult part of any development program. This limitation in TRL’s ability can
be filled by another metric specifically geared towards integration readiness assessment.
The application of ontology metrics to support integration has been extensively used in the computer
industry to define coupling of components [30,31], but a common ontological approach to technology
integration for system development has been far less developed. In order to clarify what an integration
maturity metric should provide we conducted a review of the literature that encompassed both work done
on integration maturity metrics and the practice and lessons learned about TRLs’ use in government and
industry. We concluded that an effective integration metric must be considered from both the lowest
level (e.g. configuration item) to the system level. We contend that this can only be accomplished
through the use of a metric that describes integration in general enough terms to be used at each level,
but specific enough to be practical. We concluded that the limitations found in TRL can be translated
into requirements for an Integration Maturity Metric (IMM). These limitations include:
– Distorts many aspects of technology readiness into one metric, the most problematic being integration [37];
– Cannot assess uncertainty involved in maturing and integrating a technology into a system [7,35–38];
– Does not consider obsolescence and the ability of a less mature technology to meet system requirements [35,37,38]; and
– Unable to meet need for a common platform for system development and technology insertion
evaluation [9,11].
If these basic concepts are translated into IMM requirements we can begin to investigate a solution
that satisfies these requirements. The IMM requirements are as follows:
1. IMM shall provide an integration specific metric, to determine the integration maturity between
two configuration items, components, and/or subsystems.
2. IMM shall provide a means to reduce the risk involved in maturing and integrating a technology
into a system.
3. IMM shall provide the ability to consider the meeting of system requirements in the integration
assessment so as to reduce the integration of obsolete technology over less mature technology.
4. IMM shall provide a common platform for both new system development and technology insertion
maturity assessment.
2.2. Finding a solution
TRL was formally proposed in a 1989 Acta Astronautica article by Sadin, et al. and was based upon a well-known technology maturation model used by NASA at the time [22,32]. Initially, TRL was determined by assigning numbers to the phases of technology development, such that management, engineering, and outside vendors could communicate in a common language, mainly for contractual purposes. It has become the benchmark and cornerstone of technology acquisition and project budgeting for
many government funded technology development projects, but as for integration maturity, the literature
reveals limited research that measures integration on any scale.
Before examining our options we must first differentiate what is meant by the term IMM. There is a
large number of metrics that can be used to evaluate integration but not integration maturity. In
addition to the IMM requirements we are seeking a metric that can be understood by all the relevant
stakeholders and evaluates integration maturity. One example is the DoD’s Levels of Information
Systems Interoperability (LISI) which measures aspects of integration such as Processes, Applications,
Infrastructure, and Data (PAID) [8]. While these are all critical concepts to consider during integration,
methodologies to assess these areas are fairly mature and can be dealt with by information technology
practices.
One of the more refined examples is Mankins’ Integrated Technology Index (ITI) which he proposes
as a method for uncertainty reduction in the developmental effort of space systems. ITI uses the concepts
of delta-TRL (∆TRL), R&D Degree of Difficulty (R&D³), and Technology Need Value (TNV) to calculate the system ITI, which can then be used to compare and contrast different technologies for insertion/acquisition, see Equation (1). ITI is essentially an average of the product of delta-TRL, R&D³,
and TNV for all the subsystem technologies within a system. By this method the lower the ITI, the
lower the overall risk of technology maturity impacting successful system development, integration, and
operational deployment.
\[
\mathrm{ITI} = \frac{\sum_{\text{subsystem technologies}} \left( \Delta \mathrm{TRL} \times \mathrm{R\&D}^{3} \times \mathrm{TNV} \right)}{\text{Total Number of Subsystem Technologies}} \tag{1}
\]
Ultimately, ITI can be used to make management decisions and provides a mathematical base for systemlevel technology assessment [23]. If we compare ITI to our IMM requirements, Requirement 1 is not met
since ITI measures the difficulty of integrating, not the specific integration maturity between component
technologies. Requirement 2 is met by ITI’s use of R&D effort and delta-TRL as variables which work
to reduce the uncertainty involved in system integration. Requirement 3 is not met since ITI has no
variable to consider the integrated system’s ability to meet system requirements. Finally, Requirement 4
is met because there is no limiting factor that binds ITI to either new system development or technology
insertion, as long as it is used as a relative metric.
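As a concrete reading of Equation (1), the following sketch computes ITI for a list of subsystem technologies; the class and field names are ours, chosen for illustration only.

import java.util.List;

// Illustrative computation of the Integrated Technology Index (ITI) from Equation (1):
// the average over all subsystem technologies of (delta-TRL x R&D degree of difficulty x TNV).
public class IntegratedTechnologyIndex {

    // One subsystem technology with its delta-TRL, R&D degree of difficulty (R&D3),
    // and technology need value (TNV).
    public record SubsystemTechnology(double deltaTrl, double rdDegreeOfDifficulty,
                                      double technologyNeedValue) {}

    // Returns the ITI; lower values indicate lower technology-maturity risk to system development.
    public static double compute(List<SubsystemTechnology> technologies) {
        double sum = 0.0;
        for (SubsystemTechnology t : technologies) {
            sum += t.deltaTrl() * t.rdDegreeOfDifficulty() * t.technologyNeedValue();
        }
        return sum / technologies.size();
    }

    public static void main(String[] args) {
        // Two candidate subsystem technologies; the lower the ITI, the lower the overall risk.
        double iti = compute(List.of(
                new SubsystemTechnology(2, 3, 4),
                new SubsystemTechnology(1, 2, 5)));
        System.out.println("ITI = " + iti);  // (2*3*4 + 1*2*5) / 2 = 17.0
    }
}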
Another solution proposed by Fang, et al. [12] developed a “Service Interoperability Assessment
Model” which is intended to be used as an autonomous assessment model for determining service
interoperability in a distributed computing environment. The key aspect of this model is the identification
of five levels of integration: Signature, Protocol, Semantic, Quality, and Context. The assessment model
uses a weighted sum that calculates what they term K or the degree of interoperability. K is composed of
five factors that are normalized assessments of each level of integration. Each factor can be a normalized
combination of the other sub-factors, such as semantics, which uses a concept tree to produce mappings
between input and output relationships connecting the integrating services, or a subjective scoring.
Benchmarking this model against the IMM requirements we find that Requirement 1 is met as the
metric explicitly identifies integration maturity between components/sub-systems. Requirement 2 is
met by the quantitative assessment of the identified levels of service interoperability for uncertainty
reduction. Requirement 3 is not met since the level that might be able to assess the meeting of system
requirements is the context level, which the authors specifically identify as being incomplete in its
definition. Requirement 4 is met due to clearly defined mathematical scales, with the exception of the
context level, which do not limit this metric to any specific integration activity.
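The weighted-sum idea can be sketched as follows; the weights and factor values are placeholders, since Fang, et al.'s exact normalizations and sub-factor combinations are not reproduced here.

// Rough sketch of a weighted-sum interoperability score over the five levels identified by
// Fang, et al. (signature, protocol, semantic, quality, context). The weights and the way each
// factor is normalized are placeholders, not the published model.
public class InteroperabilityScore {

    // Each factor is a normalized assessment in [0, 1] for one level of integration.
    public static double degreeOfInteroperability(double signature, double protocol,
                                                  double semantic, double quality, double context,
                                                  double[] weights) {
        double[] factors = {signature, protocol, semantic, quality, context};
        double k = 0.0;
        for (int i = 0; i < factors.length; i++) {
            k += weights[i] * factors[i];
        }
        return k;  // K: the degree of interoperability between the two services
    }

    public static void main(String[] args) {
        // Equal weights summing to 1, so K also falls in [0, 1].
        double[] weights = {0.2, 0.2, 0.2, 0.2, 0.2};
        System.out.println(degreeOfInteroperability(1.0, 0.9, 0.6, 0.8, 0.5, weights));
    }
}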
Another integration metric, developed by Nilsson, et al. [29], was created to assess system-level integration using a four-level scale, with each level having multiple strategies/sub-levels to describe how each level could be achieved; the levels with their associated breakdowns are displayed in Table 1.
Table 1
Nilsson, et al. Breakdown [29]
1. Integration Technology
1.1 Levels of Integration Technology
– Low – Unreliable and time consuming data transfer methodology.
– Medium – Reliable and effective byte-stream transfer between systems.
– High – The use of defined protocols, remote procedure calls, and mechanisms for automated data conversion.
1.2 Strategies for Integration Technology
– Manual Data Transfer – Implies the transfer of data is triggered by the user or another component/system.
– Automatic Data Transfer – Implies automated data exchange triggered by either scripts or autonomous processes/systems.
– Common Database – Implies a common data medium that is shared by all integrating systems.
2. Integration Architecture
2.1 Levels of Integration Architecture
– Access to User Interface – “Black Box” functionality, only the user interface is presented to the integrating component/system.
– Access to Data – The integrating component/system can access the data of another component/system.
– Access to Functionality – The integrating component/system has the ability to execute internal functionality.
2.2 Strategies for Integration Architecture
– Controlled Redundancy – More than one component/system stores the data, however the overall data is controlled
centrally.
– Common Data Storage – All the data is stored and controlled by a single entity.
– Distributed Data Storage – More than one component/system stores the data, and overall data control is also distributed.
– Activating Other Systems – The integrating component/system can activate other systems and therefore bring additional
data online.
– Abstract Data Types – Data is stored and controlled both centrally and distributed, but with the addition of tagging such
as units, version numbers, modification dates, etc.
3. Semantic Architecture
– Separate Sets of Concepts – Systems built by different vendors, lacking a common data translation language.
– Common Concepts for Common Parts – System built by one vendor or group of vendors that set a common language for
describing data elements.
4. User Integration
– Accessibility
∗ One at a Time – The user can control only one component/system independently from the rest.
∗ Simultaneously – The user can control multiple components/systems without any performance/control related issues.
– User Interface Style
∗ Different – Each component/system has a different interface.
∗ Common – All components/systems share a common interface.
Nilsson et al. [29] offer warnings for combinations of sub-metrics that present risk in the integration.
When compared to the IMM requirements, Requirement 1 is met in that the model is derived from a
standard network model, Transmission Control Protocol (TCP), despite the fact that this framework is
really applied at the system level. Requirement 2 is met since the direct focus of this work was to reduce
uncertainty through the use of the framework, thus making for a more “Integration Friendly” system.
Requirement 3 is not met primarily because this work is directed toward user-system integration, not
Table 2
OSI conceptual levels

Level 7 – Verified and Validated
Level 6 – Accept, Translate, and Structure Information
Level 5 – Control
Level 4 – Quality and Assurance
Level 3 – Compatibility
Level 2 – Interaction
Level 1 – Interface
Table 3
Integration readiness levels [35]

IRL 7 – The integration of technologies has been verified and validated with sufficient detail to be actionable.
IRL 6 – The integrating technologies can accept, translate, and structure information for its intended application.
IRL 5 – There is sufficient control between technologies necessary to establish, manage, and terminate the integration.
IRL 4 – There is sufficient detail in the quality and assurance of the integration between technologies.
IRL 3 – There is compatibility (i.e. common language) between technologies to orderly and efficiently integrate and interact.
IRL 2 – There is some level of specificity to characterize the interaction (i.e. ability to influence) between technologies through their interface.
IRL 1 – An interface (i.e. physical connection) between technologies has been identified with sufficient detail to allow characterization of the relationship.
the meeting of system requirements such as performance, throughput, etc. Requirement 4 is not met
since this model primarily deals with developing new systems to be “Integration Friendly”, although the
authors discuss the possible application to legacy systems as future work. The highest levels are built
from the International Standards Organization’s TCP (ISO/TCP) model, and the concept of using an
open standard as the base of a metric is appealing, as it is built upon an established protocol.
In fact, there exists a standardized model for inter-technology integration, one which starts at the
lowest level of integration and moves all the way through to verification and validation. This model is the
International Standards Organization's Open Systems Interconnect (ISO/OSI) model; the TCP model is a specific sub-set of the OSI model. The OSI model is used in computer networking to create commonality and reliability in the integration of diverse network systems. It has seven layers, or levels, with
each level building upon the previous [18].
OSI appears to be a highly technical standard which is solely intended for network system application,
yet, if the layer descriptions are abstracted to conceptual levels, this model can describe integration in
very generic terms. In fact, these generic terms are the method by which the OSI model is taught and
can be found in most computer networking textbooks. Table 2 represents the conceptual levels defined
in a fundamental network systems textbook [2].
Using these conceptual levels, we, just as Nilsson, et al. [29], used a standard (i.e. OSI) as our foundation for developing an IMM. Our initial IMM, termed Integration Readiness Level (IRL), was proposed in a previous paper and is summarized in Table 3 [35].
It might appear as though IRL is a completed metric at this point and that it should be applied to some examples so as to determine its usefulness. However, it is interesting to note that TRL itself started out as
a 7-level metric [32].
IMM Requirement 1 is met, since IRL’s main concept is to be used to evaluate the integration of two
TRL assessed technologies. Requirement 2 is met as IRL 1 through 6 are technical gates that reduce
uncertainty in system integration maturity. Requirement 3 is met specifically by IRL 7, which requires
the integration to be successful from a requirements standpoint. Requirement 4 is met, but with some
uncertainty: there is no reason that this metric could not be applied to both system development
and technology insertion, but it is difficult to know when development should end and operations begin.
This is rooted in the fact that the OSI model came from the information technology industry, where
technology insertion is done almost on a daily basis, just not always successfully. What is truly causing
this uncertainty is that IRL gives no indication of when the integration is complete. Thus, we have
uncovered a problem with our initial IRL: it does not have any measures equivalent to TRLs 7–9 (i.e.
environmental testing, operational support, and mission-proven maturity). As with the original 7-level
TRL scale, IRL only evaluates to the end of development and does not address the operational aspects
of the integration. NASA's transition to a 9-level TRL scale was prompted by the need to use TRL well
past the developmental stage. Having identified this deficiency, it is now certain that we do not meet
Requirement 4, so IRL must be expanded by at least one level to meet Requirement 4.
In expanding IRL, it should be kept in mind that by IRL 7 all of the technical integration issues are
resolved, so what we are looking for are qualities that describe operational support and proven integration
into the system environment. Because IRL was created to be used with TRL, and because the two share
many qualities, it makes sense to look at TRLs 8 and 9. TRL 8 is defined as:
“Actual system completed and “flight qualified” through test and demonstration (ground or
space)” [23]
TRL 8 implies at the very least demonstration and testing in the system environment. IRL 8 can
be extracted directly from this definition with the only change being in the term “flight”. While the
original TRLs implied a technology being “ready to fly,” it might be the case that the integration is
between two pieces of technology, such as ground software, that will never actually “fly” and are simply
brought together to fill a need. In this scenario, we are integrating two technologies for a specific mission
requirement, so we will change “flight” to “mission” to broaden the possible applications of IRL. Thus
IRL 8 is:
IRL 8 – “Actual integration completed and “Mission Qualified” through test and demonstration, in
the system environment.”
At this point IRL is still not complete; specifically, how do we know when the integration is fully mature? TRL
provides this in TRL 9, and consequently IRL should have the same highest-level maturity assessment.
TRL 9 is defined as:
“Actual system “flight proven” through successful mission operations” [23]
Again we will change “Flight Proven” to “Mission Proven” for IRL 9.
IRL 9 – “Actual integration “Mission Proven” through successful mission operations”
As was previously stated, our initial IRL met Requirements 1, 2 and 3. The addition of IRLs 8
and 9 facilitates the satisfaction of Requirement 4. We now have a metric that can completely describe
the maturity assessment of the integration of two pieces of technology. Also, since IRL is capable
of describing mature system integration well past the developmental stage, it is possible to use it to
assess technology insertion into a mature system. Our proposed IRL scale is represented in Table 4.
Additionally, IRL is compared and contrasted to the other IMMs along with the IMM requirements in
Table 5.
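Although IRL is presented here purely as a management scale, the nine definitions can also be captured programmatically for tooling or documentation purposes. The short Python sketch below is only an illustration of that idea: the wording is condensed from Tables 3 and 4, and the dictionary and helper function are ours, not part of the IRL method itself.

    # Illustrative sketch only: the nine IRL definitions as a lookup table.
    # Wording condensed from Tables 3 and 4; the structure is ours, not part of the method.
    IRL_DEFINITIONS = {
        1: "An interface between technologies has been identified with sufficient "
           "detail to allow characterization of the relationship.",
        2: "There is some level of specificity to characterize the interaction "
           "between technologies through their interface.",
        3: "There is compatibility (i.e. common language) between technologies to "
           "orderly and efficiently integrate and interact.",
        4: "There is sufficient detail in the quality and assurance of the "
           "integration between technologies.",
        5: "There is sufficient control between technologies necessary to "
           "establish, manage, and terminate the integration.",
        6: "The integrating technologies can accept, translate, and structure "
           "information for its intended application.",
        7: "The integration of technologies has been verified and validated and an "
           "acquisition/insertion decision can be made.",
        8: "Actual integration completed and Mission Qualified through test and "
           "demonstration in the system environment.",
        9: "Integration is Mission Proven through successful mission operations.",
    }

    def describe_irl(level: int) -> str:
        """Return the definition for an IRL value, validating the 1-9 range."""
        if level not in IRL_DEFINITIONS:
            raise ValueError("IRL must be an integer between 1 and 9")
        return f"IRL {level}: {IRL_DEFINITIONS[level]}"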
For further clarification, the nine levels of IRL presented in Table 4 can be understood as having three
stages of integration definition: semantic, syntactic, and pragmatic.
Table 4
Integration readiness levels

Pragmatic
IRL 9
Definition: Integration is Mission Proven through successful mission operations.
Description: IRL 9 represents the integrated technologies being used in the system environment successfully. In order for a technology to move to TRL 9 it must first be integrated into the system, and then proven in the relevant environment, so attempting to move to IRL 9 also implies maturing the component technology to TRL 9.
Risk of not attaining: The development stage was never completed; this is more of a programmatic risk. However, if the IRL model was used up to this point there should be no (technical) setbacks to stop the integrated system from moving to operational use.

IRL 8
Definition: Actual integration completed and Mission Qualified through test and demonstration, in the system environment.
Description: IRL 8 represents not only the integration meeting requirements, but also a system-level demonstration in the relevant environment. This will reveal any unknown bugs/defects that could not be discovered until the interaction of the two integrating technologies was observed in the system environment.
Risk of not attaining: The system is still only "Laboratory Proven" and has yet to see real-world use. This can allow unforeseen integration issues to go unattended.

Syntactic
IRL 7
Definition: The integration of technologies has been Verified and Validated and an acquisition/insertion decision can be made.
Description: IRL 7 represents a significant step beyond IRL 6; the integration has to work from a technical perspective, but also from a requirements perspective. IRL 7 represents the integration meeting requirements such as performance, throughput, and reliability.
Risk of not attaining: If the integration does not meet requirements then it is possible that something must be changed at a lower level, which may result in the IRL actually going down; however, in most cases the work done to achieve higher levels can be re-implemented.

IRL 6
Definition: The integrating technologies can Accept, Translate, and Structure Information for its intended application.
Description: IRL 6 is the highest technical level to be achieved; it includes the ability to not only control integration, but specify what information to exchange, unit labels to specify what the information is, and the ability to translate from a foreign data structure to a local one.
Risk of not attaining: The risk of not providing this level of integration could be a misunderstanding of translated data.

IRL 5
Definition: There is sufficient Control between technologies necessary to establish, manage, and terminate the integration.
Description: IRL 5 simply denotes the ability of one or more of the integrating technologies to control the integration itself; this includes establishing, maintaining, and terminating.
Risk of not attaining: The risk of not having integration control, even in the case of technologies that only integrate with each other, is that one technology can dominate the integration or, worse, neither technology can establish integration with the other.

IRL 4
Definition: There is sufficient detail in the Quality and Assurance of the integration between technologies.
Description: Many technology integration failures never progress past IRL 3, due to the assumption that if two technologies can exchange information successfully, then they are fully integrated. IRL 4 goes beyond simple data exchange and requires that the data sent is the data received and that there exists a mechanism for checking it.
Risk of not attaining: Vulnerability to interference, and security concerns that the data could be changed if part of its path is along an unsecured medium.

Semantic
IRL 3
Definition: There is Compatibility (i.e. common language) between technologies to orderly and efficiently integrate and interact.
Description: IRL 3 represents the minimum required level to provide successful integration. This means that the two technologies are able to not only influence each other, but also communicate interpretable data. IRL 3 represents the first tangible step in the maturity process.
Risk of not attaining: If two integrating technologies do not use the same data constructs, then they cannot exchange information at all.

IRL 2
Definition: There is some level of specificity to characterize the Interaction (i.e. ability to influence) between technologies through their interface.
Description: Once a medium has been defined, a "signaling" method must be selected such that two integrating technologies are able to influence each other over that medium. Since IRL 2 represents the ability of two technologies to influence each other over a given medium, this represents integration proof-of-concept.
Risk of not attaining: The risks of not attaining, or attempting to skip, this level can be data collisions, poor bandwidth utilization, and reduced reliability of the integration.

IRL 1
Definition: An Interface between technologies has been identified with sufficient detail to allow characterization of the relationship.
Description: This is the lowest level of integration readiness and describes the selection of a medium for integration.
Risk of not attaining: It is impossible to have integration without defining a medium, so there are no real risks here; however, the selection of a poor medium may impact performance requirements later on.
Table 5
IMM summarization

IRL
Concept: IRL is a metric that is to be used to evaluate the integration readiness of any two TRL-assessed technologies.
Strengths: Based on an open, widely accepted standard (ISO/OSI); technology readiness is incorporated in the overall assessment.
Weaknesses: Requires Work Breakdown/System Architecture to be complete and accurate prior to assessment; requires TRL assessment prior to IRL assessment; subjective assessment made on technical data.

Nilsson et al.
Concept: System integration can be improved if it is split into four "aspects of integration": Integration technology, Integration architecture, Semantic Integration, and User Integration [29].
Strengths: Based on an open, widely accepted standard (ISO/TCP); provides strategies for improving integration; does not require concise data on the integration.
Weaknesses: Purely subjective assessment process; does not consider technology readiness/maturity; not really created as a metric, simply an attempt at organizing system integration.

Fang et al.
Concept: "A reference assessment model of service interoperability by fuzzy quantization of the interoperability" [12].
Strengths: Quantitative assessment process; able to provide a relative measure of what technology/service provides the best interoperability; transforms technical data into a metric for managerial decisions.
Weaknesses: Rigorous mathematical assessment requires much data on the integration to be gathered prior to assessment; does not consider technology readiness/maturity; metric was designed for service integration, not system integration.

ITI
Concept: "Quantitative measure of the relative technological challenge inherent in various candidate/competing advanced systems concepts" [23].
Strengths: Quantitative use of TRL in addition to two other proven and common metrics; has been successfully implemented in the NASA Highly Reusable Space Transportation study [23]; able to provide a relative measure of what technology to insert/integrate.
Weaknesses: Does not consider cost/schedule parameters; requires Work Breakdown/System Architecture to be complete and accurate prior to assessment; requires TRL assessment prior to ITI assessment.
Fig. 2. Application of IRL.
Semantics is about relating meaning with respect to clarity and differentiation. Thus IRLs 1–3 are considered fundamental to describing what
we define as the three principles of integration: interface, interaction, and compatibility. We contend that
these three principles are what define the subsistence of an integration effort. The next stage is Syntactic,
which is defined as conformance to rules. Thus IRLs 4–7 are about assurance that an integration effort is
in compliance with specifications. The final stage is Pragmatic, which relates to practical considerations.
Thus, IRLs 8–9 are about the assertion of the application of an integration effort.
Figure 2 represents the application of IRL. It is to be used to assess integration maturity between
two TRL-assessed technologies. The combination of TRL and IRL creates a fast, iterative process for
system-level maturity assessment. IRL must now be applied to some real-world case studies and the
results interpreted.
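As a sketch of how the pairing in Figure 2 might be operationalized in practice, the Python fragment below records the TRLs of two technologies together with the IRL of their integration, maps the IRL onto the semantic/syntactic/pragmatic stages discussed above, and reports which of the three values is currently lowest. The paper does not prescribe a formula for combining TRL and IRL into a single number, so the class, its field names, and its methods are purely illustrative.

    from dataclasses import dataclass

    @dataclass
    class IntegrationAssessment:
        # Hypothetical container for a pairwise TRL/IRL assessment (names are ours).
        tech_a_trl: int
        tech_b_trl: int
        irl: int

        def stage(self) -> str:
            """Map the IRL onto the three stages described in the text."""
            if 1 <= self.irl <= 3:
                return "semantic"
            if 4 <= self.irl <= 7:
                return "syntactic"
            if 8 <= self.irl <= 9:
                return "pragmatic"
            raise ValueError("IRL must be between 1 and 9")

        def limiting_factor(self) -> str:
            """Report which of the three readiness values is currently lowest.
            This is only a reporting aid; no combination formula is implied."""
            values = {
                "TRL of technology A": self.tech_a_trl,
                "TRL of technology B": self.tech_b_trl,
                "IRL of the integration": self.irl,
            }
            name, value = min(values.items(), key=lambda kv: kv[1])
            return f"{name} = {value} ({self.stage()} stage integration)"

    # Example: two mature technologies whose integration is still at IRL 5.
    print(IntegrationAssessment(tech_a_trl=8, tech_b_trl=7, irl=5).limiting_factor())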
In the following sections we will present three aerospace cases that had documented integration issues
and analyze how the four integration metrics can describe the integration. We selected these cases
because they encompass multiple types of systems and multiple vendors.
3. Case studies using IMM
3.1. Mars climate orbiter
Mars Climate Orbiter (MCO) crashed into the Martian atmosphere on September 23, 1999. The
Mishap Investigation Board (MIB) found the reason for the loss of the spacecraft to be the use of English
units in a ground software file called “Small Forces”, the output of which was fed into another file
called Angular Momentum Desaturation (AMD) which assumed inputs to be in metric, as were the
requirements of the project. The MCO navigation team then used the output of this file to derive data
used in modeling the spacecraft’s trajectory, and perform corrective thruster firings based upon these
models. The Software Interface Specification (SIS) defined the format of the AMD file and specified
that the data used to describe thruster performance, or the impulse-bit, be in Newton-Seconds (N-s) [19].
While this was the case with the AMD file aboard the spacecraft, which gathered data directly from
onboard sensors, the ground AMD file was populated with data from the “Small Forces” file, which
outputs its measurements in pound-seconds (lbf-s). Since 1 lbf-s is equal to 4.45 N-s, there existed
a 4.45 error factor between what the navigation team observed and what the spacecraft actually did. When
the navigation team observed the lower numbers being produced by their software, they assumed the
spacecraft was going off course and immediately began firing the thrusters with 4.45 times more force
than was necessary. These thruster firings occurred many times during the 9-month journey from Earth to
Mars and resulted in MCO entering Martian Orbital Insertion (MOI) at 57 kilometers altitude, rather than
the objective of 226 kilometers. The minimum survivable altitude was calculated to be 80 kilometers,
below which the effects of Martian gravity would cause the orbiter to burn up in the atmosphere.
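The magnitude of the units mismatch is easy to make concrete. The fragment below, which uses a made-up impulse value rather than actual MCO telemetry, reproduces the arithmetic described above: a number produced in pound-seconds but read as Newton-seconds understates the delivered impulse by the 4.45 conversion factor.

    # Illustrative arithmetic only; the impulse value is invented, not MCO telemetry.
    LBF_S_TO_N_S = 4.45  # 1 lbf-s = 4.45 N-s, as noted above

    def reported_vs_actual(impulse_lbf_s: float) -> tuple[float, float]:
        """Return (value as read by the ground AMD file, value actually delivered in N-s).

        The ground "Small Forces" file wrote pound-seconds, but downstream software
        interpreted the number as if it were already in Newton-seconds.
        """
        as_read = impulse_lbf_s                 # number passed through unconverted
        actual = impulse_lbf_s * LBF_S_TO_N_S   # impulse the spacecraft actually delivered
        return as_read, actual

    as_read, actual = reported_vs_actual(10.0)
    print(f"read as {as_read} N-s, actually {actual} N-s "
          f"(underestimated by a factor of {actual / as_read:.2f})")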
The MIB found eight factors that contributed to, caused, or failed to catch the error
in the ground software. While many of these were identified as organizational issues associated with
NASA and the contractors working on MCO, two are of interest to this research effort. The first is listed as
"System engineering process did not adequately address transition from development to operations" [27,
pg 7]. This suggests that the MIB was unsatisfied with how the SE process determines system operational
maturity. The second is similar, but more directed toward integration: "Verification and Validation
process did not adequately address ground software" [27, pg 7]. This suggests that some sort of testing
process should have been in place to catch the units error. While testing is absolutely necessary, it is not
always capable of catching the many small errors that can occur when two different pieces of software
and/or hardware exchange data in a raw format. If the integration of two pieces of technology followed
some sort of maturation process, just as the technology itself does, this would provide an assessment of
integration readiness and a direction for improving integration during the development process.

Fig. 3. MCO Metric Analysis.
In our assessment of MCO we evaluated the integration of the ground software “Small Forces” and
“AMD” files as they were the primary technical failure leading to MCO’s demise. Figure 3 is a summary
of how the four IMMs could evaluate MCO.
IRL has uncovered the basic problem in MCO: a misunderstanding of translated data, or in MCO's
case un-translated data. None of the other metrics catch major risks or issues with the maturity of MCO's
ground data files. Nilsson et al. only catches the risk of manual data transfer present in the emailing of
data files between project teams.
3.2. ARIANE 5
The ARIANE series of launch vehicles was developed by the European Space Agency (ESA) as
a commercially available, low cost, and partially reusable solution for delivering payloads into Earth
orbit. ARIANE 5’s maiden flight occurred on June 4, 1996 and ended 37 seconds after liftoff, when the
vehicle veered off of its flight path and disintegrated due to aerodynamic stress. In the following days an
independent inquiry board was established to determine the cause of the failure [21].
The board found that the failure began when the backup Inertial Reference System (SRI) failed 36.7
seconds after H0 (H0 is the point in time at which the main engines were ignited for liftoff) due to a
software exception. Approximately 0.05 seconds later the active SRI went offline due to the same software
exception, and began dumping diagnostic information onto the databus. The On-Board Computer (OBC)
was reading the databus and assumed the data was the correct attitude data, and since both SRIs were
in a state of failure, it had no way to control data on the bus. The misinterpreted diagnostic data was
factored into calculations for thruster nozzle angles by the OBC. The incorrect thruster nozzle angles
forced the launcher into an angle-of-attack that exceeded 20 degrees, leading to aerodynamic stress that
resulted in the booster engines prematurely separating from the main stage, which finally triggered the
self-destruct of the main stage at approximately H0 + 39s [21].

Fig. 4. ARIANE 5 Metric Analysis.
After recovering both SRIs from the crash site, data recovery efforts found that the software exception
occurred during the conversion of a 64-bit floating point number to a 16-bit signed integer.
The variable that was converted was the horizontal velocity of ARIANE 5, which was within thresholds
for the proposed trajectory. The software was unable to handle the high velocity value because it was
unchanged from ARIANE 4, whose horizontal velocity never approached the values ARIANE 5 was
built to achieve. If either of the SRIs had ignored the software exception, an ability that they had, the
launcher would have continued functioning flawlessly [21].
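The root of the exception is easy to reproduce in miniature. The Python fragment below (with illustrative velocity magnitudes, not flight data) packs a floating point value into a 16-bit signed integer; once the value exceeds the 32,767 limit the conversion fails, which stands in for the unhandled operand exception that took both SRIs offline.

    import struct

    INT16_MAX = 2**15 - 1  # largest value a 16-bit signed integer can hold

    def to_int16(value: float) -> int:
        """Pack a 64-bit float into a 16-bit signed integer, as the reused SRI code did.
        struct.error here is the stand-in for the unhandled operand exception."""
        packed = struct.pack(">h", int(value))  # ">h" = big-endian signed 16-bit integer
        return struct.unpack(">h", packed)[0]

    print(f"16-bit signed range tops out at {INT16_MAX}")
    for horizontal_velocity in (20_000.0, 40_000.0):  # illustrative magnitudes only
        try:
            print(horizontal_velocity, "->", to_int16(horizontal_velocity))
        except struct.error as exc:
            print(horizontal_velocity, "-> conversion failed:", exc)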
For ARIANE 5 we examined the integration between the two SRIs and the OBC. We will assume
that the software exception is unpreventable and thus examine how the integration maturity affected the
OBC's ability to function. Figure 4 is a summary of how the four IMMs could evaluate ARIANE 5.
Based on these evaluations a few conclusions can be drawn. First, IRL recommends that there should
be some form of integration control, otherwise one technology could dominate the integration, and this
is exactly what occurred with ARIANE 5. The OBC had no way to control what data the SRIs dumped
onto the databus. Nilsson et al. highlights risk in a low level of Integration Technology coupled with
Automated Data Transfer; this metric suggests that for Automated Data Transfer a protocol, i.e. a high
level of Integration Technology, should be used to avoid errors or misinterpreted data. One caveat is
that a protocol adds overhead, which influences performance, and this metric does not evaluate that. ITI
indicates low risk in the integration of the OBC with the SRIs. Since ITI is a relative measure, we will
create a 'new' SRI that does not suffer from the same software exception; therefore, the highest TRL
it could achieve is 7, so ∆TRL = 2 (we want to reach TRL 9), R&D3 = 2 (the new SRI requires new
development, but it is based on the old SRI), and TNV = 1 (the old SRI is now an option). Calculating ITI
now yields 3.33. What is revealed is that ITI indicates less risk in the old SRI, which is logical since
ITI calculates integration maturity based on technology maturity. Fang et al. [12] indicates some risk at
the context level, simply due to the fact that the SRIs are basically sensors, and the metric specifically
speaks of sensors being a context level concern, even though the context level is unfinished.
3.3. Hubble space telescope – Part 1
From the identification of the need for a space-based observatory in 1923 to the latest on-orbit
retrofitting in March 2002 by STS-109, the SE behind Hubble has been nearly flawless, even in the face
of daunting challenges and unforeseeable obstacles [24]. In this brief case example we will examine
how much of the technical success the Hubble program has enjoyed is a direct result of management of
the integration.

Fig. 5. Service Mission 1 Metric Analysis.
From its beginning HST was envisioned as an upgradeable space observing platform, meaning that
in addition to being the most detailed optical telescope in history, it was also built to carry additional
science equipment, and was built such that this equipment could be added, replaced, or maintained by
an extra-vehicular activity event. In order to meet this requirement HST was designed to be as modular
as possible, and the outer-shell even included such details as handholds for servicing astronauts. This
modularity and upgradeability would soon prove their value as, after initial launch and the start of science
operations, it was discovered that the primary mirror had an aberration that caused a blurring of far-off
objects, the very objects that were HST’s primary objective to study. It seemed as though HST was a
failure, however, in December 1993 the STS-61 crew was able to successfully attach Corrective Optics
Space Telescope Axial Replacement (COSTAR) to correct the problem, in addition to performing other
minor repairs such as fully unfolding a solar panel, tightening bolts, and securing access doors. That
mission was designated SM-1 (Servicing Mission – 1). SM-2 occurred five years later and significantly
upgraded the scientific capabilities of HST, in addition to providing minor maintenance support. SM-3A and SM-3B occurred in December 1999 and March 2002, respectively, and also enhanced scientific
capability while also extending HST's life expectancy. NASA seemed to have found a successful recipe
for HST success and longevity. In fact, HST has been the most successful project in NASA history from
the perspective of the amount of scientific knowledge gained [20,24].
The primary mirror aberration went undiscovered due to poor testing at the manufacturer, which did not
detect the imperfection despite it being outside of acceptable tolerances. The reason it was not detected
during the original integration of the optical system was that it was small enough to escape detection in
ground testing, and testing to the extent that would have detected it required a space environment where
atmospheric distortion was not a factor [24].
Hubble is an incredibly complex architecture that has changed and evolved over time; for simplicity we
will examine the integration of SM-1 components, since this mission represents a significant contribution
to HST's success. Figure 5 represents the SM-1 assessment with the four IMMs.
HST SM-1 demonstrates that IRL is able to identify a successful architecture. The integrations of
the COSTAR and WFPC2 need to be matured further, but they must be integrated into the system and
used in the mission environment in order to accomplish this, which is exactly what was done. ITI also
indicates low risk; however, it is interesting to note that ITI(SM-1) > ITI(ARIANE 5) > ITI(MCO). This
seems to indicate more risk in the HST SM-1 architecture, which was a success, as opposed to MCO
and ARIANE 5, which were failures. Of course, HST SM-1 is a much more complex evaluation as
compared to MCO and ARIANE. The other two metrics basically indicate no real risk; however, there is
not enough information available to accurately assess Fang et al. at all levels.
3.4. Hubble space telescope – Part 2
As of 2007, HST had surpassed its expected lifetime and was slowly dying in space: its gyroscopes
were approaching the end of their lifecycle, its batteries were nearing their lower levels of acceptable
performance, and the fine-guidance sensors had already begun failing. If HST was not serviced and
the batteries ran out, or navigational and attitude control was lost, certain instruments aboard could be
permanently damaged either due to low temperatures or direct exposure to sunlight. Meanwhile, demand
for scientific time with the telescope had only increased since its inception, while the data rate with which
HST delivers new information had increased by a factor of 60 due to upgrades during SM-3B. NASA
has since performed SM-4 to keep Hubble operating well into the future, at great risk to human life due
to Hubble's high orbit. The Columbia Accident Investigation Board (CAIB) requirements for shuttle
operations highlighted the risk of HST SM-4 since time and fuel considerations would not allow the
crew to safely dock to the International Space Station (ISS) if damage was found on the heat shield. To
combat this problem a robotic servicing mission (RSM) had been suggested; an independent committee
was established to determine the feasibility of this mission, and their findings include [20]:
– The TRLs of the component hardware/software are not high enough to warrant integration into a
critical mission system, these include the LIDAR for near-field navigation at TRL 6, due to not being
proven in a space-environment.
– The more mature components such as the Robotic Manipulator System (RMS) and the Special
Purpose Dexterous Manipulator (SPDM) are space-proven but have never been integrated in a
space-environment.
– The ability of a robotic system to capture and autonomously dock to an orbiting spacecraft has never
been tested under the conditions which HST presents.
– The technology and methodology of shuttle docking and service of HST has been proven and the
risks to the crew and vehicle are no greater than any other mission.
– While a shuttle mission to HST would be costly and interrupt the already delayed construction of
the ISS, HST’s value and contributions to the scientific community greatly outweigh these.
Due to the timeframe of HST's impending failure, which was calculated to occur sometime between 2008
and 2010, a robotic mission would not be feasible [20]. What is interesting is that an independent committee
considered more than simply technology readiness in its assessment of the options. In fact, they
specifically speak of the integrated performance of all the components involved. Furthermore, some of
the TRLs of the components will be matured in other space-bound systems, such as the United States
Air Force's XSS-11, which will move the TRL of the LIDAR past TRL 6, but its specific integration
maturity will be unchanged [20].
The previous case studies have been conducted on operational systems; in this examination we will
assess the integration of the key technologies involved in the hypothetical RSM development. This
will highlight the technologies and integrations that must be matured for RSMs to be possible in the
future. Figure 6 represents the approximate, simplified system architectural analysis of the dexterous
robot envisioned to service HST [20].
Fig. 6. HST Dexterous Servicing Robot Architecture Metric Analysis.
The evaluation of the HST RSM provides some interesting insights. First, not only do both ITI and
IRL indicate risk in the maturity, but both highlight the same components/integrations as being in need
of further maturity. Nilsson et al. also highlights the fact that the use of different vendors on this project
has caused separate sets of concepts to be used; the solution here would be a standards document that
could be shared between all stakeholders. Also, risk is present in the low level of Integration Technology.
Once again, there is not enough data present to fully evaluate the Fang et al. metric.
3.5. Case study summary
The case studies have provided some insight into how each of the metrics assesses integration maturity,
risk, and operational readiness. Table 6 is a summary of what the case studies revealed about each metric.
4. Development of a guide for IRL
In the previous section we have described the development of an IRL based on a set of IMM requirements. In this section we will give further explanation of this IRL as we begin the development of a
verified and validated set of IRL metrics that can be useful in developing a more comprehensive systems
maturity assessment methodology that addresses the complexity of integration in a less heuristic or subjective manner. In the context of this effort, verification addresses whether or not the correct IRLs were
identified/defined and validation addresses the relevance or criticality of each IRL. Thus, in creation of
the IRL checklist, we used two forms of assessment to specify the decision criteria that may define each
IRL: (1) review of systems engineering and acquisition standards, policy, research, and other guidance
documents (e.g. DoD 5000.02, INCOSE Systems Engineering Handbook, IEEE 15288, NASA Systems
Engineering Handbook), and (2) discussions with subject matter experts (SME) in systems engineering,
program management, and acquisition across government, industry, and academia. In all cases an effort
was made to capture those documents (e.g., Systems Engineering Plan, TEMP) or document content
(e.g., requirements, architecture, compatibility, interoperability, etc.) deemed most significant to an
assessment of integration maturity. What resulted was a list of decision criteria for each IRL as shown
in Tables 8–16. It should be emphasized that the list of maturity metrics under each IRL is not in order of criticality.
Table 6
Case study summary

MCO
IRL: Low IRL (IRL 5) indicates risk in the exchange of impulse bit data. Risk of not attaining IRL 6: ". . . misunderstanding of translated data"; the cause of MCO's failure.
ITI: ITI indicates very little risk in the MCO ground software. ITI = 1.
Fang et al.: Value of K is approximately 100%. No risk.
Nilsson et al.: Identified issues with manual data transfer, and the lack of units attached to impulse bit, which were the exact technical causes of MCO's failure.

ARIANE 5
IRL: IRL 4 would have indicated and identified risk. IRL 5: Control, risk of not attaining: ". . . one technology can dominate the integration. . . "
ITI: ITI indicates that the old SRIs from ARIANE 4 will integrate better than a new SRI built to ARIANE 5 requirements. ITI = 2.
Fang et al.: Some risk indicated, but requires more data for accurate assessment.
Nilsson et al.: Metric suggests the need for a higher level of technical integration between components, but does not consider performance effects of those protocols.

HST 1
IRL: IRL displayed a relatively mature architecture. Since HST-SM1 was a success this is to be expected.
ITI: ITI indicates low risk; however, an ITI of 3.43 is higher than the ITI for MCO and ARIANE 5, despite HST-SM1 being a successful integration.
Fang et al.: Value of K is approximately 100%. No risk.
Nilsson et al.: This metric indicated no real risk.

HST 2
IRL: IRL suggests significant risks involved in integration maturity. It highlights the very same technologies ITI does as being the most troublesome; this is due to TRL being the basis of both metrics.
ITI: ITI indicates significant risk; this is where ITI's strengths are apparent due to the large R&D effort required for successful technology/integration maturation. ITI = 13.67.
Fang et al.: Value of K is approximately 100%. No risk.
Nilsson et al.: Uncovered low technical integration and the use of multiple vendors as a significant risk.

Summary
IRL: IRL is able to successfully identify the risks involved in the integration maturity both at the component and system levels. IRL also provides a common language for all stakeholders. What IRL lacks is the ability to measure the difficulty in maturing a technology or integration, such as cost, R&D effort, and/or criticality. Also, IRL requires a system-level quantitative assessment for complex net-centric systems.
ITI: ITI is limited in indicating any risk or need for maturity when the component technologies are themselves mature. ITI does however uncover the difficulty involved in maturing a technology, and the risks if it is the only option.
Fang et al.: This metric provides in-depth, quantitative and descriptive analysis; however, the amount of data required to accurately assess a system introduces more complexity and work than is necessary. Also, the final assessment may not be interpreted equally among all stakeholders, and does not provide direction for further maturation.
Nilsson et al.: This metric successfully identified most of the risks involved in the integration maturity, with the only exception being the ability of the integration to meet system requirements. However, this work was not originally intended to be used as a metric, only a straightforward ontology.
Table 7
Demographics of subject matter experts

Sector | Sample | Years of experience: 0–5 | 5–10 | 10–15 | 15–20 | 20+
Government | 13 | 2 | 2 | 1 | 1 | 7
Industry | 20 | 3 | 9 | 2 | 2 | 4
TOTAL | 33 | 5 | 11 | 3 | 3 | 11
Table 8
IRL 1 decision criteria and criticality assessment (relative frequency (RF); n = 33)

IRL 1 decision criteria | Critical | Essential | Enhancing | Desirable | N/A | Cum. Critical/Essential | Cum. Enhancing/Desirable
1.1 Principal integration technologies have been identified | 0.58 | 0.33 | 0.03 | 0.06 | 0.00 | 0.91 | 0.09
1.2 Top-level functional architecture and interface points have been defined | 0.39 | 0.52 | 0.06 | 0.03 | 0.00 | 0.91 | 0.09
1.3 Availability of principal integration technologies is known and documented | 0.15 | 0.39 | 0.36 | 0.06 | 0.03 | 0.55 | 0.42
1.4 Integration concept/plan has been defined/drafted | 0.18 | 0.45 | 0.21 | 0.12 | 0.03 | 0.64 | 0.33
1.5 Integration test concept/plan has been defined/drafted | 0.12 | 0.36 | 0.33 | 0.18 | 0.00 | 0.48 | 0.52
1.6 High-level Concept of Operations and principal use cases have been defined/drafted | 0.06 | 0.21 | 0.55 | 0.15 | 0.03 | 0.27 | 0.70
1.7 Integration sequence approach/schedule has been defined/drafted | 0.06 | 0.36 | 0.33 | 0.21 | 0.03 | 0.42 | 0.55
1.8 Interface control plan has been defined/drafted | 0.03 | 0.12 | 0.67 | 0.18 | 0.00 | 0.15 | 0.85
1.9 Principal integration and test resource requirements (facilities, hardware, software, surrogates, etc.) have been defined/identified | 0.09 | 0.36 | 0.30 | 0.18 | 0.06 | 0.45 | 0.48
1.10 Integration & Test Team roles and responsibilities have been defined | 0.12 | 0.24 | 0.33 | 0.24 | 0.06 | 0.36 | 0.58
It should also be emphasized that the lists are not considered to be comprehensive or
complete; they are merely an attempt to capture some of the more important decision criteria associated
with integration maturity in order to afford practitioners the opportunity to assess the criticality of each
decision criterion relative to the IRL it is listed under.
Thus, to establish further verification and validation of the decision criteria, we deployed a survey that
asked Subject Matter Experts (SMEs) to evaluate each decision criterion in the context of its criticality to
the specified IRL. The criticality criteria for assessing the IRL decision criteria were defined as:

– Critical – IRL cannot be assessed without it
– Essential – without it, IRL can be assessed but with low to medium confidence in the results
– Enhancing – without it, IRL can be assessed with medium to high confidence in the results
– Desirable – without it, IRL can be assessed with very high confidence in the results
– N/A – the metric is not applicable to the IRL assessment
Table 9
IRL 2 decision criteria and criticality assessment (relative frequency (RF); n = 33)

IRL 2 decision criteria | Critical | Essential | Enhancing | Desirable | N/A | Cum. Critical/Essential | Cum. Enhancing/Desirable
2.1 Principal integration technologies function as stand-alone units | 0.18 | 0.27 | 0.24 | 0.30 | 0.00 | 0.45 | 0.55
2.2 Inputs/outputs for principal integration technologies are known, characterized and documented | 0.52 | 0.36 | 0.06 | 0.06 | 0.00 | 0.88 | 0.12
2.3 Principal interface requirements for integration technologies have been defined/drafted | 0.39 | 0.33 | 0.24 | 0.03 | 0.00 | 0.73 | 0.27
2.4 Principal interface requirements specifications for integration technologies have been defined/drafted | 0.27 | 0.45 | 0.24 | 0.03 | 0.00 | 0.73 | 0.27
2.5 Principal interface risks for integration technologies have been defined/drafted | 0.06 | 0.24 | 0.61 | 0.09 | 0.00 | 0.30 | 0.70
2.6 Integration concept/plan has been updated | 0.06 | 0.42 | 0.42 | 0.09 | 0.00 | 0.48 | 0.52
2.7 Integration test concept/plan has been updated | 0.09 | 0.27 | 0.52 | 0.12 | 0.00 | 0.36 | 0.64
2.8 High-level Concept of Operations and principal use cases have been updated | 0.12 | 0.18 | 0.45 | 0.21 | 0.03 | 0.30 | 0.67
2.9 Integration sequence approach/schedule has been updated | 0.09 | 0.27 | 0.45 | 0.18 | 0.00 | 0.36 | 0.64
2.10 Interface control plan has been updated | 0.06 | 0.30 | 0.61 | 0.03 | 0.00 | 0.36 | 0.64
2.11 Integration and test resource requirements (facilities, hardware, software, surrogates, etc.) have been updated | 0.15 | 0.39 | 0.27 | 0.15 | 0.03 | 0.55 | 0.42
2.12 Long lead planning/coordination of integration and test resources have been initiated | 0.12 | 0.30 | 0.30 | 0.24 | 0.03 | 0.42 | 0.55
2.13 Integration & Test Team roles and responsibilities have been updated | 0.03 | 0.15 | 0.58 | 0.21 | 0.03 | 0.18 | 0.79
2.14 Formal integration studies have been initiated | 0.12 | 0.33 | 0.21 | 0.21 | 0.12 | 0.45 | 0.42
We sampled 33 SMEs from government and industry with experience in systems engineering, software
engineering, program management, and/or acquisition. Table 7 indicates the demographics of the 33
SMEs with respect to years of experience and employment in government or industry. Of these, 85%
had more than five years of experience and 33% had more than 20 years of experience.
For each decision criterion we calculated the relative and cumulative frequencies of the criticalities
(reported in Tables 8–16). Relative frequency is the proportion of all responses in the data set that fall
in a given category (i.e. for each decision criterion of any IRL). Cumulative relative frequency allows additional
information to be understood about the sensitivity of the response frequency based on a class interval (i.e.
Critical/Essential versus Enhancing/Desirable). This is meant to help identify whether the criticality
categories originally identified are too fine and should be modified.
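As a concrete illustration of the two frequency measures reported in Tables 8–16, the sketch below tallies a set of 33 criticality ratings for a single decision criterion; the ratings themselves are hypothetical, and only the calculation mirrors the one described above.

    from collections import Counter

    CATEGORIES = ["Critical", "Essential", "Enhancing", "Desirable", "N/A"]

    def frequencies(ratings: list[str]) -> dict[str, float]:
        """Relative frequency of each criticality category, plus the two
        class-interval (cumulative) frequencies used in Tables 8-16."""
        n = len(ratings)
        counts = Counter(ratings)
        rf = {cat: round(counts.get(cat, 0) / n, 2) for cat in CATEGORIES}
        rf["Critical/Essential"] = round(
            (counts.get("Critical", 0) + counts.get("Essential", 0)) / n, 2)
        rf["Enhancing/Desirable"] = round(
            (counts.get("Enhancing", 0) + counts.get("Desirable", 0)) / n, 2)
        return rf

    # Hypothetical responses from 33 SMEs for a single decision criterion.
    ratings = ["Critical"] * 19 + ["Essential"] * 11 + ["Enhancing"] * 2 + ["Desirable"] * 1
    print(frequencies(ratings))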
Table 10
IRL 3 decision criteria and criticality assessment (relative frequency (RF); n = 33)

IRL 3 decision criteria | Critical | Essential | Enhancing | Desirable | N/A | Cum. Critical/Essential | Cum. Enhancing/Desirable
3.1 Preliminary Modeling & Simulation and/or analytical studies have been conducted to identify risks & assess compatibility of integration technologies | 0.18 | 0.36 | 0.45 | 0.00 | 0.00 | 0.55 | 0.45
3.2 Compatibility risks and associated mitigation strategies for integration technologies have been defined (initial draft) | 0.09 | 0.39 | 0.52 | 0.00 | 0.00 | 0.48 | 0.52
3.3 Integration test requirements have been defined (initial draft) | 0.15 | 0.48 | 0.24 | 0.12 | 0.00 | 0.64 | 0.36
3.4 High-level system interface diagrams have been completed | 0.48 | 0.27 | 0.24 | 0.00 | 0.00 | 0.76 | 0.24
3.5 Interface requirements are defined at the concept level | 0.24 | 0.70 | 0.06 | 0.00 | 0.00 | 0.94 | 0.06
3.6 Inventory of external interfaces is completed | 0.24 | 0.33 | 0.42 | 0.00 | 0.00 | 0.58 | 0.42
3.7 Data engineering units are identified and documented | 0.06 | 0.45 | 0.24 | 0.21 | 0.03 | 0.52 | 0.45
3.8 Integration concept and other planning documents have been modified/updated based on preliminary analyses | 0.18 | 0.27 | 0.42 | 0.09 | 0.03 | 0.45 | 0.52
4.1. Semantic (IRL 1-3)
This is the stage at which we fundamentally define the integration needs and the manner in which it
will take place. From Tables 8–10 we observe that in IRLs 1–3 a single decision criterion for each IRL
is rated as critical by the respondents. For IRL 1 this is 1.1 Principal integration technologies have been
identified. This can indicate that at this level of maturity the criticality of the integration is in the proper
identification of the technologies to be integrated.
Obviously, identifying integration elements is the first step in successful integration. Though it may
seem trivial, this activity is indispensable as unknown or undefined elements can derail a project that is
well along in the development process. Application of proper time and resources at this stage is essential
in order to build a proper foundation for future planning and maturation activities. For IRL 2, we observe
that the criticality has transferred to an understanding of the input/output (I/O) for the integration.
With the elements of the system integration effort defined at IRL 1 the next step logically moves on to
the definition of the I/O requirements of the system. This was identified by SMEs as a critical step and is
needed in order to understand the type and complexity of the integrations between technology elements.
Indeed, all integration is not the same and survey results show that successful system integration is highly
dependent on the accurate understanding of the degree of work needed to successfully connect disparate
systems. This information then drives factors such as the application of cost, schedule, and resources
during later development activities.
At IRL 3, the data indicate the importance of diagramming the system interfaces. To reach this
stage of maturity requires leveraging all of the information defined previously. The identified technologies
can be mapped and the I/O requirements are drivers for how those elements are to be connected. At this
Table 11
IRL 4 decision criteria and criticality assessment (relative frequency (RF); n = 33)

IRL 4 decision criteria | Critical | Essential | Enhancing | Desirable | N/A | Cum. Critical/Essential | Cum. Enhancing/Desirable
4.1 Quality Assurance plan has been completed and implemented | 0.18 | 0.27 | 0.36 | 0.15 | 0.03 | 0.45 | 0.52
4.2 Cross technology risks have been fully identified/characterized | 0.12 | 0.52 | 0.33 | 0.03 | 0.00 | 0.64 | 0.36
4.3 Modeling & Simulation has been used to simulate some interfaces between components | 0.06 | 0.24 | 0.70 | 0.00 | 0.00 | 0.30 | 0.70
4.4 Formal system architecture development is beginning to mature | 0.09 | 0.52 | 0.36 | 0.03 | 0.00 | 0.61 | 0.39
4.5 Overall system requirements for end users' application are known/baselined | 0.24 | 0.55 | 0.15 | 0.06 | 0.00 | 0.79 | 0.21
4.6 Systems Integration Laboratory/Software test-bed tests using available integration technologies have been completed with favorable outcomes | 0.09 | 0.52 | 0.36 | 0.03 | 0.00 | 0.61 | 0.39
4.7 Low fidelity technology "system" integration and engineering has been completed and tested in a lab environment | 0.06 | 0.36 | 0.52 | 0.06 | 0.00 | 0.42 | 0.58
4.8 Concept of Operations, use cases and Integration requirements are completely defined | 0.12 | 0.30 | 0.55 | 0.00 | 0.03 | 0.42 | 0.55
4.9 Analysis of internal interface requirements is completed | 0.09 | 0.61 | 0.27 | 0.03 | 0.00 | 0.70 | 0.30
4.10 Data transport method(s) and specifications have been defined | 0.12 | 0.36 | 0.48 | 0.03 | 0.00 | 0.48 | 0.52
4.11 A rigorous requirements inspection process has been implemented | 0.27 | 0.30 | 0.21 | 0.21 | 0.00 | 0.58 | 0.42
stage the system truly begins to take shape as an interconnected system and the functionality of the parts
can be seen from a system perspective. In many cases, development projects tend to bypass or minimize
this stage because of time or funding constraints. However, the lack of upfront planning comes back in
the form of reduced or unintended functionality later in development that can lead to even larger time
and resource hits. Only by completing a comprehensive mapping of the system early in development
can the true magnitude of the task be understood and successfully planned for.
In looking back at the key identified elements of the semantic stage we see a clear flow mapped out by
the integration SMEs. If we consider the fundamental components of an integration effort to be the technologies, their identified linkage (e.g. I/O), and a representation of this relationship (e.g. architecture), then
our data reflect this in 1.1 Principal integration technologies have been identified, 2.2 Inputs/outputs
for principal integration technologies are known, characterized and documented, and 3.4 High-level
system interface diagrams have been completed. This progression is in keeping with the best practices
laid out by numerous studies and system engineering guides and reflects a steady evolution of knowledge
from the time that the required components are identified until a formal architecture is developed.
Table 12
IRL 5 decision criteria and criticality assessment (relative frequency (RF); n = 33)

IRL 5 decision criteria | Critical | Essential | Enhancing | Desirable | N/A | Cum. Critical/Essential | Cum. Enhancing/Desirable
5.1 An Interface Control Plan has been implemented (i.e., Interface Control Document created, Interface Control Working Group formed, etc.) | 0.33 | 0.58 | 0.06 | 0.00 | 0.03 | 0.91 | 0.06
5.2 Integration risk assessments are ongoing | 0.06 | 0.48 | 0.45 | 0.00 | 0.00 | 0.55 | 0.45
5.3 Integration risk mitigation strategies are being implemented & risks retired | 0.03 | 0.52 | 0.39 | 0.06 | 0.00 | 0.55 | 0.45
5.4 System interface requirements specification has been drafted | 0.39 | 0.36 | 0.24 | 0.00 | 0.00 | 0.76 | 0.24
5.5 External interfaces are well defined (e.g., source, data formats, structure, content, method of support, etc.) | 0.27 | 0.55 | 0.18 | 0.00 | 0.00 | 0.82 | 0.18
5.6 Functionality of integrated configuration items (modules/functions/assemblies) has been successfully demonstrated in a laboratory/synthetic environment | 0.21 | 0.52 | 0.27 | 0.00 | 0.00 | 0.73 | 0.27
5.7 The Systems Engineering Management Plan addresses integration and the associated interfaces | 0.15 | 0.18 | 0.33 | 0.12 | 0.21 | 0.33 | 0.45
5.8 Integration test metrics for end-to-end testing have been defined | 0.12 | 0.33 | 0.52 | 0.03 | 0.00 | 0.45 | 0.55
5.9 Integration technology data has been successfully modeled and simulated | 0.06 | 0.67 | 0.18 | 0.09 | 0.00 | 0.73 | 0.27
4.2. Syntactic (IRL 4-7)
For IRLs 4 and 5, we see less clarity in the identification of IRL decision criteria with more ambiguity
in what is most important. This is not too different from what has been described with TRL, in that the
transition from TRL 3 to 4 is the most ill defined and difficult to determine [1]. A great deal of this
uncertainty can be attributed to the broad array of activities taking place at this stage of development, many
of which are highly dependent on the type of project being worked. Depending on the complexity, goals,
and knowledge base of work being undertaken, key activities could vary dramatically. For an effort that
is truly revolutionary and untested, significantly more attention would be spent on risk analysis, quality
assurance, and modeling and simulation whereas projects involving work of a more known quantity
would be justified in focusing less in these areas and instead leveraging the significant number of lessons
learned from projects that have gone before them. As reflected by the tightly grouped results, all criteria
are important considerations and should receive attention while those that are of greatest impact to the
project should be identified via careful consideration of project needs, priorities and risks.
For IRLs 6 and 7 we begin to see more clarity again, as IRL 6 shows two decision criteria as being
critical. This is reflective of the common string of development activities that begin to again reign
supreme independent of the type of project being worked. As the technology elements are brought
Table 13
IRL 6 decision criteria and criticality assessment (relative frequency (RF); n = 33)

IRL 6 decision criteria | Critical | Essential | Enhancing | Desirable | N/A | Cum. Critical/Essential | Cum. Enhancing/Desirable
6.1 Cross technology issue measurement and performance characteristic validations completed | 0.27 | 0.39 | 0.33 | 0.00 | 0.00 | 0.67 | 0.33
6.2 Software components (operating system, middleware, applications) loaded onto subassemblies | 0.45 | 0.33 | 0.12 | 0.03 | 0.06 | 0.79 | 0.15
6.3 Individual modules tested to verify that the module components (functions) work together | 0.48 | 0.42 | 0.09 | 0.00 | 0.00 | 0.91 | 0.09
6.4 Interface control process and document have stabilized | 0.09 | 0.48 | 0.36 | 0.03 | 0.03 | 0.58 | 0.39
6.5 Integrated system demonstrations have been successfully completed | 0.21 | 0.58 | 0.15 | 0.06 | 0.00 | 0.79 | 0.21
6.6 Logistics systems are in place to support integration | 0.12 | 0.42 | 0.27 | 0.18 | 0.00 | 0.55 | 0.45
6.7 Test environment readiness assessment completed successfully | 0.06 | 0.52 | 0.33 | 0.06 | 0.03 | 0.58 | 0.39
6.8 Data transmission tests completed successfully | 0.18 | 0.64 | 0.06 | 0.06 | 0.06 | 0.82 | 0.12
Table 14
IRL 7 decision criteria and criticality assessment (relative frequency (RF); n = 33)

IRL 7 decision criteria | Critical | Essential | Enhancing | Desirable | N/A | Cum. Critical/Essential | Cum. Enhancing/Desirable
7.1 End-to-end Functionality of Systems Integration has been successfully demonstrated | 0.61 | 0.18 | 0.21 | 0.00 | 0.00 | 0.79 | 0.21
7.2 Each system/software interface tested individually under stressed and anomalous conditions | 0.33 | 0.55 | 0.12 | 0.00 | 0.00 | 0.88 | 0.12
7.3 Fully integrated prototype demonstrated in actual or simulated operational environment | 0.42 | 0.45 | 0.09 | 0.03 | 0.00 | 0.88 | 0.12
7.4 Information control data content verified in system | 0.24 | 0.55 | 0.18 | 0.00 | 0.03 | 0.79 | 0.18
7.5 Interface, Data, and Functional Verification | 0.33 | 0.55 | 0.09 | 0.03 | 0.00 | 0.88 | 0.12
7.6 Corrective actions planned and implemented | 0.15 | 0.48 | 0.27 | 0.09 | 0.00 | 0.64 | 0.36
together and the interfaces are fully defined and made to function, an urgent need to initiate testing arises
in development efforts. In order to mitigate the difficulty of large system testing later in the
development cycle, it is viewed as a critical step that smaller elements or modules of functionality be
flexed in order to assess the completeness of their integration (see 6.3 Individual modules tested to
verify that the module components (functions) work together). This then evolves as these modules are
further integrated into an overarching functional system for continued testing. For IRL 7 we indicate
Table 15
IRL 8 decision criteria and criticality assessment (relative frequency (RF); n = 33)

IRL 8 decision criteria | Critical | Essential | Enhancing | Desirable | N/A | Cum. Critical/Essential | Cum. Enhancing/Desirable
8.1 All integrated systems able to meet overall system requirements in an operational environment | 0.85 | 0.12 | 0.03 | 0.00 | 0.00 | 0.97 | 0.03
8.2 System interfaces qualified and functioning correctly in an operational environment | 0.61 | 0.36 | 0.03 | 0.00 | 0.00 | 0.97 | 0.03
8.3 Integration testing closed out with test results, anomalies, deficiencies, and corrective actions documented | 0.39 | 0.52 | 0.09 | 0.00 | 0.00 | 0.91 | 0.09
8.4 Components are form, fit, and function compatible with operational system | 0.42 | 0.48 | 0.06 | 0.03 | 0.00 | 0.91 | 0.09
8.5 System is form, fit, and function design for intended application and operational environment | 0.42 | 0.45 | 0.09 | 0.03 | 0.00 | 0.88 | 0.12
8.6 Interface control process has been completed/closed-out | 0.24 | 0.45 | 0.24 | 0.06 | 0.00 | 0.70 | 0.30
8.7 Final architecture diagrams have been submitted | 0.36 | 0.12 | 0.42 | 0.09 | 0.00 | 0.48 | 0.52
8.8 Effectiveness of corrective actions taken to close out principal design requirements has been demonstrated | 0.24 | 0.48 | 0.24 | 0.03 | 0.00 | 0.73 | 0.27
8.9 Data transmission errors are known, characterized and recorded | 0.36 | 0.33 | 0.21 | 0.09 | 0.00 | 0.70 | 0.30
8.10 Data links are being effectively managed and process improvements have been initiated | 0.18 | 0.52 | 0.27 | 0.03 | 0.00 | 0.70 | 0.30
that end-to-end testing (see 7.1 End-to-end Functionality of Systems Integration has been successfully
demonstrated) is critical before moving to our next phase – Pragmatic (or operation). We believe this
is consistent with prescribed system development phases [10]. Unfortunately, many programs see this
critical end-to-end testing phase squeezed in a race to field a capability or stay on schedule. In order to
successfully pass the IRL 7 stage, however, it is essential that a complete and thorough test of the newly
developed system be conducted to prove that the functionality is as desired and that the reliability of the
system is suitable for operation.
4.3. Pragmatic (IRL 8–9)
Since Pragmatic addresses the operational context of the integration, it is not surprising that decision
criteria such as meeting requirements become paramount. At this phase of system maturation, developmental and operational testing activities are used to determine the degree to which the system meets the
requirements outlined for the effort at project initiation (8.1 All integrated systems able to meet overall
system requirements in an operational environment; 8.2 System interfaces qualified and functioning
correctly in an operational environment).
These activities ensure that the system can function fully not only in a laboratory or experimental
situation but in a realistic environment where many factors cannot be readily controlled or anticipated.
Table 16
IRL 9 decision criteria and criticality assessment (relative frequency (RF); n = 33)

IRL 9 decision criteria | Critical | Essential | Enhancing | Desirable | N/A | Cum. Critical/Essential | Cum. Enhancing/Desirable
9.1 Fully integrated system has demonstrated operational effectiveness and suitability in its intended or a representative operational environment | 0.82 | 0.09 | 0.09 | 0.00 | 0.00 | 0.91 | 0.09
9.2 Interface failures/failure rates have been fully characterized and are consistent with user requirements | 0.64 | 0.27 | 0.06 | 0.03 | 0.00 | 0.91 | 0.09
9.3 Lifecycle costs are consistent with user requirements and lifecycle cost improvement initiatives have been initiated | 0.24 | 0.42 | 0.21 | 0.09 | 0.03 | 0.67 | 0.30
Unfortunately, in recent years there has been a trend towards the waiving of requirements not attained by
the system late in the design cycle. Instead of ensuring that the system is fully capable, the symptoms of
a dysfunctional integration process often result in the acceptance of a system that is of a lesser capability
than was desired or needed. This is one of the shortcomings that the development of a rigorous integration
scale is intended to mitigate. The final stage of integration maturity, IRL 9, can only be attained after
a system has truly been flexed by the operator, and is independent of the type of project undertaken.
The important criteria principally take into account quantification and demonstration in the operational
environment (9.1 Fully integrated system has demonstrated operational effectiveness and suitability in
its intended or a representative operational environment) and failure rate characterization (9.2 Interface
failures/failure rates have been fully characterized and are consistent with user requirements), both of which
were rated highly by SMEs. At this final stage the fruits of a successful system maturation process can be
seen through a highly functional capability with robust reliability. An inability to achieve satisfactory
results should be prevented through the proper application and tracking of Technology and Integration
Readiness Levels.
4.4. Summary and future research
Theoretically, the two activities of technology development and integration could be represented on a linear plane. However, we do not contend that these developments follow parallel paths; rather, there is a dynamic, non-linear causality akin to the embedded systems engineering life cycle (or “V within the V within the V. . . ”). We presented IRL as a management tool built from the 7-layer Open Systems Interconnection (OSI) model used to build computer networks. IRL has been designed to be used in conjunction with an established technology metric, i.e. TRL, to provide a system-level readiness assessment. Ideally, these two metrics used together can provide a common language that improves communication about integration among scientists, engineers, managers, and other relevant stakeholders within documented systems engineering guidance (e.g. [10,16,17,28]). We complemented the IRL with a checklist that removes some of the subjectivity that exists in many maturity metrics.
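As a concrete illustration of pairing the two metrics, the minimal sketch below records a TRL/IRL assessment for a single integration between two technologies. The class, field names, and the "readiness gap" heuristic are assumptions for illustration only; they are not the authors' checklist or tool.

```python
# A minimal sketch, assuming a simple record structure, of tracking a TRL/IRL
# pair for one integration. Names, values, and the gap heuristic are assumed.
from dataclasses import dataclass

@dataclass
class IntegrationAssessment:
    tech_a: str
    tech_b: str
    trl_a: int   # TRL of technology A (1-9 scale)
    trl_b: int   # TRL of technology B (1-9 scale)
    irl: int     # IRL of the integration between A and B (1-9 scale)

    def readiness_gap(self) -> int:
        # One possible flag: how far integration maturity lags behind the
        # less mature of the two technologies it connects.
        return min(self.trl_a, self.trl_b) - self.irl

# Example: two fairly mature technologies joined by an immature integration.
link = IntegrationAssessment("sensor", "data bus", trl_a=7, trl_b=8, irl=4)
print(link.readiness_gap())   # 3 -> integration maturity lags technology maturity
```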
This said, IRL is not a complete solution to integration maturity determination; it is, however, a tool that increases stakeholder communication, something that has proven to be critical in all of the case studies presented [21,24,27,32]. Yet the case studies indicate:
– IRL lacks the ability to assess criticality and R&D effort
– IRL assessment of complex, net-centric systems requires a more quantitative algorithm to reduce
multiple integrations to a single assessment
– IRL does not evaluate cost and schedule
Additionally, for this study the participants were asked to assess the criticality of each IRL metric
within the context of the IRL it was listed under rather than being allowed to identify metrics that they
considered useful in assessing the IRL as defined. In other words, participants were given a “canned”
list of metrics and a “fixed” context (i.e., the IRL construct and the specific IRL that a set of metrics
was assigned to). Therefore, it is recommended that additional work be conducted (perhaps via multiple
working groups composed of seasoned practitioners or SMEs) to review and modify the current list of
IRL metrics while using the criticality assessment as a baseline. This effort should address two aspects
of the IRL checklist: the metrics themselves and the weight that should be assigned to each based on
criticality data. Additionally, the issue of whether or not the integration type is an important factor
concerning how an IRL is determined needs to be examined.
Finally, integration is a complex topic and the respondents may have been biased by the type of
integration experience they have had (e.g., software, hardware, software and hardware, etc.); the wording
of each IRL metric may have been interpreted differently by the participants; and some decision criteria
may belong within a different IRL scale, thereby altering its criticality.
IRL is not without its limitations, and it is these issues that must be the focus of future work. One
example resulting from the case study assessment is that IRL is able to uncover integration maturity
concerns even if the TRLs of the integrating technologies are high. This was not the case with all of
the metrics; ITI, however, is able to factor R&D effort and technology criticality into the assessment, assuming technology maturity is still low. It may be that a hybrid metric that uses both IRL and ITI is
the solution to this situation.
Future work includes:
– Apply IRL to systems under development to better understand how the metric works in practice.
– What is the impact of emergent behavior in system integration, and how does IRL handle this?
– At what level of the system architecture does IRL get applied?
– What are the dynamics of progressing through the IRL scale?
– Incorporate ITI assessment into the TRL/IRL assessment process:
  ∗ III (Integrated Integration Index)?
  ∗ In a net-centric architecture does the simple summarization in ITI still apply?
– Determine how it is possible to simplify complex architectures to represent system-level readiness (a sketch of one possible graph-based formulation follows this list):
  ∗ System Readiness Levels (SRL) = f(TRL, IRL)? [34]
  ∗ What is the value of an SRL?
  ∗ If system architectures can be represented as graphs, how can graph theory be applied to determine an SRL as a function of TRL, IRL, and possibly ITI?
  ∗ How can UML/SysML be applied to create a framework for system maturity assessment? Can a new “view” be created within SysML specifically to address TRL, IRL, and SRL?
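One way to make the graph-theoretic question above concrete is sketched below: technologies are nodes carrying TRLs, integrations are edges carrying IRLs, and a candidate system-level score is formed from normalized TRL-IRL products. The formula and the example values are assumptions for illustration only, not the SRL formulation of [34] or [35].

```python
# A hedged sketch of a graph-based system readiness aggregate: nodes = technologies
# (TRL), edges = integrations (IRL). Formula and values are illustrative assumptions.

trl = {"A": 7, "B": 8, "C": 5}               # technology: TRL (1-9), assumed values
irl = {("A", "B"): 6, ("B", "C"): 3}         # integration: IRL (1-9), assumed values

def candidate_srl(trl, irl):
    """Average, over technologies, of normalized TRL x IRL for incident integrations."""
    scores = []
    for node, t in trl.items():
        incident = [i for edge, i in irl.items() if node in edge]
        if not incident:                      # isolated technology: normalized TRL alone
            scores.append(t / 9)
            continue
        contributions = [(t / 9) * (i / 9) for i in incident]
        scores.append(sum(contributions) / len(contributions))
    return sum(scores) / len(scores)

print(round(candidate_srl(trl, irl), 2))      # 0.38 for the values above
```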
References

[1] M. Austin, J. Zakar, D. York, L. Pettersen and E. Duff, A Systems Approach to the Transition of Emergent Technologies into Operational Systems – Herding the Cats, the Road to Euphoria and Planning for Success, International Conference of the International Council on Systems Engineering, INCOSE, Netherlands, 2008.
[2] J.S. Beasley, Networking, Pearson Education, Inc., Upper Saddle River, NJ, 2004.
[3] J.W. Bilbro, A Suite of Tools for Technology Assessment, Technology Maturity Conference: Multi-Dimensional Assessment of Technology Maturity, AFRL, Virginia Beach, VA, 2007.
[4] R.T. Brooks and A. Sage, System of systems integration and test, Information Knowledge Systems Management 5 (2005/2006), 261–280.
[5] D.M. Buede, The Engineering Design of Systems, John Wiley & Sons, New York, 2000.
[6] J. Connelly, K. Daues, R.K. Howard and L. Toups, Definition and Development of Habitation Readiness Level (HRL) for Planetary Surface Habitats, 10th Biennial International Conference on Engineering, Construction, and Operations in Challenging Environments, Earth and Space, 2006, 81.
[7] D. Cundiff, Manufacturing Readiness Levels (MRL), Unpublished Whitepaper, 2003.
[8] DoD, Levels of Information Systems Interoperability, 1998.
[9] DoD, Technology Readiness Assessment (TRA) Deskbook, in: DUSD(S&T), ed., Department of Defense, 2005.
[10] DoD, Operation of the Defense Acquisition System, Instruction 5000.02, Department of Defense, Washington, DC, 2008.
[11] T. Dowling and T. Pardoe, in: TIMPA – Technology Insertion Metrics, M.o. Defense, ed., QINETIQ, 2005, p. 60.
[12] J. Fang, S. Hu and Y. Han, A Service Interoperability Assessment Model for Service Composition, IEEE International Conference on Services Computing (SCC’04), 2004.
[13] GAO, Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes, in: GAO, ed., 1999.
[14] C.P. Graettinger, S. Garcia, J. Siviy, R.J. Schenk and P.J.V. Syckle, Using the “Technology Readiness Levels” Scale to Support Technology Management in the DoD’s ATD/STO Environments, in: SEI, ed., Carnegie Mellon, 2002.
[15] M. Hobday, H. Rush and J. Tidd, Innovation in complex products and systems, Research Policy 29(7–8) (2000), 793–804.
[16] IEEE, Systems and software engineering – System life cycle processes, IEEE 15288, 2008.
[17] INCOSE, INCOSE Systems Engineering Handbook, Version 3, International Council on Systems Engineering, 2007.
[18] ISO, Information Technology – Open Systems Interconnection – Basic Reference Model: The Basic Model, ISO/IEC 7498-1, 1994, 1–68.
[19] JPL, Mars Global Surveyor: Mission Operations Specifications: Volume 5, Part 1, California Institute of Technology, 1996, 1–51.
[20] L.J. Lanzerotti, in: Assessment of Options for Extending the Life of the Hubble Space Telescope: Final Report, N.R. Council, ed., The National Academies Press, Washington, DC, 2005.
[21] J.L. Lions, ARIANE 5: Flight 501 Failure, Report by the Inquiry Board, Paris, 1996.
[22] J.C. Mankins, Technology Readiness Levels, NASA, 1995.
[23] J.C. Mankins, Approaches to Strategic Research and Technology (R&T) Analysis and Road Mapping, Acta Astronautica 51(1–9) (2002), 3–21.
[24] J.J. Mattice, in: Hubble Space Telescope: Systems Engineering Case Study, C.f.S. Engineering, ed., Air Force Institute of Technology, 2005.
[25] D.E. Mosher, Understanding the Extraordinary Cost Growth of Missile Defense, Arms Control Today (2000), 9–15.
[26] S. Nambisan, Complementary product integration by high-technology new ventures: The role of initial technology strategy, Management Science 48(3) (2002), 382–398.
[27] NASA, Mars Climate Orbiter, Mishap Investigation Board: Phase I Report, 1999.
[28] NASA, NASA Systems Engineering Handbook, NASA/SP-2007-6105, National Aeronautics and Space Administration, Washington, DC, 2007.
[29] E.G. Nilsson, E.K. Nordhagen and G. Oftedal, Aspects of Systems Integration, 1st International Conference on Systems Integration, 1990, 434–443.
[30] A.M. Orme, H. Yao and L.H. Etzkorn, Coupling Metrics for Ontology-Based Systems, IEEE Software (2006), 102–108.
[31] A.M. Orme, H. Yao and L.H. Etzkorn, Indicating ontology data quality, stability, and completeness throughout ontology evolution, Journal of Software Maintenance and Evolution 19(1) (2007), 49–75.
[32] S.R. Sadin, F.P. Povinelli and R. Rosen, The NASA Technology Push Towards Future Space Mission Systems, Acta Astronautica 20 (1989), 73–77.
[33] P.A. Sandborn, T.E. Herald, J. Houston and P. Singh, Optimum Technology Insertion Into Systems Based on the Assessment of Viability, IEEE Transactions on Components and Packaging Technologies 26(4) (2003), 734–738.
[34] B. Sauser, J. Ramirez-Marquez, D. Henry and D. DiMarzio, A System Maturity Index for the Systems Engineering Life Cycle, International Journal of Industrial and Systems Engineering 3(6) (2008), 673–691.
[35] B. Sauser, D. Verma, J. Ramirez-Marquez and R. Gove, From TRL to SRL: The Concept of System Readiness Levels, Conference on Systems Engineering Research (CSER), Los Angeles, CA, USA, 2006.
[36] R. Shishko, D.H. Ebbeler and G. Fox, NASA Technology Assessment Using Real Options Valuation, Systems Engineering 7(1) (2003), 1–12.
[37] J. Smith, An Alternative to Technology Readiness Levels for Non-Developmental Item (NDI) Software, Carnegie Mellon, Pittsburgh, PA, 2004.
[38] R. Valerdi and R.J. Kohl, An Approach to Technology Risk Management, Engineering Systems Division Symposium, MIT, Cambridge, MA, 2004.
[39] R.J. Watts and A.L. Porter, R&D cluster quality measures and technology maturity, Technological Forecasting & Social Change 70 (2003), 735–758.
Brian Sauser holds a B.S. from Texas A&M University in Agricultural Development with an emphasis
in Horticulture Technology, an M.S. from Rutgers, The State University of New Jersey in Bioresource
Engineering, and a Ph.D. from Stevens Institute of Technology in Project Management. He is currently
an Assistant Professor in the School of Systems & Enterprises at Stevens Institute of Technology. Before
joining Stevens in 2005, he spent more than 12 years working in government, industry, and academia
both as a researcher/engineer and director of programs. His research interest is in the management of
complex systems. This includes system and enterprise maturity assessment and the advancement of a
foundational science of systems thinking. He is currently the Director of the Systems Development and
Maturity Laboratory (http://www.systems-development-maturity.com), which seeks to advance the state
of knowledge and practice in systems maturity assessment. He teaches courses in Project Management of Complex Systems,
Designing and Managing the Development Enterprise, and Systems Thinking. In addition, he is a National Aeronautics and
Space Administration Faculty Fellow, Editor-in-Chief of the Systems Research Forum, an Associate Editor of the IEEE Systems
Journal, and the Associate Editor of the ICST Transactions on Systomics, Cybernetics, and e-Culture.
Ryan Gove is a Systems Engineer at Lockheed Martin Simulation, Training, and Support in Orlando, Florida, where he supports Turn-Key Flight Training development. Ryan received his B.S. in Computer Engineering from the University of Delaware and his M.E. in Systems Engineering from Stevens Institute of Technology. He is currently pursuing a Ph.D. in Systems Engineering at Stevens. His current research
interests include system maturity assessment frameworks as applied to the management of complex
system design, integration, and operation. He is also interested in system of systems management, graph
theory, and UML/SysML applications.
Eric Forbes is a marketing and business development specialist with 3M’s Aerospace and Aircraft
Maintenance Department focusing on the application of advanced adhesive, acoustic, and thermal technologies to the aerospace market. During the writing of this paper, Eric worked as a systems engineer
with the Northrop Grumman Corporation’s Littoral Combat Ship Mission Package Integrator program
and was part of the joint industry, government and academia development of the System Readiness Level
methodology. His previous work within Northrop Grumman included research and development, systems
engineering, and business development activities for a wide cross section of missile systems and C4ISR
projects. Forbes earned a Bachelor’s degree in Aeronautical and Astronautical Engineering from the
University of Washington and a Master’s degree in Aerospace Engineering from the Georgia Institute of
Technology.
46
B. Sauser et al. / Integration maturity metrics: Development of an integration readiness level
Jose Emmanuel Ramirez-Marquez received the M.Sc. degree in statistics from the Universidad
Nacional Autonoma de Mexico, Mexico City, Mexico, and the M.Sc. and Ph.D. degrees in industrial
engineering from Rutgers University, New Brunswick, NJ. He is currently an Assistant Professor in the
School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ. He was a Fulbright
Scholar. His current research interests include the reliability analysis and optimization of complex
systems, the development of mathematical models for sensor network operational effectiveness, and
the development of evolutionary optimization algorithms. He has conducted funded research for both
private industry and government. He is the author or coauthor of more than 50 refereed manuscripts
related to these areas in technical journals, book chapters, conference proceedings, and industry reports.
Ramirez-Marquez has presented his research findings both nationally and internationally at conferences such as the Institute for
Operations Research and Management Science (INFORMS), Industrial Engineering Research Conference (IERC), Applied
Reliability Symposium (ARSym), and European Safety and Reliability (ESREL). He is the Director of the Quality Control and
Reliability (QCRE) Division Board of the Institute of Industrial Engineers and is a member of the Technical Committee on
System Reliability for the ESRA.
Information Knowledge Systems Management 9 (2010) 75–76
DOI 10.3233/IKS-2010-0132
IOS Press
Book Review
Nguyen, Ngoc Thanh (2008), Advanced Methods for Inconsistent Knowledge Management, London,
Springer-Verlag, 351 pages.
It has long been a common practice to distinguish between data, information, and knowledge. Information is data arranged in meaningful patterns such that it is potentially useful for decision making.
Knowledge is something that is believed, and is true, effective, and reliable. Knowledge is necessarily
associated with an experiential context, and so is generally more valuable than information. However,
knowledge is much harder to assimilate, understand, transfer, and share than is information. Many organizational scholars, seeking enhanced performance approaches, have begun to investigate and advocate
the initiation of knowledge management efforts. Many organizations are acquiring systems to enable
the sharing, exchange, and integration of knowledge. Knowledge, which is created in the minds of individuals, is often of little value to an enterprise unless it is shared. Managers are rapidly learning that
just because technology exists, knowledge will not necessarily flow freely throughout an organization.
Cultural issues are regularly cited as a major concern of those implementing knowledge management
initiatives. It is generally well recognized that the benefit of knowledge management initiatives will not
be realized unless the associated cultural, management, and organizational elements are aligned well,
individually and collectively.
There are two primary contemporary schools of thought on knowledge management: one that focuses
on existing explicit knowledge, and one that focuses on the building or creation of knowledge. The
first school focuses almost entirely upon information technology tools, whereas the second focuses on
knowledge management as a transdisciplinary subject with major behavioral and organizational, as well
as technology, concerns. Literature in the computer science and artificial intelligence disciplines often
focuses primarily on explicit knowledge and associated tools and technology. It is not uncommon in this
first school to have enterprise knowledge management defined as the formal management of resources to
facilitate access and reuse of knowledge that is generally enabled by advanced information technology.
Works in the second school of thought generally focus on knowledge generation and knowledge creation.
There is a major environmental context associated with this knowledge. Knowledge is generally thought
to be a powerful source of innovation. In this school, knowledge management is viewed from a holistic
point of view that encompasses both tacit and explicit knowledge.
This work is primarily in the first school of knowledge management. It is primarily concerned with
what the author calls inconsistent knowledge management (IKM) and deals extensively with methods
for realizing the inconsistent content of knowledge, when this exists. The table of contents reflects well
the subjects considered by the author:
1. Inconsistency of Knowledge (12 pages devoted primarily to an overview of the book);
2. Model of Knowledge Conflict (34 pages devoted to consistency and conflict profiles);
3. Consensus as a Tool for Conflict Solving (54 pages devoted to analysis for consensus);
4. Model for Knowledge Integration (23 pages devoted to introducing analysis for integration);
5. Processing Inconsistency on the Syntactic Level (41 pages devoted to syntactic analysis);
6. Processing Inconsistency on the Semantic Level (37 pages devoted to semantic analysis);
7. Consensus for Fuzzy Conflict Profiles (20 pages on fuzzy system consensus models);
8. Processing Inconsistency of Expert Knowledge (17 pages on knowledge consistency analysis);
9. Ontology Integration (22 pages on algorithms for conflict resolution through building consensus);
10. Application of Inconsistency Resolution Methods in Intelligent Learning Systems (44 pages on
algorithms for intelligent learning system based consensus building);
11. Processing Inconsistency in Information Retrieval (27 pages on consensus based approaches for
conflict resolution and agent based reasoning in information retrieval);
12. Conclusions (2 pages devoted to a summary of the advantages of agent-based metasearch for inconsistent knowledge management);
References (163 references are provided, primarily to the computer science based analysis methods
for inconsistent knowledge management discussed in the 12 chapters).
The first part of the book comprises Chapters 2 and 3. In these, the author presents theoretical foundations for conflict analysis and, to a lesser extent, conflict resolution. Chapters 4 through 9
comprise the second part of the book and deal with knowledge inconsistency at both a semantic level and
a syntactic level. He develops both representation structures for knowledge inconsistency and detailed
analysis algorithms for inconsistency processing. Chapters 10 and 11 comprise the third part of the
book and deal with inconsistency resolution approaches in learning systems and the development of a search engine that provides recommendations based on agent-based information retrieval. Conclusions
and directions the author suggests for future research are discussed in a very brief Chapter 12, the final
chapter of the book.
In general, this is a worthy book. The author covers a wealth of material from an advanced computer
science perspective and provides a work that should be particularly useful to graduate level computer
science students, and also those in information management and knowledge management who aspire to
dealing with knowledge inconsistencies using the approaches discussed.
Andrew P. Sage
Co-Editor IKSM
University Professor and First American Bank Professor
Department of Systems Engineering and Operations Research
George Mason University
Fairfax, VA 22030, USA