WORKING PAPER

Lessons from Defense Planning and Analysis for Thinking About Systems of Systems

PAUL K. DAVIS

WR-459-OSD
January 2007

Prepared for the Symposium on Complex System Engineering, January 11-12, 2006, RAND, Santa Monica, California

This product is part of the RAND working paper series, which is intended to share researchers' current work and to solicit informal comments. This working paper has been approved for circulation by RAND's National Security Research Division, but it has not been formally edited or peer reviewed. Unless otherwise indicated, working papers can be quoted and cited without permission of the author, provided the source is clearly referred to as a working paper. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors. RAND is a registered trademark.

ABSTRACT

One issue in systems-of-systems (SoS) work is how to approach modeling and analysis. This paper draws upon experience from another domain, "defense planning" or "force planning," for lessons learned that carry over into the system engineering of SoS. The bottom line is that we need to resurrect the art, science, and prestige of higher-level design, while employing modern methods of modeling and analysis, such as exploratory analysis and multi-resolution modeling. Although doing so may seem like common sense to some, the psychological and organizational obstacles are high. Leadership by the senior system engineers at this conference could help a great deal.

1. INTRODUCTION

1.1 PREFACING COMMENTS

In the spirit of this interdisciplinary conference, I draw in this paper upon past research in another domain for methods that can be used in the study of systems of systems (SoS). That "other domain" is defense planning, or what is sometimes called force planning. Ironically, some of the research on which I draw was inspired by fine examples from engineering. Thus, in a sense, I am going full circle. I should acknowledge some key assumptions at the outset:

- Designing and building SoS is different in many respects from normal engineering as it is practiced.
- The paradigm of complex adaptive systems is valid for addressing many of the most fundamental issues in SoS work.

The paper is written for those who need no further elaboration or justification of these assumptions.

1.2 CLASSIC DEFENSE PLANNING

1.2.1 Characteristics

Until about a decade ago, many veteran defense planners believed that there was a relatively well-defined methodology to be followed. Major ingredients included: (1) identifying the principal threats (e.g., the USSR); (2) developing planning scenarios for those threats; and (3) analyzing and choosing among alternative ways for the United States to cope in an affordable way.

Figure 1 shows a cartoonish depiction that roughly mirrors analysis of ways to defend Western Europe during the cold war. The top curve shows the assumed buildup of enemy forces versus time. If one believed that the enemy force level should never be more than 50% higher than the friendly force level (a theater-level force ratio of 1.5 to 1), then one could draw a "requirement curve" as shown. In the absence of changes to the defense program, friendly forces would have a severe shortfall in the early portions of the scenario (bottom dashed line). If war actually began during such a period, defeat would be expected. Thus, one would consider alternative defense programs. For the alternative depicted (the higher dashed line), friendly force levels would just barely meet the requirement. This might be accomplished, for example, by buying equipment sets to store near the defense zone and buying aircraft to permit rapid deployment of personnel to "marry up" with their equipment. The cost to the United States alone might be substantial, but the program would meet the requirement and all would be well. This, in essence, was the logic that underlay the POMCUS program of the 1970s and 1980s, in which enough equipment sets were to be deployed so as to permit building to ten U.S. divisions in ten days on the NATO central front.

Figure 1—Schematic Version of Point-Scenario Planning During Cold War
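The arithmetic underlying such a chart is simple enough to sketch in a few lines of code. The following is a minimal illustration in Python, with invented numbers (the actual analyses used detailed force and mobility data), of the point-scenario construct: a threat buildup, a requirement curve derived from the 1.5-to-1 force-ratio judgment, and the shortfalls of competing programs.

```python
# A minimal numerical sketch of the point-scenario logic behind Figure 1.
# All values are invented for illustration.
FORCE_RATIO_LIMIT = 1.5  # maximum tolerable enemy:friendly force ratio

def requirement(enemy_level):
    """Friendly force level needed so the ratio never exceeds 1.5 to 1."""
    return enemy_level / FORCE_RATIO_LIMIT

# Assumed enemy buildup (division-equivalents) at days 0, 10, 20, 30, 40
enemy = [20, 40, 60, 70, 75]
# Friendly buildup under the current program, and under an alternative with
# pre-positioned equipment plus airlifted personnel (hypothetical values)
current = [10, 15, 25, 40, 50]
alternative = [14, 27, 40, 47, 50]

for day, e, cur, alt in zip(range(0, 50, 10), enemy, current, alternative):
    req = requirement(e)
    print(f"day {day:2d}: need {req:5.1f}; "
          f"current shortfall {max(0.0, req - cur):5.1f}; "
          f"alternative shortfall {max(0.0, req - alt):5.1f}")
```

In this style of planning, analysis reduces to closing the computed shortfall in the nominal scenario at acceptable cost.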
This planning construct was well suited to organizational processes and proved politically successful as well. It conveyed a sense that DoD had an understandable and sensible rationale for its defense program. Although people argued about many details, such as the force-ratio requirement, the approach was rather circumscribed.

Underlying such simple depictions were simulation-based analyses showing results of simulated combat day by day. They revealed a myriad of details that had to be addressed to actually achieve the buildups depicted in the simpler model. As a result, there was great effort, e.g., to avoid congestion at individual airports and seaports, to hone and exercise capabilities that might otherwise be merely theoretical, and to improve the likelihood that the United States would actually recognize and act upon warning soon enough to deploy forces rapidly. Further, the C-17 airlifter was designed so as to be highly efficient for reinforcing Western Europe. Thus, there was a great deal of richness involved in translating the simple concept into real-world capability.

1.2.2 Shortcomings

Despite such richness of detail, the concept was simple. In some respects it was positively simplistic. Many observations could be made about this style of planning, but some of its obvious attributes were:

- Its "mechanical" quality, with no explicit discussion of commanders, troops, morale, strategy, tactics, and so on. War, for the purposes of this type of planning, was treated as simply a matter of relative resources.
- An emphasis on numbers and data, which were the fodder for both simple modeling and sophisticated simulation.
- A very deterministic image of war.
- Minimal discussion of uncertainty more generally. Indeed, although analysts did some excursions, much of the planning was built around "point scenarios."

1.3 MODERN CONCEPTS OF DEFENSE PLANNING

1.3.1 Major Tenets

The shortcomings of the analysis approach described above were recognized and written up twenty years ago (Davis, 1988), and relatively detailed suggestions for improvement were described in the early 1990s (Davis, 1994). The older approach persisted, however, because it had so many practical virtues, many of them organizational and sociological. Even when the cold war ended, DoD initially attempted to extend the same analysis approach, focusing primarily on Iraq and North Korea. In time, however, it was widely recognized that something different was needed. That "something" is often called capabilities-based planning (Rumsfeld, 2001; Davis, 2002). As often happens, a good concept can be misinterpreted and misapplied, but the fundamentals of capabilities-based planning are sound and enduring.
These include:

- Confronting the problem of deep and ubiquitous uncertainty.
- Recognizing that the salient uncertainties are both simple (e.g., future enemies) and complex (e.g., how future commanders will choose to employ forces; how erroneous perceptions at the time will cause major errors by friendly and enemy leaders, commanders, and infantry soldiers; how local tribes in an occupied area will or will not unite, and in support of whom and what...).
- Recognizing that enemy commanders will likely seek to arrange circumstances, strategy, and tactics so as to maximize their odds of success.
- Worrying about such possibilities and trying to prepare accordingly so as to leave the enemy no good options.

Such elements of CBP are challenging even when resources are lush, but in practice resources are limited, and a principle of CBP is accomplishing the above while working within an economic framework.[1] That, of course, implies balancing risks, making a variety of tradeoff judgments, and ultimately making some painful decisions.

[1] Another dimension involves working within strategic and political constraints, such as when allies refuse to cooperate on some alternative, or when some expenditures are mandated by Congress for other-than-military reasons.

All of this may seem very difficult—especially dealing forthrightly with massive uncertainty—but effective planning occurs in all walks of life every day. Planning under uncertainty is entirely feasible. To the extent that there is a school solution for planning under uncertainty, it is planning for adaptiveness (Davis, 1994; Davis, 2002). This usage of "adaptiveness" (equivalent to what some mean by "agility") is shorthand for something multifaceted (Alberts and Hayes, 2003). A longer expression would be: the key to planning under uncertainty is adopting strategies that are flexible, adaptive, and robust (i.e., FAR strategies). In this more elaborated context, "flexibility" refers to the ability to deal with new missions and challenges, "adaptiveness" refers to the ability to deal with diverse circumstances, and "robustness" refers to the ability to withstand and recover gracefully from adverse shocks. Although the three words are often used synonymously in everyday English, they are used here in a way that exploits some traditional shades of meaning. Others use the words in somewhat different ways, but what matters most is recognizing and covering all three attributes. The attributes overlap, but only to some degree. This emphasis on FAR strategies was accepted by a recent National Academy panel as it recommended the way ahead for DoD modeling, simulation, and analysis (National Research Council, 2006).

1.3.2 Simple Concept, But Major Implications for Analysis

What could be less exceptionable than suggesting FAR strategies? Is this not mere common sense? The answer is no. Knowing to buy insurance and to avoid potentially ruinous bets are learned skills—for both individuals and governments. It is not accidental that the Netherlands plans for once-in-a-century events; the Dutch suffered grievously from past flooding. Whether and what the U.S. government has learned from Hurricane Katrina remains to be seen. Shifting to a happier topic, consider football teams. No professional coach imagines that he can develop a team and hone its skills suitably by focusing on a single image of next year's championship game with a particular opponent in a particular stadium and set of weather conditions. All of this has profound implications for analysis.
If the objective is to find good FAR strategies, that is very different from finding the best strategy for the most likely case. If uncertainties were modest, this would not be so, but in strategic planning uncertainties are commonly ubiquitous, large, and "deep." The word "deep" here refers to the nature, as well as the magnitude, of the uncertainty.

Another major implication concerns the types of modeling and simulation needed to support analysis of alternative FAR strategies. Since the real world is not so neatly mechanical and deterministic as in the earlier discussion, and uncertainties are everywhere, what kind of M&S is needed? And how does one even do analysis?

Exploratory Analysis. The answer, it seems to me, prominently includes "exploratory analysis" (EA). In EA, one examines the goodness of a strategy throughout the possibility space. Instead of sensitivity analysis, in which one typically has a best-estimate baseline and considers excursions by varying parameters one at a time, in EA one sees the consequences of all the possible combinations of input values. To be less abstract, suppose that in a defense problem the inputs include warning time, the axis of the enemy's attack, the officers on duty, and the real-world "sharpness" of the troops. One could have a base case and vary each of these parameters separately, but in EA one would also see cases in which the enemy did everything he possibly could, simultaneously, to maximize advantage. The result might be minimal warning time AND an unexpected axis of attack AND a situation where the defending commander is having Christmas dinner with friends AND the troops are sleepy. Even though that is a "corner" of the space, it happens to be an important corner.

Many people immediately think to use probability distributions for EA. My colleagues and I have usually avoided doing so (except for truly stochastic phenomena such as white noise, and for dealing with large numbers of smallish and apparently uncorrelated factors) because: (1) many of the relevant inputs are correlated, as in the example above (given that war actually occurs, such a combination is far more probable than some fairly low probability raised to the fourth power); (2) people do not have a stellar record in characterizing probability distributions under deep uncertainty; and (3) we want to retain maximum transparency, knowing why, and for what combinations, results are good or bad. When probability distributions are used, one is integrating over variables, which can make it difficult to understand outcomes. Conducting EA by sampling from discrete cases over the entire space might be misinterpreted as assuming that all points are equally probable, but we are merely displaying results as a function of location in the n-dimensional space, deferring the decision about what portions of the space to discard as implausible. I believe that deferring such judgments is generally a good idea. In any case, RAND has done a great deal of work on EA over the last decade or so and has published many of the methods and findings. I shall not elaborate further on these matters in this paper.[2]

[2] Many published sources exist for this material (Davis, Gompert, and Kugler, 1996; Davis et al., 2001; Davis, 2002; Davis, 2003). Colleagues Steve Bankes and Robert Lempert have published closely related work using phrases such as "robust adaptive planning" and "exploratory modeling" (Lempert, Popper, and Bankes, 2003).

The Need for Low-Resolution Models. The fundamental problem with EA is the curse of dimensionality. Given a model with thousands of inputs, varying all of them over their uncertainty ranges is beyond the capacity of any current computer—or, more fundamentally, of any analyst to understand. It is therefore exceedingly useful to have simplified models for the exploration phase of analysis. If a model has 5-20 parameters, it is relatively easy to do insightful exploratory analysis with only a desktop computer and commercial programs such as Analytica®, or more specialized display technology such as CARs®.[3]

[3] Analytica was developed at Carnegie Mellon and is marketed by Lumina Corp. CARs was developed as a spinoff of work at RAND by Steven Bankes and others at Evolving Logic.
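To make the contrast with one-at-a-time sensitivity analysis concrete, here is a minimal sketch of EA over just such a low-resolution model. The four inputs, their discrete values, and the toy outcome model are all invented for illustration; the point is the pattern of scoring every combination, including the adverse corner, and asking where the strategy fails.

```python
from itertools import product

# Discrete values spanning each uncertain input (illustrative only)
CASES = {
    "warning_days":    [2, 7, 14],
    "attack_axis":     ["expected", "unexpected"],
    "commander_ready": [True, False],
    "troop_sharpness": [0.5, 0.75, 1.0],
}

def outcome(warning_days, attack_axis, commander_ready, troop_sharpness):
    """Toy low-resolution model: a score above 1.0 means the defense holds."""
    score = 0.4 + 0.06 * warning_days        # more warning, more buildup
    if attack_axis == "unexpected":
        score -= 0.3                          # defenses mal-positioned
    if not commander_ready:
        score -= 0.2                          # slow initial reactions
    return score * troop_sharpness

# Explore the full case space rather than one-at-a-time excursions
failures = []
for values in product(*CASES.values()):
    case = dict(zip(CASES.keys(), values))
    if outcome(**case) < 1.0:
        failures.append(case)

total = 1
for v in CASES.values():
    total *= len(v)
print(f"{len(failures)} of {total} cases fail; e.g., {failures[0]}")
```

With 5-20 parameters the same loop remains feasible on a desktop; the analytic value comes from seeing where in the case space a strategy fails, not from averaging over the space.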
But the Models Must Be Appropriate. One small catch is that EA is no better than the model used for the analysis. If, for example, the model reflects a mechanistic, deterministic, no-people-to-mess-things-up attitude about the phenomenology, then the analysis may well be worthless. Some may recall the China Syndrome movie of twenty years ago or so, which vividly pointed out the obvious about nuclear power plants: engineers can do a superb job of working out technical fault trees and designing for robustness, but if they or the builders and administrators overlook shortcomings such as human greed, corruption, and sloppiness when bored, everything else may be for naught. In the modern world of defense planning, we must now demand that M&S have within them the capacity to represent all sorts of "soft and slippery" stuff, such as the occasional imperfections of our own commanders and troops, the adaptations of the enemy, the behavior of local tribes, and the emergence of low-tech "asymmetric" tactics.

Again, it may seem obvious that the models used need to be appropriate, but the reality is that legacy DoD M&S have generally not had the features needed. The shortcomings of current DoD M&S are so fundamental that there is need to broaden the concept of M&S to include human-in-the-loop simulation, human gaming, use of experts, use of historical information, and so on—all so as to increase the likelihood that "analysis," broadly construed, will take into account a sufficiently broad range of possibilities (Davis and Henninger, forthcoming; National Research Council, 2006). Figure 2 summarizes the point that a family of tools should be used, rather than relying entirely upon large simulations.

Figure 2—Relative Merits of Illustrative Items in a Family of Tools
SOURCE: National Research Council (2006); Davis and Henninger, forthcoming.

Even if we restrict ourselves to technical issues, it is often the case that simple models are overly simplistic. In particular, they may do a poor job of representing the aggregate consequences of the details over which they gloss. They may be inappropriately linear; they may assume independent probabilities; they may ignore some factors altogether. Thus, developing good simple models is often not straightforward, but there are some multi-resolution modeling methods to help (Davis and Bigelow, 1998; Davis and Bigelow, 2003).
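To convey the flavor of the multi-resolution idea (this is a sketch of the general concept, with invented variables, not the specific methods of the cited reports), consider a deployment-time calculation in which an aggregate variable, throughput, can either be treated as a direct input for fast exploration or be derived from more detailed variables when one zooms in:

```python
def days_to_close_simple(throughput, tonnage):
    """Aggregate level: throughput (tons/day) is itself the input."""
    return tonnage / throughput

def throughput_from_detail(ports, capacity_per_port, congestion_factor):
    """Aggregation mapping: derive the aggregate variable from detailed ones,
    keeping the two levels of the model family mutually consistent."""
    return ports * capacity_per_port * (1.0 - congestion_factor)

# Exploring at low resolution (fast, few parameters)...
for throughput in (500.0, 1000.0, 2000.0):     # tons/day, invented values
    print(f"throughput {throughput:6.0f}: "
          f"{days_to_close_simple(throughput, tonnage=30000.0):5.1f} days")

# ...then zooming in on an interesting case at higher resolution
t = throughput_from_detail(ports=4, capacity_per_port=400.0, congestion_factor=0.3)
print(f"derived throughput {t:.0f} tons/day -> "
      f"{days_to_close_simple(t, 30000.0):.1f} days")
```

The aggregation mapping is what keeps the levels consistent: exploration proceeds at low resolution, with selective drill-down where results look interesting or suspicious.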
Structural Uncertainties. Another reason that developing the right models is not so straightforward is "structural uncertainty." In many cases we do not even know with confidence the direction of the arrows in the causal diagrams (e.g., influence diagrams) that many of us favor in modeling complex systems.[4] Further, some of the most important factors affecting outcomes are often "exogenous," i.e., outside the explicit model itself. This is not simply for lack of imagination by modelers, but a reflection of reality. Things happen. Situations change. People do the unexpected. Small events can have consequences grossly out of proportion to their objective significance. And, of course, even some of the factors that "ought" to be endogenous and well understood may not even be recognized at a given time. All of these matters are examples of structural uncertainty. No one knows how to deal well with them, and some spinoff of Gödel's theorem would probably prove that it is impossible to do so perfectly. Still, experience tells us that doing something is often better than doing nothing.

[4] Many writers claim that complex adaptive systems lack causal relations or, as a minimum, that causality is hard to understand. My own view is that while the difficulties are certainly real, we should not and need not give up on causality so long as we include sufficient exploratory analysis, including over structural uncertainties. To use an example from a social situation, a policeman who intervenes in a family crisis may calm everyone down as intended or find himself attacked (perhaps literally) by all involved. How the situation develops can depend on exquisite details, such as the history of the fracas and the emotions in the seconds before the intervention. Reliable prediction is not in the cards, and some of the factors determining the outcome may not be known at the crucial point, but causality still exists, and with experience police officers are able to improve their odds of success when forced into such unwelcome and dangerous interventions.

My colleagues and I have made some advances in recent work on "massive scenario generation" (MSG). In this exploratory-analysis work we allowed for "randomness" in such things as the directionality and magnitude of causality, and for potential exogenous events with large consequences. We also modeled at a high enough level of abstraction that a vast range of detailed factors, which we could not hope to capture individually, were plausibly captured at least in the aggregate. Although very much a research exercise, the experience was both sobering and encouraging (Davis, Bankes, and Egner, forthcoming).
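The following sketch suggests how structural uncertainty can be folded into scenario generation. It is an invented toy, not the MSG implementation: the sign and magnitude of a causal link, and the occurrence of rare exogenous shocks, are treated as case variables alongside ordinary parameters.

```python
import random

def generate_scenario(rng):
    """Draw one scenario, including structural choices, not just parameters."""
    return {
        # ordinary parametric uncertainty
        "tension": rng.uniform(0.0, 1.0),
        # structural uncertainty: even the direction of this causal link
        # is uncertain (does mobilization deter, or provoke?)
        "mobilization_effect": rng.choice([-1.0, +1.0]) * rng.uniform(0.1, 0.5),
        # rare exogenous shock with outsized consequences
        "shock": rng.random() < 0.05,
    }

def escalation(scenario, mobilize):
    level = scenario["tension"]
    if mobilize:
        level += scenario["mobilization_effect"]
    if scenario["shock"]:
        level += 0.6
    return level

rng = random.Random(7)
scenarios = [generate_scenario(rng) for _ in range(10000)]

# Compare policies by where they fail across the scenario space,
# rather than by expected value under a single assumed structure.
for mobilize in (False, True):
    failures = sum(escalation(s, mobilize) > 1.0 for s in scenarios)
    print(f"mobilize={mobilize}: escalation exceeds threshold in "
          f"{failures} of {len(scenarios)} scenarios")
```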
3. SYSTEMS OF SYSTEMS AND THE CAS PERSPECTIVE

It is probably becoming apparent how all this relates to complex adaptive systems (CAS) and the design and development of systems of systems—and in more than a metaphorical sense. Let me move rather quickly to what I see as the major implications.

3.1 PRINCIPLES

When developing capabilities, including relevant hardware and software, to deal with complex adaptive systems, it is essential in my view to begin with attitudes markedly different from those that characterize a good deal of modern-day engineering. From a strategic perspective within a development, this attitude includes:

- Approaching the problem system on its own terms, rather than attempting to impose an engineer's concepts (e.g., resolving to deal with the unpleasant and squishy "people problems" and the near certainty of unexpected developments in the world).
- Looking to broaden the system construct, rather than to narrow the range of factors considered, the case space, and so on.
- Assuming from the outset that the SoS will probably include people, and that this is highly desirable because people are sometimes more creative, capable, or wise than our best machines (the opposite is also true). Thus, the challenge becomes one of designing the best man-machine systems, rather than one of designing the best closed systems.
- Recognizing that even the smartest of clients is unlikely to have a very good sense of technical "requirements" at the outset, nor even of what will prove to be needed functionality. Thus, the client's desires and wisdom should be accepted with great interest and attention as inputs, but "requirements" should not be established early.

Although the strategic view should arguably be expansive, patient, curious, and open, it will also be true that progress will occur only when finite interim problems are taken on, defined rigorously, and worked. To a substantial degree, all of us learn from doing and seeing. This said, it becomes very important to have a sufficiently good concept of the larger system problem so as to define interim steps that will truly be in the right direction. Yes, chasing some red herrings will probably prove necessary and ultimately useful, but interim approaches that are inherently fatally flawed with respect to the larger "real" problems should typically be avoided. When engineers began developing what became stealthy aircraft in the 1970s, they were wise enough to take an approach that addressed not just radar cross section but all aspects of signature, and that also anticipated countermeasures. In contrast, in the 1990s, when DoD decided to develop JWARS, it prohibited interactiveness—because of the wrongheaded notion that "analysis" required reproducibility, which in turn required closed models, and (implicitly) because of the assumption that computer models would be able to represent all the phenomena necessary.

Modularity, Building Blocks, and Tailoring. Continuing on this theme of somehow reconciling the need for a broad, expansive perspective with discrete interim steps, let me recall a point made earlier: the school solution for dealing with uncertainty is adaptiveness. How does one build that into an SoS? A crucial part of the answer is modularity. In the real world, of course, we often discover that achieving modularity is far more difficult than first-order thinking might suggest, and that underlying phenomena are intertwined (everything depends on everything). Nonetheless, if modularity is given sufficient priority, then much can be done. The interfaces will be much more complicated than one might hope (e.g., ten inputs to a module rather than two), but they can be defined nonetheless. The best modules may be larger than one might wish (i.e., incorporating components that one might naively think would be separate), but better this than no modularity at all.
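As a sketch of what this means in software terms (the tracker example is invented, not from the paper), the essence is a defined module boundary, perhaps a wider one than hoped for, behind which building blocks can be swapped or tailored without disturbing the rest of the system:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class SensorReport:
    # The interface is wider than first-order thinking might hope,
    # but it is definable, and that is what matters.
    position: tuple[float, float]
    confidence: float
    timestamp: float

class Tracker(Protocol):
    """The module boundary: any tracker building block must honor this."""
    def update(self, report: SensorReport) -> tuple[float, float]: ...

class SimpleAverager:
    """One building block: crude but dependable."""
    def __init__(self) -> None:
        self.reports: list[SensorReport] = []

    def update(self, report: SensorReport) -> tuple[float, float]:
        self.reports.append(report)
        n = len(self.reports)
        return (sum(r.position[0] for r in self.reports) / n,
                sum(r.position[1] for r in self.reports) / n)

class ConfidenceWeighted:
    """A tailored replacement, swapped in without touching the rest of the system."""
    def __init__(self) -> None:
        self.x = self.y = self.w = 0.0

    def update(self, report: SensorReport) -> tuple[float, float]:
        self.w += report.confidence
        self.x += report.confidence * report.position[0]
        self.y += report.confidence * report.position[1]
        return (self.x / self.w, self.y / self.w)

def latest_estimate(tracker: Tracker, reports: list[SensorReport]) -> tuple[float, float]:
    """The rest of the SoS sees only the Tracker interface."""
    estimate = (0.0, 0.0)
    for r in reports:
        estimate = tracker.update(r)
    return estimate

reports = [SensorReport((0.0, 0.0), 0.9, 0.0), SensorReport((2.0, 1.0), 0.3, 1.0)]
print(latest_estimate(SimpleAverager(), reports))
print(latest_estimate(ConfidenceWeighted(), reports))
```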
This type of thing should be familiar. Consider our personal computers. Even though computer manufacturers have great incentives to modularize, we cannot just replace any old "part" that we think of conceptually. Instead, we may have to replace a whole motherboard, or a whole laptop top (display, lid, etc.). Why? I can only assume that companies like Apple and Dell know what they are doing, especially since theirs is an exceedingly competitive business. Thus, what they have concluded constitute the best modules reflects not just engineering issues, but issues of fabrication, cost, and labor time. The bottom line, however, is that at the end of the day the computers are quite modular and that continuous improvements build on this.

Let me now use the related term, building blocks. Adaptation tends to be easy or hard depending on whether the "right" building blocks (i.e., essentially the right modules) have been designed in, on whether those building blocks are readily available, and on whether personnel know how to swap them in and out. This relates to practical matters such as inventory, trained personnel, and practice, practice, practice. One reason that practice is so important is that adaptations often require more than mere replacement of building blocks; often, some tailoring is needed. Whether that is easy or hard depends on the engineering, and whether it is feasible depends on the training of the people and the tools available in the field.

3.2 IMPLICATIONS FOR REQUIREMENTS, ANALYSIS, AND METRICS

3.2.1 Implications

If one buys into the ideas I have been reviewing, then it should also be clear that there are major implications for all aspects of development. The premium would be on features such as:

- Ability to accommodate new building-block modules.
- Ability to "plug in" to a wide variety of systems (publish-and-subscribe capabilities, rather than point-to-point connectivity; see the sketch below).
- Slack in physical space, bandwidth, heat tolerance, etc., so as to allow for new or changed components that are not yet identified or developed.

This premium would be reflected in requests for proposals, in contracts and criteria for assessing proposals, and so on.
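A minimal sketch of the second bullet's distinction follows, using an invented message-bus example. With publish-and-subscribe, components share topics rather than wiring, so systems added later can plug in without modifying existing publishers:

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """A tiny publish-subscribe bus: parties share topics, not wiring."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

bus = Bus()
# An existing consumer
bus.subscribe("track.updates", lambda m: print("fire control sees", m))
# A system added years later plugs in without changing the publisher
bus.subscribe("track.updates", lambda m: print("logistics planner sees", m))
bus.publish("track.updates", {"id": 17, "position": (31.2, 44.5)})
```

With point-to-point connectivity, adding the second consumer would have required changing the publisher; with publish-and-subscribe, slack for unforeseen components is built into the interface style itself.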
As for analysis, I believe that the implications for SoS are quite analogous to what I described earlier. We would want to design for FARness (flexibility, adaptiveness, and robustness), and to conceive and test alternatives one needs models and analytic plans suitable to that goal. This implies broad exploratory analysis, as well as detailed analysis. How does this relate to complex adaptive systems, a key element of this conference's theme? The answer, as I hope is already evident, is that the features of the especially troublesome SoS imply the need to forego traditional versions of requirements, specifications, and predictive analysis, and instead to design for FARness as discussed.

I mentioned at the outset that there was an aspect of moving full circle in this paper. When I first began working on what is now called exploratory analysis, my inspirations were (1) first-order analysis, such as has been so crucial in the development of theoretical physics and engineering, and (2) design engineering, in which one has a strong concept of the "design space" and an abiding desire to "cover" as much of that space as possible given the available resources and constraints. Although early aircraft designers did not have to deal with complex adaptive systems in the sense that phrase is understood today, they assuredly understood the need to go far beyond immediate expectations about what an aircraft would be used for, and to build in flexibility and growth potential. Similarly for ship designers and many other people whom we would associate with system engineering.

3.2.2 Obstacles

One of the lessons I have learned over time is that the skills that go with terms such as design, architecture, scoping, and first-order analysis are fairly rare. Moreover, it is a continuing lament of many who have looked into the matter that schools tend not to teach "design." Further, and more insidious, I have come to understand better than previously how organizational and social factors, as well as normal professional practice, are often the enemy. I will not elaborate on this here, but will mention briefly some of the expressions often heard from those resisting capabilities-based planning. I believe that you will quickly see analogues in the realm of system engineering.

Common Assertions That Constitute Part of the Problem

- "The contract and underlying DoD policy is that we will evaluate alternatives against standard planning scenarios, using standard models and agreed databases. If we have time, we may do some excursions, but we probably won't."
- "Simple analysis can't be trusted. The only results that will be accepted are those with Monster_Model."
- "Oh, things are much more complicated than any simple model could handle."

What has proven so distressing about the last claim is that people do not necessarily change their minds even when shown evidence. Figure 3 shows schematically some buildup curves for deploying forces and supplies to a future war. Many members of the relevant organizations insist that simple mobility models can't be any good because there are hundreds of ships, aircraft, bases, and routes, and any number of terrible complications in conducting a large-scale real-world deployment. The top, dashed line looks roughly like results from a detailed simulation for a planning case: lots of structure, as individual units or ships arrive (many more than shown here). The two light solid lines show results of a much simpler, back-of-the-envelope calculation with differing assumptions about when deployment begins relative to warning (at time 0). These results have no detailed structure. The last, dashed line is a smoothed version of an actual buildup, one in which warning was not used effectively and, for various reasons, the deployment was not as efficient as it was expected to be.

Any decent analyst would learn from such work that the simple model is far better for planning purposes, so long as one considers uncertainties. All the fine structure in the simulation results is purely artifactual: in particular simulation runs, particular aircraft and ships show up here and there a bit earlier or later depending on random minutiae. What matters strategically is that the deployment might look as good as the left simple-model curve, or as bad as the right simple-model curve, if merely the response time varied—a function of how quickly the President makes decisions amidst uncertainty. Further, a real deployment will show us some surprises. In the real-world case shown for illustration, airlift proceeded more efficiently than expected, while sealift ran into some problems. That is, some planning factors proved pessimistic, while others proved a bit optimistic.
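A sketch makes the argument tangible (the numbers are invented planning factors). The strategic content of a buildup curve reduces to a delivery rate and a start time; exploring the start time spans the band between the good and bad simple-model curves, whereas the fine structure of a detailed simulation merely decorates one member of that band:

```python
def simple_buildup(day, rate, start_day):
    """Back-of-the-envelope mobility model: tons delivered by a given day."""
    return max(0.0, rate * (day - start_day))

RATE = 2500.0       # tons/day, an illustrative planning factor
GOAL = 50000.0      # tons needed in theater

# What matters strategically: when does deployment start relative to warning?
for start_day in (0, 4, 8):   # prompt decision vs. delayed decision
    day_needed = next(d for d in range(0, 60)
                      if simple_buildup(d, RATE, start_day) >= GOAL)
    print(f"start at day {start_day}: {GOAL:,.0f} tons closed by day {day_needed}")
```

An excursion over the start day reproduces the spread between the left and right simple-model curves; no amount of per-ship detail changes that strategic picture.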
Other standard laments that I have heard from those resisting broad, exploratory analysis are:

- "Oh, we agree with you in principle, but we don't have the simple models that you mention, and the general wouldn't let us use them if we had them."
- "Oh, we tried doing simple modeling, but the results weren't very good."

A physicist or design engineer might think about using a better simple model, recognizing that simple does not mean simplistic and that it may take some time and effort to come up with the right simple model, but the big-model folks are quick to "prove" that only their detailed model will be sufficient.

Figure 3—Schematic Comparison of Simple and Detailed Models with Reality

To conclude, let me note some complaints about SoS work that I have heard:

What Resisters Say in the SoS Realm

- "Don't give me all that philosophy. What are the specs? Until we get specs, we can't do anything."
- "That GIG stuff is all very fine in theory, but we need the specifics of who will need to talk with whom, and in what format."
- "Simple models can't be trusted; we have to do it right."
- "Who knows how the system will be used in the future? We've designed it for the use cases and planning scenario that were specified."

Fortunately, these folks lose out eventually. We have myriad examples over time of how good system engineers have done a great job of planning under uncertainty. I think of the Navy's Aegis ships, which continue to take on new missions, new operating circumstances, and new generations of equipment; of the F-4 aircraft; of the DSP satellites; and of GPS navigation. In the commercial realm, I think of my trusty PowerBook computer (now a "MacBook Pro"), which has evolved dramatically over a period of five years or so.

Finally, a word about spiraling, evolution, and so on. What my personal experience, primarily in developing large and complex models (million-lines-of-code stuff), has taught me is the tension that exists between those who want to pedantically design the ultimate system and those who want to "get on with it" and just build something, from which presumably much will be learned. The cross-cutting principle, I believe, is that at every stage of development one needs design. I have seen intelligent rapid prototyping and dumb rapid prototyping, done by equally smart and talented people. What mattered was whether some time was spent up front—even if days rather than months—thinking about the design space in relatively simple terms.

3.2.3 The Challenge for Conference Participants

Those at this conference represent the cream of system engineering. I suspect that everyone in this room recognizes well the issues that I have raised and would be likely to argue that I have been merely describing some key elements of good system engineering generally. My assertion, however, is that normal practice is very different, and that if we are to conceive, design, build, and manage SoS well, it will take strong leadership by the best in the community—often in opposition to those who are comfortable with more normal practices and with the comfort levels provided by tight specifications and blinders.

BIBLIOGRAPHY

Alberts, David S., and Richard E. Hayes, Power to the Edge: Command and Control in the Information Age, Washington, D.C.: Command and Control Research Program, Department of Defense, 2003.

Davis, Paul K., ed., New Challenges for Defense Planning: Rethinking How Much Is Enough, Santa Monica, Calif.: RAND Corporation, 1994.

Davis, Paul K., Analytic Architecture for Capabilities-Based Planning, Mission-System Analysis, and Transformation, Santa Monica, Calif.: RAND Corporation, 2002.
Davis, Paul K., Steven C. Bankes, and Michael Egner, Enhancing Strategic Planning with Massive Scenario Generation: Theory and Experiments, Santa Monica, Calif.: RAND Corporation, TR-392-OSD, forthcoming.

Davis, Paul K., and James P. Kahan, Theory and Methods for Supporting High-Level Decision Making, Santa Monica, Calif.: RAND Corporation, TR-422-AF, forthcoming.

Kugler, Richard L., Policy Analysis in National Security Affairs: New Methods for a New Era, Washington, D.C.: National Defense University Press, 2006.

Lempert, Robert J., Steven W. Popper, and Steven C. Bankes, Shaping the Next One Hundred Years: New Methods for Quantitative Long-Term Policy Analysis, Santa Monica, Calif.: RAND Corporation, 2003.

National Research Council, Defense Modeling, Simulation, and Analysis: Meeting the Challenge, Washington, D.C.: National Academies Press, 2006.