Assessing NASA’s Safety Culture: The Limits and Possibilities of High-Reliability Theory

Arjen Boin
Louisiana State University

Paul Schulman
Mills College

Arjen Boin is the director of the Stephenson Disaster Management Institute and an associate professor in the Public Administration Institute at Louisiana State University. He writes about crisis management, public leadership, and institutional design. E-mail: [email protected]

Paul Schulman is a professor of government at Mills College in Oakland, California. He has done extensive research on high-reliability organizations and has written Large-Scale Policy Making (Elsevier, 1980) and, with Emery Roe, High Reliability Management (Stanford University Press). E-mail: [email protected]

After the demise of the space shuttle Columbia on February 1, 2003, the Columbia Accident Investigation Board sharply criticized NASA’s safety culture. Adopting the high-reliability organization as a benchmark, the board concluded that NASA did not possess the organizational characteristics that could have prevented this disaster. Furthermore, the board determined that high-reliability theory is “extremely useful in describing the culture that should exist in the human spaceflight organization.” In this article, we argue that this conclusion is based on a misreading and misapplication of high-reliability research. We conclude that in its human spaceflight programs, NASA has never been, nor could it be, a high-reliability organization. We propose an alternative framework to assess reliability and safety in what we refer to as reliability-seeking organizations.

In January 2001, the National Aeronautics and Space Administration (NASA) discovered a wiring problem in the solid rocket booster of the space shuttle Endeavor. The wire was “mission critical,” so NASA replaced it before launching the shuttle. But NASA did not take any chances: It inspected more than 6,000 similar connections and discovered that four were loose (Clarke 2006, 45). The thorough inspection may well have prevented a shuttle disaster. This mindfulness and the commitment to follow-through concerning this safety issue could be taken as indicators of a strong safety climate within NASA. It would confirm the many observations, academic and popular, with regard to NASA’s strong commitment to safety in its human spaceflight programs (Johnson 2002; McCurdy 1993, 2001; Murray and Cox 1989; Vaughan 1996).

Two years later, the space shuttle Columbia disintegrated above the southern skies of the United States. The subsequent inquiry into this disaster revealed that a piece of insulating foam (the size of a cooler) had come loose during the launch, then struck and damaged several tiles covering a panel door that protected the wing from the extreme heat that reentry into the earth’s atmosphere generates. The compromised defense at this single spot caused the demise of Columbia.

The Columbia Accident Investigation Board (CAIB) strongly criticized NASA’s safety culture. After discovering that the “foam problem” had a long history in the space shuttle program, the board asked how NASA could “have missed the signals that the foam was sending?” (CAIB 2003, 184). Moreover, the board learned that several NASA engineers had tried to warn NASA management of an impending disaster after the launch of the doomed shuttle, but the project managers in question had reportedly failed to act on these warnings.
Delving into the organizational causes of this disaster,
the board made extensive use of the body of insights
known as high-reliability theory (HRT). The board
“selected certain well-known traits” from HRT and
used these “as a yardstick to assess the Space Shuttle
Program” (CAIB 2003, 180).1 The board concluded
that NASA did not qualify as a “high-reliability organization” (HRO) and recommended an overhaul of
the organization to bring NASA to the coveted status
of an HRO.
In adopting the HRO model as a benchmark
for past and future safety performance, CAIB
tapped into a wider trend. It is only slightly hyperbolic to describe the quest for high-reliability
cultures in large-scale organizations—in energy,
medical, and military circles—in terms of a Holy
Grail (Bourrier 2001; Reason 1997; Roberts et al.,
forthcoming; Weick and Sutcliffe 2001). An entire
consultancy industry has sprouted up around the
notion that public and private organizations can
be made more reliable by adopting the characteristics
of HROs.
All this raises starkly the question, what, exactly, does
high-reliability theory entail? Does this theory explain
organizational disasters? Does it provide a tool for
assessment? Does it offer a set of prescriptions that can
help organizational leaders design their organizations
into HROs? If so, has HRT been applied in real-life
cases before? If NASA is to be reformed on the basis
of a theoretical assessment, some assessment of the
theory itself seems to be in order.
Interestingly, and importantly, HRT does not offer
clear-cut answers to these critical questions (cf. LaPorte 1994, 1996, 2006). The small group of high-reliability theorists (as they have come to be known)
has never claimed that HRT could provide these
answers, nor has the theory been developed to this
degree by others. This is not to say that HRT is irrelevant. In this article, we argue that HRT contains
much that is potentially useful, but its application to
evaluate the organizational performance of non-HROs requires a great deal of further research. We
offer a way forward to fulfill this potential.
We begin by briefly revisiting the CAIB report and
outlining the main precepts of high-reliability theory.
Building on this overview, we argue that NASA, in its
human spaceflight program, never did adopt, nor
could it ever have adopted, the characteristics of an
HRO.2 We suggest that NASA is better understood as
a public organization that has to serve multiple and
conflicting aims in a politically volatile environment
(Wilson 1989). We offer the beginnings of an alternative assessment model, which allows us to inspect for
threats to reliability in those organizations that seek
reliability but by their nature cannot be HROs.
The CAIB Report: A Summary of Findings
The CAIB presented its findings in remarkably speedy
fashion, within seven months of the Columbia’s demise.3 The board uncovered the direct technical cause
of the disaster, the hard-hitting foam. It then took its
analysis one step further, because the board subscribed
to the view “that NASA’s organizational culture had as
much to do with this accident as foam did” (CAIB
2003, 12). The board correctly noted that many accident investigations make the mistake of defining
causes in terms of technical flaws and individual failures (CAIB 2003, 77). As the board did not want to
commit a similar error, it set out to discover the organizational causes of this accident.4
The board arrived at some far-reaching conclusions.
According to the CAIB, NASA did not have in place
effective checks and balances between technical and
managerial priorities, did not have an independent
safety program, and had not demonstrated the characteristics of a learning organization. The board found
that the very same factors that had caused the Challenger disaster 17 years earlier, on January 28, 1986,
were at work in the Columbia tragedy (Rogers Commission 1986). Let us briefly revisit the main findings.
Acceptance of escalated risk. The Rogers Commission (1986) had found that NASA operated with a
deeply flawed risk philosophy. This philosophy prevented NASA from properly investigating anomalies
that emerged during previous shuttle flights. One
member of the Rogers Commission (officially, the
Presidential Commission on the Space Shuttle Challenger Accident), Nobel laureate Richard Feynman,
described the core of the problem (as he saw it) in an
official appendix to the final report:
The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because
of this, obvious weaknesses are accepted again,
sometimes without a sufficiently serious attempt
to remedy them, or to delay a flight because of
their continued presence. (Rogers Commission
1986, 1, appendix F; emphasis added)
The CAIB found the very same philosophy at work:
“[W]ith no engineering analysis, Shuttle managers used
past success as a justification for future flights” (CAIB
2003, 126). This explains, according to the CAIB, why
NASA “ignored” the shedding of foam, which had
occurred during most of the previous shuttle launches.
Flawed decision making. The Rogers Commission
had criticized NASA’s decision-making system, which
“did not flag rising doubts” among the workforce with
regard to the safety of the shuttle. On the eve of the
Challenger launch, engineers at Thiokol (the makers of
the O-rings) suggested that cold temperatures could
undermine the effectiveness of the O-rings. After
several rounds of discussion, NASA management
decided to proceed with the launch. Similar doubts
were raised and dismissed before Columbia’s fateful
return flight. Several engineers alerted NASA management to the possibility of serious damage to the thermal protection system (after watching launch videos
and photographs). After several rounds of consultation, it was decided not to pursue further investigations (such as photographing the shuttle in space).
Such an investigation, the CAIB report asserts, could
have initiated a life-saving operation.
Broken safety culture. Both commissions were
deeply critical of NASA’s safety culture. The Rogers
Commission noted that NASA had “lost” its safety
program; the CAIB speaks of “a broken safety culture.” In her seminal analysis of the Challenger disaster, Diane Vaughan (1996) identified NASA’s
susceptibility to “schedule pressure” as a factor that
induced NASA to overlook or downplay safety concerns. In the case of Columbia, the CAIB observed
that the launch date was tightly coupled to the
completion schedule of the International Space Station. NASA had to meet these deadlines, the CAIB
argues, because failure to do so would undercut its
legitimacy (and funding).5
Dealing with Obvious Weaknesses
The common thread in the CAIB findings is NASA’s lost ability to recognize and act on what, in hindsight, seem “obvious weaknesses” (cf. Rogers Commission, appendix F, 1). According to the CAIB, the younger NASA of the Apollo years had possessed the right safety culture. Ignoring the 1967 fire and the near miss with Apollo 13 (immortalized in the blockbuster movie), the report describes how NASA had lost its way somewhere between the moon landing and the new shuttle. The successes of the past, the report tells us, had generated a culture of complacency, even hubris. NASA had become an arrogant organization that believed it could do anything (cf. Starbuck and Milliken 1988). “The Apollo era created at NASA an exceptional ‘can-do’ culture marked by tenacity in the face of seemingly impossible challenges” (CAIB 2003, 101). The Apollo moon landing “helped reinforce the NASA staff’s faith in their organizational culture.” However, the “continuing image of NASA as a ‘perfect place’ … left NASA employees unable to recognize that NASA never had been, and still was not, perfect.”6

The CAIB highlighted NASA’s alleged shortcomings by contrasting the space agency with two supposed high-reliability organizations: the Navy Submarine and Reactor Safety Programs and the Aerospace Corporation (CAIB 2003, 182–84). These organizations, according to the CAIB, are “examples of organizations that have invested in redundant technical authorities and processes to become highly reliable” (CAIB 2003, 184). The CAIB report notes “there are effective ways to minimize risk and limit the number of accidents” (CAIB 2003, 182)—the board clearly judged that NASA had not done enough to adopt and implement those ways. The high-reliability organization thus became an explicit model for explaining and assessing NASA’s safety culture. The underlying hypothesis is clear: If NASA had been an HRO, the shuttles would not have met their disastrous fate. How tenable is this hypothesis?

Revisiting High-Reliability Theory: An Assessment of Findings and Limits
High-reliability theory began with a small group of researchers studying a distinct and special class of organizations—those charged with the management of hazardous but essential technical systems (LaPorte and Consolini 1991; Roberts 1993; Rochlin 1996; Schulman 1993). Failure in these organizations could mean the loss of critical capacity as well as thousands of lives both within and outside the organization. The term “high-reliability organization” was coined to denote those organizations that had successfully avoided such failure while providing operational capabilities under a full range of environmental conditions (which, as of this writing, most of these designated HROs have managed to do).

What makes HROs special is that they do not treat reliability as a probabilistic property that can be traded at the margins for other organizational values such as efficiency or market competitiveness. An HRO has identified a specific set of events that must be deterministically precluded; they must simply never happen. They must be prevented not by technological design alone, but by organizational strategy and management.
This is no easy task. In his landmark study of organizations that operate dangerous technologies, Charles
Perrow (1999) explained how two features—complexity and tight coupling—will eventually induce and
propagate failure in ways that are unfathomable by
operators in real time (cf. Turner 1978). Complex and
tightly coupled technologies (think of nuclear power
plants or information technology systems) are accidents waiting to happen. According to Perrow, their
occurrence should be considered “normal accidents”
with huge adverse potential.
This is what makes HROs such a fascinating research
object: They somehow seem to avoid the unavoidable.
This finding intrigues researchers and enthuses practitioners in fields such as aviation, chemical processing,
and medicine.
High-reliability theorists set out to investigate the
secret of HRO success. They engaged in individual
case studies of nuclear aircraft carriers, nuclear power
plants, and air traffic control centers. Two important
findings surfaced. First, the researchers found that
once a threat to safety emerges, however faint or distant, an HRO immediately “reorders” and reorganizes
to deal with that threat (LaPorte 2006). Safety is the
chief value against which all decisions, practices, incentives, and ideas are assessed—and remains so under
all circumstances.
Second, they discovered that HROs organize in remarkably similar and seemingly effective ways to serve
and service this value.7 The distinctive features of these
organizations, as reported by high-reliability researchers, include the following:
● High technical competence throughout the
organization
● A constant, widespread search for improvement
across many dimensions of reliability
● A careful analysis of core events that must be
precluded from happening
● An analyzed set of “precursor” conditions that
would lead to a precluded event, as well as a clear
demarcation between these and conditions that lie
outside prior analysis
● An elaborate and evolving set of procedures and
practices, closely linked to ongoing analysis, which
are directed toward avoiding precursor conditions
● A formal structure of roles, responsibilities, and
reporting relationships that can be transformed under conditions of emergency or stress into a decentralized, team-based approach to problem solving
● A “culture of reliability” that distributes and
instills the values of care and caution, respect for
procedures, attentiveness, and individual responsibility for the promotion of safety among members
throughout the organization
Organization theory suggests that, in reality, such an
organization cannot take on all of these characteristics
(LaPorte 2006; LaPorte and Consolini 1991). Overwhelming evidence and dominant theoretical perspectives in the study of public and private organizations
assert that the perfect operation of complex and dangerous technology is beyond the capacity of humans,
given their inherent imperfections and the predominance of trial-and-error learning in nearly all human
undertakings (Hood 1976; Perrow 1986; Reason
1997; Simon 1997). Further, these same theories warn
that it would be incredibly hard to build these characteristics, which are central to the development of
highly reliable operations, into an organization
(LaPorte and Consolini 1991; Rochlin 1996).
An HRO can develop these special features because external support, constraints, and regulations allow for it. Most public organizations cannot afford to prioritize safety over all other values; they must serve multiple, mutually contradicting values (Wilson 1989). Thus, HROs typically exist in closely regulated environments that force them to take reliability seriously but also shield them from full exposure to the market and other forms of environmental competition. Avoiding accidents or critical failure is a requirement not only for societal safety and security, but also for continued acceptance and possibly survival in the unforgiving political and regulatory “niche” these organizations are forced to occupy. In fact, it would be considered illegitimate to trade safety for other values in pursuit of market or other competitive advantages.

The Limits of High-Reliability Research
The research on HROs has not been without controversy.8 Perrow (1994) dismissed HRT findings by arguing that organizations charged with the management of complex and tightly coupled technical systems (the type usually studied in reliability research) can never hope to transcend the intrinsic vulnerability to a highly interactive form of degradation. His normal accident theory gives reason to believe that no organizational effort can alter the risks embedded in the technical cores of these systems (Perrow 1999). Quite the contrary: Organizational interventions (such as centralization or adding redundancy) are likely to escalate the risks inherent in complex and tightly coupled technologies. In this perspective, the very idea of “high-reliability” organizations that successfully exploit dangerous technologies is at best a temporary illusion (Perrow 1994).

This controversy, in its most extreme form, centers around an assertion that cannot actually be disproved because of its tautological nature. No amount of good performance can falsify the theory of normal accidents because it can always be said that an organization is only as reliable as the first catastrophic failure that lies ahead, not the many successful operations that lie behind. Yet ironically, this is precisely the perspective that many managers of HROs share about their organizations. They are constantly seeking improvement because they are “running scared” from the accident ahead, not complacent about the performance records compiled in the past. This prospective approach to reliability is a distinguishing feature that energizes many of the extraordinary efforts undertaken within HROs.

The high-reliability theory/normal accident theory controversy aside, it is clear that HRT has limits both in terms of explanation and prescription. High-reliability researchers readily acknowledge that they have studied a fairly limited number of individual organizations at what amounts to a single snapshot in time.9 Whether features of high-reliability organizations can persist throughout the lifecycle of an organization is as yet unknown. Moreover, we only know a limited amount about the origins of these characteristics (LaPorte 2006): Are they imposed by regulatory environments, the outcome of institutional evolution, or perhaps the product of clever leadership?

Questions also surround the relation between organizational characteristics and reliability. High reliability has been taken as a defining characteristic of the special organizations selected for study by HRO researchers. However, the descriptive features uncovered in these organizations have not been conclusively tied to the reliability of their performance. High-reliability theory thus stands not as a theory of causation regarding high reliability but rather as a careful description of a special set of organizations.

Even if HROs understand which critical events must be avoided, it remains unclear how they evolve the
capacity to avoid these events. Trial-and-error learning—the most conventional mode of organizational
learning—is sharply constrained, particularly in relation to those core events that they are trying to preclude.10 Moreover, learning is impeded by the problem
of few cases and many variables: Because HROs experience few, if any, major failures (or they would not
survive as HROs), it is difficult to understand which
of the many variables they manage can cause them.
HROs could conceivably learn from other organizations, but that would require a fair amount of (near)
disasters somewhere else (and somewhere conveniently far away). If this is true, learners automatically
become laggards.
All this makes HRT-based prescription a rather
sketchy enterprise, well beyond the arguments of
HRT itself. It remains for future researchers to identify which subset of properties is necessary or sufficient to produce high reliability and to determine
which variables and in what degree might contribute
to higher and lower reliability among a wider variety
of organizations. We will now consider in particular
why HRT does not provide an adequate framework
for assessing NASA’s safety practices. The reason is
simple: NASA never was, nor could it ever have been,
an HRO.
Why NASA Has Never Been a High-Reliability Organization
In its assessment of NASA’s safety culture, the CAIB
adopted the characteristics of the ideal-typical HRO
as benchmarks.11 It measured NASA’s shortcomings
against the way in which HROs reportedly organize in
the face of potential catastrophe. The board quite
understandably wondered why NASA could not operate as, for instance, the Navy Submarine and Reactor
Safety Programs had done.
We argue that NASA never has been an HRO. More
importantly, NASA could never have become such an
organization, no matter how hard it tried to organize
toward a “precluded-event” standard for reliability.
Therefore, to judge NASA by these standards is both
unfair and counterproductive.
The historic backdrop against which the agency was
initiated made it impossible for reliability and safety
to become overriding values. NASA was formed in a
white-hot political environment. Space exploration
had become a focal point of Cold War competition
between the United States and the Soviet Union after
the successful flight of the Russian Sputnik (Logsdon
1976). The formation of NASA was a consolidation of
space programs under way in several agencies, notably
the U.S. Air Force, Navy, and Army. This consolidation was one way of addressing the implicit scale
requirements associated with manned spaceflight
(Schulman 1980). So, too, was the galvanizing
national commitment made by President John F.
Kennedy in 1961 of “landing a man on the
moon by the end of the decade and returning him
safely to earth.”
While Kennedy’s commitment included the word
“safely,” safety was only one part of the initial NASA
mission. The most important part of the lunar landing
commitment was that the goal, and its intermediate
milestones, be achieved and achieved on time. In this
sense, NASA was born into an environment of schedule pressure—inescapable and immensely public. This
pressure—absent in the environment of HROs—
would dog NASA through the years.
NASA’s mission commitment was thus something
quite different from the commitment to operational
reliability of an HRO. A public dread surrounds the
events that an HRO is trying to preclude—be they
accidents that release nuclear materials, large-scale
electrical blackouts, or collisions between large passenger jets. These events threaten not just operators or
members of the organization but potentially large
segments of the public as well. A general sense of
public vulnerability is associated with these events.
No similarly dreaded events constrained the exploration of space. No set of precluded events was imposed
on NASA, which, in turn, would have required HRO
characteristics to develop in the organization. The loss
of a crew of astronauts in 1967 saddened but did not
threaten the general population; it certainly did not
cause NASA to miss the 1969 deadline. The loss of
personnel in the testing of experimental aircraft was,
in fact, not an unexpected occurrence in aeronautics
(the first astronauts were test pilots, a special breed of
fast men living dangerously, as portrayed in Tom
Wolfe’s The Right Stuff).
This is not to say that the safety of the crew was no
issue for NASA’s engineers. Quite the contrary. The
designers of the Apollo spacecraft worked closely with
them and thus knew well the men who were to fly
their contraptions. The initial design phases were
informed by extreme care and a heavy emphasis on
testing all the parts that made up the experimental
spacecraft. If the safety of the crew had been the sole
concern of NASA’s engineers, the space agency could
conceivably have developed into an HRO.
But unlike HROs, which have a clearly focused safety
mission that is built around a repetitive production
process and relatively stable technology, NASA’s mission has always been one of cumulatively advancing
spaceflight technology and capability (Johnson 2002;
Logsdon 1999; Murray and Cox 1989). Nothing
about NASA’s human spaceflight program has been
repetitive or routine. Multiple launches of Saturn
rockets in the Apollo project each represented an
evolving technology, each rocket a custom-built system. They were not assembly-line copies that had
been standardized and debugged over production runs
in the thousands.
The shuttle is one of the world’s most complex machines, which is not fully understood in either its
design or production aspects (CAIB 2003). After
more than 120 missions in nearly three decades, the
shuttle still delivers surprises. Further, as its components age, the shuttle presents NASA engineers and
technicians with new challenges. Each shuttle mission
is hardly routine—there is much to learn cumulatively
with each one.
The incomplete knowledge base and the unruly nature
of space technology force NASA to be a research and
development organization, which makes heavy use of
experimental design and trial-and-error learning. Each
launch is a rationally designed and carefully orchestrated experiment. Each successful return is considered a provisional confirmation of the “null
hypothesis” that asserts the designed contraption can
fly (cf. Petroski 1992).
In this design philosophy, tragedy is the inevitable price of progress. Tragic failure came when Apollo 1 astronauts Gus Grissom, Ed White, and Roger Chaffee (the original crew for the moon landing) perished in a fire during a capsule test at Cape Canaveral. The disaster revealed many design failures that were subsequently remedied. Within NASA, the 1969 lunar landing was considered a validation of its institutionalized way of spacecraft development. While the general public seemed to accept that tragedy as an unfortunate accident, times have changed. Shuttle disasters are now generally considered avoidable failures.

Trial-and-Error Learning in a Politically Charged Environment
The development of space technology is fraught with risk. Only frequent missions can enhance a complete understanding of this relatively new and balky technology. A focus solely on safety and reliability would sharply limit the number of missions, which would make technological progress, including building a full knowledge base about its core technology, arduously slow.

The political niche occupied by NASA since its creation, including the political coalitions underlying its mission commitment and funding, has never supported a glacial, no-risk developmental pace. NASA must show periodic progress by flying its contraptions to justify the huge budgets allocated to the space agency. This was first impressed upon NASA in 1964, after NASA administrator James Webb realized that progress was too slow. Webb brought in Dr. George Mueller, who subsequently terminated the practice of endless testing, imposing the more practical yet rationally sound philosophy of all-up testing (Johnson 2002; Logsdon 1999; McCurdy 1993; Murray and Cox 1989). This philosophy prescribes that once rigorous engineering criteria have been met, only actual flight can validate the design (cf. Petroski 1992).

The apparent success of this philosophy fueled expectations with regard to the speedy development of new space technology. From the moment it left the design table, NASA has been under pressure to treat the shuttle as if it were a routine transportation system (CAIB 2003). Rapid turnaround was a high priority for original client agencies such as the Defense Department, which depended on NASA for its satellite launching capabilities. Research communities depended on the shuttle for projects such as the Hubble space telescope and other space exploration projects. Political rationales forced NASA to complete the International Space Station and have led NASA to fly senators and, with tragic results, a teacher into space.

Over time, however, NASA’s political environment has become increasingly sensitive to the loss of astronauts, certainly when such tragedies transpire in the glaring lights of the media. A shuttle failure is no longer mourned and accepted as the price for progress toward that elusive goal of a reliable space transportation system. Today, NASA’s environment scrutinizes the paths toward disaster, identifying “preventable” and thus condemnable errors, with little or no empathy for the plight of the organization and its members.

NASA’s political and societal environment, in short, has placed the agency in a catch-22 situation. It will not support a rapid and risky shuttle flight schedule, but it does expect spectacular results. Stakeholders expect NASA to prioritize safety, but they do not accept the costs and delays that would guarantee it.

This means that NASA cannot strive to become an HRO unless its political and societal environment experiences a major value shift. Those values would have to embrace, among other things, steeply higher costs associated with continuous and major redesigns of space vehicles, as well as the likelihood, at least in the near term, of far fewer flights. In other words, a research and development organization such as NASA cannot develop HRO characteristics because of the political environment in which it exists.
How to Assess Reliability-Seeking
Organizations
Even if NASA cannot become an HRO, we expect
NASA at least to seek reliability. Given the combination of national interests, individual risks, and huge
spending, politicians and taxpayers deserve a way of
assessing how well NASA is performing. More generally, it is important to develop standards that can be
applied to organizations such as NASA, which have to
juggle production or time pressures, substantial technical uncertainties, safety concerns, efficiency concerns, and media scrutiny. Any tool of assessment
should take all of these imposed values into account.
Based on our reading of organization theory, public
administration research, the literature on organizational crises, and the findings of high-reliability theorists, we propose a preliminary framework for
assessing a large-scale public research and development organization that pursues the development of
risky technology within full view of the general public. These assessment criteria are by no means complete or definitive. They provide a starting point for
evaluating the commitment of reliability-seeking
organizations such as NASA. They broaden the inquiry from pure safety-related questions to include the
institutional context in which reliability challenges
must be dealt with. They offer a way to assess how the
agency—from its executive leaders down to the work
floor—balances safety against other values.
This framework is based on the premise that spaceflight technology is inherently hazardous to astronauts, to work crews, and to bystanders. Therefore,
safety should be a core value of the program, even if it
cannot be the sole, overriding value informing NASA’s
organizational processes. We accept that reliability
must always be considered a “precarious value” in its
operation (Clark 1956). Reliability and safety must be
actively managed and reinforced in relation to crosscutting political pressures and organizational objectives. With these premises in mind, we suggest three
analytical dimensions against which reliability-seeking
organizations should be assessed.
A Coherent Approach to Safety
The first dimension pertains to the operating philosophy
that governs the value trade-offs inherent in this type
of public organization (cf. Selznick 1957). This dimension prompts assessors to consider whether the organization has in place a clearly formulated and widely
shared approach that helps employees negotiate
the safety–reliability tensions that punctuate the development and implementation phases of a new and risky
design trajectory. The presence of such an approach
furthers consistency, eases communication, and nurtures
coordination, which, in turn, increase the likelihood of
a responsible design effort that minimizes risk. More
importantly, for our purposes, it relays whether the
organization is actively and intelligently seeking reliability (whether it achieves it is another matter).
It is clear that NASA has always taken the search for
reliability very seriously (Logsdon 1999; Swanson
2002; Vaughan 1996). Over time, NASA developed a
well-defined way to assess safety concerns and weigh
them against political and societal expectations
(Vaughan 1996). This approach of “sound engineering,”
which has been informed and strengthened both by
heroic success and tragic failure, asserts that the combination of top-notch design and experiential learning
marks the way toward eventual success. It accepts that
even the most rational plans can be laid to waste by the
quirks and hardships of the space environment.
The NASA approach to safety prescribes that decisions
must be made on the basis of hard science only (no
room exists for gut feelings). Protocols and procedures
guide much of the decision-making process (Vaughan
1996). But reliability frequently comes down to single,
real-time decisions in individual cases—to launch or
not to launch is the recurring question. The NASA philosophy offers its managers a way to balance, in real time, safety concerns with other organizational
and mission values. NASA clings to its safety approach, but it accepts that it is not perfect. Periodic
failure is not considered the outcome of a flawed
philosophy but a fateful materialization of the
ever-existing risk that comes with the space territory.
Rather than assessing NASA’s safety approach against
absolute reliability norms used by HROs, one should
assess it against alternative approaches. Here we may
note that a workable alternative to NASA’s heavily
criticized safety approach has yet to emerge.
Searching for Failure: A Real-Time Reliability
Capacity
The second dimension focuses our attention on the
mechanisms that have been introduced to minimize
safety risks. The underlying premise holds that safety
is the outcome of an error-focused process. It is not
the valuation of safety per se, but rather the unwillingness to tolerate error that drives the pursuit of high
reliability. All else being equal, the more people in an
organization who are concerned about the misidentifications, the misspecifications, and the misunderstandings that can lead to potential errors, the higher the
reliability that organization can hope to achieve
(Schulman 2005). From this we argue that the continual search for error in day-to-day operations should
be a core organizational process (Landau 1969; Landau and Chisholm 1995; Weick and Sutcliffe 2001).
In NASA, the detection of critical error requires
real-time capacity on the part of individuals and
teams to read signals and make the right decision at a
critical time. As this real-time appraisal and decision
making is crucial to safety, it is important to develop
standards for the soundness of this process. A variety
of organizational studies, including studies of HROs,
offer some that appear particularly relevant.12
The first standard involves avoiding organizational
features or practices that would directly contradict the
requirement for error detection. Because the potential
for error or surprise exists in many organizational
activities, from mission planning to hardware and
software development to maintenance and mission
support activities, information that could constitute
error signals must be widely available through communication nets that can cut across departments and hierarchical levels. Communication barriers or blockages can
pose a threat to feedback, evidence accumulation, and
the sharing of cautionary concerns.
A realistic evaluation of NASA’s safety system
would start with an assessment of how such a largescale organization can share information without
getting bogged down in a sea of data generated by
thousands of employees. We know it is often clear
only in hindsight what information constitutes a
critical “signal” and what is simply “noise.”13 A
realistic and useful reliability assessment must recognize this built-in organizational dilemma and
establish what can be reasonably expected in the
way of feedback. The standard should not be every
possible piece of information available to every
organizational member; the organization should
have evolved a strategy so that information of high
expected value regarding potential error (in terms of potential consequences weighted by their likelihood)
can be available to key decision makers prior to the
point of real-time critical decisions.
Organizational studies remind us that the reporting of
error or concerns about potential errors should be
encouraged, or at least not subject to sanction or
punishment. Organizations that punish the reporting
of error can expect errors to be covered up or underreported, which would certainly reduce the reliability
they hope to attain (Michael 1973; Tamuz 2001). A
realistic assessment would consider whether the organization has removed significant barriers for dissident
employees to speak up. It would also consider whether
the organization has done enough to encourage people to step forward. One such assessment is found in
Vaughan’s (1996) analysis of the Challenger disaster, in
which she concludes that all engineers within NASA
had a real opportunity to bring doubts to the table
(provided these doubts were expressed in the concepts
and the rationales of “sound engineering”).
Another standard of reliability is a reasonably “distributed ability” to act in response to error: to adjust or
modify an error-prone organizational practice, correct
errors in technical designs, or halt a critical process if
errors are suspected. This distribution of action or
veto points does not have to be as widely distributed
as in the Japanese factory in which any assembly line
worker could stop the line, but it does have to extend
beyond top managers and probably, given past cases,
beyond program heads. A realistic analysis would
consider whether the distribution of veto points (perhaps in the form of multiple sign-offs required within
departments) has penetrated deeply enough without
paralyzing the organization’s pursuit of other core
values.
Beyond searching for contradictions between these
requirements and organizational practices, a reliability
assessment should also scan for contradictions in logic
that might appear in reliability perspectives and analyses themselves. One such contradiction actually did
appear in NASA. It was evident in the widely diverging failure probability estimates reportedly held by top
managers and shuttle project engineers prior to the
Challenger disaster (Vaughan 1996). This disparity has
been reported in other organizations as well (Hutter
2005). Contradictory risk assessments cannot all be
right, and organizations that buffer such contradictions face a larger risk of error in their approach to
managing for higher reliability.
Another logical contradiction can develop between
prospective and retrospective orientations toward
reliability. This can be compounded by an asymmetrical treatment of formal and experiential knowledge in
maintaining each orientation. NASA did in fact experience trouble with its assessment of error reports,
insofar as it has traditionally evaluated them against standards of “sound engineering,” which tend to undervalue “intuitive” (e.g., experiential) concerns. When a
member of an organization expresses a “gut feeling” or
concern for the reliability or safety of a system, others
may insist that these concerns be expressed in terms of
a formal failure analysis, which places the burden of
proof on those with concerns to show in a specific
model the ways (and probabilities) in which a failure could occur. This approach does not do justice to the
experiential or tacit knowledge base from which a
worrisome pattern might be detected or a failure
scenario imagined (Weick and Sutcliffe 2001).
It is hard to bridge these two modes of assessment
(Dunbar and Garud 2005). But while it has discounted experiential or tacit knowledge concerns in
assessing prospective error potential in the shuttle,
NASA has traditionally relied heavily on past operational experience in retrospectively assessing shuttle
reliability. This contradictory orientation—requiring
failure prospects to be formally modeled but accepting
a tacit, retrospective confirmation of reliability—in
NASA’s treatment of safety concerns about the shuttle
Columbia after its tile strike during the fateful launch
in 2003 has understandably drawn much criticism.
Having such a contradiction at the heart of its perspective on reliability has proven to be a serious impediment to the detection of error (Dunbar and
Garud 2005).
An additional organizational practice to be assessed in
connection with the pursuit of higher reliability in an
organization such as NASA is the generation and propagation of cumulative knowledge founded on error.
Whereas high-reliability organizations may have
sharply curtailed opportunities for trial-and-error
learning, reliability-seeking organizations should
evidence a commitment to learning all that can be
learned from errors, however regrettable, and translating that learning into an ever more extensive knowledge base for the organization transmitted to
successive generations of its members. Careful study
of errors to glean potential reliability improvements
should be a norm throughout the organization. While
the organization must move on and address its other
core operational values, there should be a resistance to
premature closure in error investigations before undertaking some collaborative root-cause analysis involving some outside perspectives.
Evidence of cumulative learning can also be found
in the treatment of organizational procedures and
the process of procedure writing and modification.
Procedures should be taken seriously throughout
the organization as a living documentation of the
knowledge base of the organization. They should be
consistently corrected or enhanced in light of experience and should be “owned” not just by top managers but also by employees down to the shop level.
Members of the organization should understand the
logic and purpose of a procedure and not regard it
simply as a prescription to be mindlessly followed.
Many of these error-focused standards can indeed be observed in HROs. NASA must pursue them within a far less supportive environment. HROs operate within a framework of settled knowledge founded on long operational experience and prior formal analysis. In a nuclear power plant, for instance, operating “outside of analysis” is a regulatory violation.14 Yet NASA, given the unsettled nature of its technology and the incomplete knowledge base governing its operations, must operate in key respects outside of analysis—an invitation to error. Given these limitations, it is important that standards for error detection be taken seriously, even when other organizational values are prominent.

Preserving Institutional Integrity
The third dimension pertains to what Philip Selznick
(1957) referred to as the institutional integrity of the
organization. This dimension directs us to consider
how an organization balances its established way of
working against the shifting demands imposed on the
organization by its stakeholders. An organization’s way
of working typically is the result of trial-and-error
learning, punctuated by success and failure. Over
time, as path dependency theorists remind us (Pierson
2004), established routines and procedures may well
become ends in themselves. The organization then
becomes protective of its way of working, defending
against outside critics by denial or overpromising.
NASA has not performed well on this dimension
since the early 1970s. Whereas NASA enjoyed high
levels of support during the famed Apollo years, it was
an unstable support, shifting from euphoria after a
successful manned flight to a loss of public interest
and, ultimately, to concern about the costs of space
exploration relative to other pressing domestic policy
demands. After the moon landing, societal and political support for highly ambitious and expensive space
missions plummeted.
Yet NASA felt compelled to keep its human spaceflight program alive. The search for new projects that
would capture the popular imagination—a new
Apollo adventure—ran into budgetary constraints and
political hesitation (President Nixon slashed the budget). Rather than adapting to this new reality by scaling down ambitions, NASA overpromised and
oversold the reliability of its technology. For political
reasons, the shuttle project was presented as a highly
reliable, routine near-space transportation system
(even if space shuttle missions never became routine,
nor were they treated as such) (Vaughan 1996; cf.
CAIB 2003). According to Vaughan (1996), this
pursuit of goals that were just out of reach generated
pressures on the organization’s safety culture.
The explosion of Challenger stripped NASA of whatever mythical status it had retained. The empty promise of a reliable and efficient shuttle transportation system would become a key factor in NASA’s diminishing status. The technology of the shuttle had never been settled such that it could allow the routinization of flight. At the same time, there was no galvanizing goal such as the lunar landing, the progression toward which could validate the failures in the development of this technology. As a result, there was no support for major delays or expenditures that were reliability and not production focused.
Caught in the middle of an unstable environment in
which there is little tolerance for either risk or production and scheduling delays, NASA has become a condemnable organization—it is being judged against
standards it is in no position to pursue or achieve. This
plight is, of course, shared by many public organizations and creates a set of leadership challenges that may
be impossible to fulfill (Selznick 1957; Wilson 1989).
Yet where some public organizations make do (Hargrove and Glidewell 1990), it appears that NASA was
less adept at coping with its “impossible” predicament.
Conclusion: Toward a Realistic Assessment of Reliability-Seeking Organizations
If, as we argue, NASA is not a high-reliability organization in the sense described by HRO theorists, some important implications follow. First, it is both an analytic and a practical error to assess NASA—an agency that is expected to experiment and innovate—by the standards of an HRO (in which experimentation is strongly discouraged). To do so is misleading with respect to the important differences in the mission, technology, and environment of NASA relative to HROs (LaPorte 2006).

It is also unhelpful to evaluate NASA by standards that it is in no position to reach. Such evaluations lead to inappropriate “reforms” and punishments. The irony is that these could transform NASA into the opposite of an institutionalized HRO—that is, a permanently failing organization (cf. Meyer and Zucker 1989).

We may well wonder whether the recommendations of the CAIB report would help NASA become one if it could. In HROs, reliability is achieved through an ever-anxious concern with core organizational processes. It’s about awareness, making critical decisions, sharing information, puzzling, worrying, and acting. The CAIB recommendations, however, are of a structural nature. They impose new bureaucratic layers rather than designing intelligent processes. They impose new standards (“become a learning organization”) while ignoring the imposed standards that make it impossible to become an HRO (“bring a new Crew Exploration Vehicle into service as soon as possible” and “return to the moon during the next decade”). Starting from false premises, the CAIB report thus ends with false promises. The idea that safety is a function of single-minded attention may hold true for HROs, but it falls flat in organizations that can never become HROs.

In this article, we have argued that reliability-seeking organizations that simply cannot become HROs require and deserve their own metric for assessing their safety performance. We have identified a preliminary set of assessment dimensions. These dimensions go beyond those narrow technical factors utilized in probabilistic risk assessments and other risk-assessment methodologies, but they are not beyond assessing through intensive organizational observations and interviews, as well as survey research. In fact, the willingness of NASA to accord periodic access to independent reliability researchers would itself be a test of its commitment to error detection. This could be done under the auspices of an organization such as the National Academy of Engineering or the American Society for Public Administration with funding from the National Science Foundation or NASA itself.15

Such an assessment procedure should certainly not be adversarial. It should be a form of cooperative research. It should be ongoing and not a post hoc review undertaken only on the heels of a major incident or failure. Further, and perhaps most importantly, it should not be grounded in unrealistic standards imported inappropriately from the peculiar world of HROs. This in itself would constitute an insuperable contradiction for any reliability assessment—it would be grounded at its outset in analytic error.
In the final analysis, reliability is a matter of organizational norms that help individual employees at all
levels in the organization to make the right decision.
The presence of such norms is often tacitly viewed as
an erosion of executive authority, which undermines
the responsiveness of public organizations to pressures
from Congress and media. It is a leadership task to
nurture and protect those norms while serving legitimate stakeholders (Selznick 1957).
But such leadership, in turn, requires that the organization and its mission be institutionalized in the political
setting in which it must operate. A grant of trust must
be extended to leaders and managers of these organizations regarding their professional norms and judgment.
If the organization sits in a precarious or condemnable
position in relation to its political environment, then it
“can’t win for losing” because of the trade-offs that go
unreconciled in its operation. Participants will fail to
establish any lasting norms because of the fear of hostile external reactions to the neglect of either speed or
safety in key decisions. Ultimately, then, the pursuit of
reliability in NASA depends in no small measure on
the public’s organizational assessment of it and the
foundation on which it is accorded political support.
Acknowledgments
The authors thank Todd LaPorte, Allan McConnell,
Paul ‘t Hart and the three anonymous PAR reviewers
for their perceptive comments on earlier versions of
this paper.
Notes
1. The board also made use of normal accident theory, which some academics view in contrast to HRT. The board clearly derived most of its insights and critiques from its reading of HRT, however. If it had adhered to normal accident theory, we can conjecture that the CAIB would have been more sympathetic to NASA's plight (as it probably would have considered the shuttle disaster a "normal accident").
2. NASA comprises 10 separate centers that serve the different formal missions of the agency. In this article, we are exclusively concerned with NASA's human spaceflight program and the centers that serve this program. Here we follow the CAIB report (2003).
3. See Starbuck and Farjoun (2005) for a discussion of the findings of this report.
4. This is an important step in the analysis of organizational disasters, which sits well with the conventional wisdom found in theoretical treatises on the subject (Perrow 1999; Smith and Elliott 2006; Turner 1978).
5. The CAIB presents no firm evidence to back up this claim. See McDonald (2005) for a resolute dismissal of this claim. The accusation that NASA would press ahead with a launch because of "schedule pressure" is rather audacious. NASA has a long history of safety-related launch delays; the schedule pressure in the Columbia case was a direct result of earlier delays. In fact, the CAIB (2003, 197) acknowledged that NASA stood down from launch on other occasions when it did suspect problems were manifest. To NASA people, the idea that a crew would be sent up in the face of known deficiencies is outrageous. As one engineer pointed out, "We know the astronauts" (Vaughan 1996).
6. The CAIB takes its reference to a "perfect place" from Gary Brewer's (1989) essay on NASA. It should be noted that Brewer is speaking about external perceptions of NASA and readily admits in his essay, "I know precious little about NASA or space policy … the little I know about NASA and space means that I can speak my mind without particular preconceptions" (157). The CAIB, however, cites from Brewer's essay as if he has just completed a thorough study into the organizational culture of this "perfect place."
7. In fact, the closer observations were to the major hazard points, the more similar these practices became.
8. See the special issue of the Journal of Contingencies and Crisis Management (1994) for a heated discussion. See also Sagan (1993) and Rijpma (1997).
9. It should be noted that the number of cases is gradually growing, but there is very little effort to systematically compare cases. One notable exception is Rochlin and Von Meier (1994).
10. This is not to say that errors do not occur within HROs. They do, and HROs take them extremely seriously. But HROs cannot adopt a trial-and-error strategy because the political, economic, and institutional costs of key errors are unlikely to be offset by the benefits of learning (but see Wildavsky 1988).
11. Several experts no doubt played an influential role in explicating the HRO model to the CAIB members. Professors Karlene Roberts, Diane Vaughan, and Karl Weick are recognized experts on the workings of HROs and consulted with the CAIB. See Vaughan (2006) for a behind-the-scenes account of the CAIB deliberations. Their involvement, of course, does not make them responsible for CAIB's diagnosis.
12. Even if NASA cannot operate fully as an HRO, as a reliability-seeking organization, it cannot ignore HRO lessons in error detection. If it is forced to pursue values such as speed, efficiency, or cost reductions at increased risk, it is important to understand as clearly as possible, at the point of decision, the character of that risk.
13. This issue is raised in Roberta Wohlstetter's (1962) classic analysis of intelligence "failures" associated with the Pearl Harbor attack.
14. Nuclear Regulatory Commission, Code of Federal Regulations, Title 10, part 50.
15. See Perin (2005) for a complementary approach.
References
Bourrier, Mathilde, ed. 2001. Organiser la fiabilité. Paris: L'Harmattan.
Brewer, Gary D. 1989. Perfect Places: NASA as an Idealized Institution. In Space Policy Reconsidered, edited by Radford Byerly, Jr., 157–73. Boulder, CO: Westview Press.
Clark, Burton R. 1956. Organizational Adaptation and Precarious Values: A Case Study. American Sociological Review 21(3): 327–36.
Clarke, Lee. 2006. Worst Cases: Terror and Catastrophe in the Popular Imagination. Chicago: University of Chicago Press.
Columbia Accident Investigation Board (CAIB). 2003. Columbia Accident Investigation Report. Burlington, Ontario: Apogee Books.
Dunbar, Roger, and Raghu Garud. 2005. Data Indeterminacy: One NASA, Two Modes. In Organization at the Limit: Lessons from the Columbia Accident, edited by William H. Starbuck and Moshe Farjoun, 202–19. Malden, MA: Blackwell.
Hargrove, Erwin C., and John C. Glidewell, eds. 1990. Impossible Jobs in Public Management. Lawrence: University Press of Kansas.
Hood, Christopher C. 1976. The Limits of Administration. New York: Wiley.
Hutter, Bridget. 2005. "Ways of Seeing": Understandings of Risk in Organisational Settings. In Organizational Encounters with Risk, edited by Bridget Hutter and Michael Power, 67–91. Cambridge: Cambridge University Press.
Johnson, Stephen B. 2002. The Secret of Apollo: Systems Management in American and European Space Programs. Baltimore: Johns Hopkins University Press.
Landau, Martin. 1969. Redundancy, Rationality, and the Problem of Duplication and Overlap. Public Administration Review 29(4): 346–58.
Landau, Martin, and Donald Chisholm. 1995. The Arrogance of Optimism. Journal of Contingencies and Crisis Management 3(2): 67–80.
LaPorte, Todd R. 1994. A Strawman Speaks Up. Journal of Contingencies and Crisis Management 2(4): 207–11.
———. 1996. High Reliability Organizations: Unlikely, Demanding and At Risk. Journal of Contingencies and Crisis Management 4(2): 60–71.
———. 2006. Institutional Issues for Continued Space Exploration: High-Reliability Systems Across Many Operational Generations—Requisites for Public Credibility. In Critical Issues in the History of Spaceflight, edited by Steven J. Dick and Roger D. Launius, 403–27. Washington, DC: National Aeronautics and Space Administration.
LaPorte, Todd R., and Paula M. Consolini. 1991. Working in Practice but Not in Theory: Theoretical Challenges of "High-Reliability Organizations." Journal of Public Administration Research and Theory 1(1): 19–48.
The Limits to Safety: A Symposium. 1994. Special issue, Journal of Contingencies and Crisis Management 2(4).
Logsdon, John M. 1976. The Decision to Go to the Moon: Project Apollo and the National Interest. Chicago: University of Chicago Press.
———, ed. 1999. Managing the Moon Program: Lessons Learned from Project Apollo. Monographs in Aerospace History 14. Washington, DC: National Aeronautics and Space Administration.
McCurdy, Howard E. 1993. Inside NASA: High Technology and Organizational Change in the U.S. Space Program. Baltimore: Johns Hopkins University Press.
———. 2001. Faster, Better, Cheaper: Low-Cost Innovation in the U.S. Space Program. Baltimore: Johns Hopkins University Press.
McDonald, Henry. 2005. Observations on the Columbia Accident. In Organization at the Limit: Lessons from the Columbia Disaster, edited by William H. Starbuck and Moshe Farjoun, 336–46. Malden, MA: Blackwell.
Meyer, Marshall W., and Lynne G. Zucker. 1989. Permanently Failing Organizations. Newbury Park, CA: Sage Publications.
Michael, Donald N. 1973. On Learning to Plan—And Planning to Learn. San Francisco: Jossey-Bass.
Murray, Charles, and Catherine Bly Cox. 1989. Apollo: The Race to the Moon. New York: Simon & Schuster.
Perin, Constance. 2005. Shouldering Risks: The Culture of Control in the Nuclear Power Industry. Princeton, NJ: Princeton University Press.
Perrow, Charles. 1986. Complex Organizations: A Critical Essay. New York: McGraw-Hill.
———. 1994. The Limits of Safety: The Enhancement of a Theory of Accidents. Journal of Contingencies and Crisis Management 2(4): 212–20.
———. 1999. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press.
Petroski, Henry. 1992. To Engineer Is Human: The Role of Failure in Successful Design. New York: Vintage Books.
Pierson, Paul. 2004. Politics in Time: History, Institutions, and Social Analysis. Princeton, NJ: Princeton University Press.
Presidential Commission on the Space Shuttle Challenger Accident (Rogers Commission). 1986. Report to the President by the Presidential Commission on the Space Shuttle Challenger Accident. Washington, DC: Government Printing Office.
Reason, James. 1997. Managing the Risks of Organizational Accidents. Aldershot: Ashgate.
Rijpma, Jos A. 1997. Complexity, Tight-Coupling and Reliability: Connecting Normal Accidents Theory and High Reliability Theory. Journal of Contingencies and Crisis Management 5(1): 15–23.
Roberts, Karlene H., ed. 1993. New Challenges to Understanding Organizations. New York: Macmillan.
Roberts, Karlene H., Peter Madsen, Vinit Desai, and Daved Van Stralen. Forthcoming. A High Reliability Health Care Organization Requires Constant Attention to Organizational Processes. Quality and Safety in Health Care.
Rochlin, Gene I. 1996. Reliable Organizations:
Present Research and Future Directions. Journal of
Contingencies and Crisis Management 4(2): 55–59.
Rochlin, Gene I., and Alexandra von Meier. 1994.
Nuclear Power Operations: A Cross-Cultural
Perspective. Annual Review of Energy and the
Environment 19: 133–87.
Sagan, Scott D. 1993. The Limits of Safety:
Organizations, Accidents, and Nuclear Weapons.
Princeton, NJ: Princeton University Press.
Schulman, Paul R. 1980. Large-Scale Policy-Making.
New York: Elsevier.
———. 1993. The Negotiated Order of
Organizational Reliability. Administration & Society
25(3): 353–72.
———. 2005. The General Attributes of Safe
Organizations. Quality and Safety in Health Care
13(2): 39–44.
Selznick, Philip. 1957. Leadership in Administration: A
Sociological Interpretation. Berkeley: University of
California Press.
Simon, Herbert A. 1997. Administrative Behavior: A
Study of Decision-Making Processes in Administrative
Organizations. 4th ed. New York: Free Press.
Smith, Denis, and Dominic Elliott, eds. 2006.
Key Readings in Crisis Management: Systems and
Structures for Prevention and Recovery. London:
Routledge.
Starbuck, William H., and Moshe Farjoun, eds.
2005. Organization at the Limit: Lessons from the
Columbia Accident. Malden, MA: Blackwell.
Starbuck, William H., and Frances J. Milliken. 1988.
Challenger: Fine-Tuning the Odds Until
Something Breaks. Journal of Management Studies
25(4): 319–40.
Swanson, Glen E., ed. 2002. Before This Decade Is
Out… Personal Reflections on the Apollo Program.
Gainesville: University Press of Florida.
Tamuz, Michal. 2001. Learning Disabilities for
Regulators: The Perils of Organizational Learning
in the Air Transportation Industry. Administration
& Society 33(3): 276–302.
Turner, Barry A. 1978. Man-Made Disasters. London:
Wykeham.
Vaughan, Diane. 1996. The Challenger Launch
Decision: Risky Technology, Culture and Deviance at
NASA. Chicago: University of Chicago Press.
———. 2006. NASA Revisited: Ethnography, Theory
and Public Sociology. American Journal of Sociology
112(2): 353–93.
Weick, Karl E., and Kathleen M. Sutcliffe. 2001.
Managing the Unexpected: Assuring High
Performance in an Age of Complexity. San Francisco:
Jossey-Bass.
Wildavsky, Aaron. 1988. Searching for Safety. New
Brunswick, NJ: Transaction Books.
Wilson, James Q. 1989. Bureaucracy: What
Government Agencies Do and Why They Do It. New
York: Basic Books.
Wohlstetter, Roberta. 1962. Pearl Harbor: Warning and
Decision. Stanford, CA: Stanford University Press.
Wolfe, Tom. 2005. The Right Stuff. New York: Black
Dog/Leventhal.