
PEER-REVIEWED
Strategies for Addressing the
Problems of Subjectivity and
Uncertainty in Quality Risk
Management Exercises
Part I—The Role of
Human Heuristics
Kevin O’Donnell
Note: The views expressed in this paper are those of the
author and should not be taken to represent the views of
the Irish Medicines Board.
ABSTRACT
Problems of subjectivity and uncertainty can arise during
the execution of risk management and quality risk management exercises, but many existing risk management
tools do not provide formal strategies for addressing such
problems. The influences of what are known as human
heuristics during quality risk management-related activities (such as brainstorming and probability of occurrence
estimation) can add to those problems. Heuristics are
cognitive behaviours that can influence how individuals
make judgments in the face of uncertainty, and they can
be a source of significant bias and errors in judgment. The
potential adverse effects of such heuristics when identifying potential negative events and their probabilities
of occurrence should be counteracted so that the best
judgments may be made in relation to these.
This paper discusses some of the most important
human heuristics and how the good manufacturing
practice (GMP) environment might benefit from the peer-reviewed research that has been performed in various
fields on those heuristics. In this way, design elements
can be incorporated into quality risk management tools
that may help counteract the adverse effects of human
heuristics. This has the potential to reduce the extent
of guesswork in some current quality risk management
activities. Several simple, practical strategies are presented
that are designed to improve the outcomes of quality
risk management exercises with respect to problems of
subjectivity and uncertainty. Other cognitive approaches
to maximize good judgments are also described. These
should be useful to session facilitators to help yield
more accurate analyses of risk situations.
ABOUT THE AUTHOR
Kevin O’Donnell, Ph.D., is a senior GMP inspector and market compliance manager at the Irish
Medicines Board (IMB) in Dublin, Ireland. He can be reached by e-mail at [email protected].
Journal of Validation Technology [Summer 2010]
INTRODUCTION
Issues relating to subjectivity and uncertainty are
known to arise during the execution of risk management and quality risk management exercises, and
their existence is well documented in the scientific
literature (1-8). There is widespread agreement that
one of the core principles underpinning effective risk
management is the principle that risk management
explicitly addresses uncertainty—that it explicitly
takes account of uncertainty, the nature of that uncertainty, and how it can be addressed (9).
Notwithstanding this, in the good manufacturing
practice (GMP) environment, many existing risk management tools do not formally deal with the problems
of subjectivity and uncertainty that can arise during
quality risk management exercises. In fact, most of
the currently available tools do not provide any formal
strategies for addressing such problems.
This paper is the first of two papers that further
develop some of the points made in an article published in the Journal of Validation Technology in February
2007 (10). The 2007 paper, titled “Simple Strategies for Improving Qualitative Quality Risk Management Exercises during Qualification, Validation, and
Change Control Activities,” addresses how problems
of subjectivity and uncertainty associated with the
outputs of quality risk management exercises may be
addressed via the use of more rigorous approaches to
the assessment of potential failure modes and their
related GMP controls.
This set of papers (Parts I and II) focuses on the
potential influences of what are known as human
heuristics, and issues relating to risk communication and perception. These may give rise to problems
of subjectivity and uncertainty during quality risk
management work.
This first paper presents a discussion on what is
known about human heuristics and how the GMP
environment might benefit from the peer-reviewed
research that has been performed in various fields
on human heuristics. This will illustrate why it can
be beneficial to develop controls and design features
for quality risk management tools that may help
counteract the adverse effects that heuristics may
exert, particularly during brainstorming activities.
Doing so has the potential to reduce the extent of
subjectivity and uncertainty that currently affect
quality risk management activities. In this regard,
several simple, practical strategies are presented that
are designed to improve the outcomes of quality risk
management exercises with respect to problems of
subjectivity and uncertainty.
BRAINSTORMING AND THE INFLUENCE
OF HUMAN HEURISTICS
Brainstorming is often used when identifying potential failure modes, their probabilities of occurrence,
and their causes; however, there is often no documented means or clear guidance in place for performing such activities. During research work carried
out by the author on the development of a quality
risk management methodology to serve as an aid to
qualification, validation, and change control activities
within GMP environments, brainstorming was found
to have been particularly prone to problems of subjectivity and uncertainty. Strategies were developed
to reduce such problems (10).
It is important that any factors that can introduce
bias, error, or uncertainty during brainstorming activities be counteracted. The author has found, during
regulatory GMP inspections, that brainstorming is
often not formally or adequately proceduralized in current quality risk management methodologies. Formal
training on brainstorming techniques is sometimes not
provided to users of quality risk management methodologies in the GMP environment. There is also generally little guidance provided in the current pharmaceutical literature or elsewhere on how to actually perform
and manage brainstorming sessions (10). As a result,
brainstorming sessions can often be poorly structured,
not science-based, and inconsistent in approach.
Peer-reviewed research into cognitive and behavioural
processes when people are performing activities such as
brainstorming (or when they are providing opinions on
issues such as probability estimates and associated risks)
offers many useful insights into these areas. There are
several useful learnings from that research that can be
adopted into quality risk management methodologies
and approaches in the GMP environment. These may
serve to make the outputs of quality risk management
exercises less subjective and uncertain in nature.
Researchers such as Slovic (11), Kahneman (8), and
Tversky (12) have shown that probability judgments
made during expert elicitation and brainstorming activities are susceptible to problems of uncertainty. This is a
result of what are called heuristic-based behaviors.
Heuristics are akin to cognitive rules of thumb; they
can influence how individuals make judgments in the
face of uncertainty. Several different areas of research
offer useful insights into how human heuristics work
and how they influence decision-making. These include
group and individual behavioural psychology (e.g.,
human psychology [3, 13, 14], cognitive psychology [12,
15], experimental psychology [11, 16], risk and policy
analysis [7, 15], human reliability analysis [17, 18, 19],
weather and other types of forecasting [20], and group
behaviour and meeting management [21]).
One of the most significant sources of uncertainty
and subjectivity in quality risk management activities
is the probability of occurrence factor that is often
used when estimating risks. Many definitions of risk
include a probability factor for hazards, but the probability of occurrence of an event is an item that has
attracted much debate in the literature over the years,
and its exact meaning has been a significant source of
disagreement even among mathematicians (7).
As explained by Kaplan and Garrick, “people have
been arguing about the meaning of probability for at
least 200 years, since the time of Laplace and Bayes”
(22). Two major schools of thought have developed
in this area; the so-called “frequentist” (or classical)
school, and the “subjectivist” (or Bayesian) school (22).
As discussed by Morgan, in the widely accepted subjectivist view of probability, the probability of an event is
the degree of belief that a person has that it will occur,
given all of the relevant information currently known
to that person (7). Thus, probability is not only a function of the event itself, it is also dependent upon the
state of information known to the person (or group)
assigning the probability value.
The frequentists, on the other hand, define the probability of an event’s occurrence as the frequency with
which it has been found to occur in a long sequence
of similar trials. Here, the probability is the value to
which the long-run frequency converges as the number
of trials increases (7).
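A short simulation can illustrate the frequentist definition just given: the observed frequency of an event converges to its underlying probability as the number of similar trials grows. This is a minimal sketch; the function name and the illustrative event probability of 0.05 are assumptions of the example, not values from the text.

```python
import random

def long_run_frequency(p_true, n_trials, seed=0):
    """Run n_trials independent yes/no trials in which the event has
    probability p_true, and return the observed frequency of the event."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    hits = sum(1 for _ in range(n_trials) if rng.random() < p_true)
    return hits / n_trials

# The observed frequency approaches the underlying probability
# as the number of similar trials increases.
for n in (10, 1_000, 100_000):
    print(n, long_run_frequency(0.05, n))
```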
Morgan explains how this frequentist view of
probability is problematic, in that “for most events
of interest for real-world decision making, it is not
clear what the relevant population of trials of similar
events should be” (7). He discusses how experimental
psychology research has found that in most cases,
experts and laypersons “do not carry fully formed
probability values and distributions around in their
heads.” Rather, “they must synthesise or construct
them” when an analyst asks for them (7).
Therefore, brainstorming activities that are well
designed and science-based present opportunities for
reducing the uncertainty that can arise during this
“synthesis” stage, when experts and other persons are
requested to provide an informed opinion on the probability of an uncertain event occurring.
WHAT ARE HUMAN HEURISTICS?
Heuristics are cognitive behaviours. They come into play
when individuals make judgments in the presence of
uncertainty. How these behaviours are manifested is still
the subject of much research, but there is much evidence
in the literature that heuristics are a source of significant
bias and errors in judgment (8, 11, 12, 15, 17).
During quality risk management activities, when
identifying failure modes or potential negative events
and their probabilities of occurrence during brainstorming sessions, it is important to design controls
and features into brainstorming activities that serve
to reduce the potential adverse effects that human
heuristics may have when judgments are being made or
when opinions are being offered. This is because there
is usually some level of uncertainty associated with
judgments and opinions related to probability and risk.
Kahneman and Tversky (8, 12, 13) and Slovic (11, 13)
as well as other researchers have shown that heuristics
can sometimes lead to biased outcomes and errors.
Three of the main heuristics are discussed below.
The Heuristic of Anchoring and Adjustment
This heuristic affects how people make decisions, not
only when estimating the probability of an event occurring but also when forming personal opinions about a
diverse range of activities, such as the risks presented by
nuclear and other forms of electricity generation. When
this heuristic is in operation, people’s judgment can be
heavily influenced by the first approximation of the value
or quantity that they think of or hear, or even by the view
of a group to which the person is affiliated, such as a
political party, as demonstrated by research performed
in 2008 by Costa-Font et al., which resulted in the term
“Political Anchoring” (23). Experimental psychology
research has shown that the first approximation of the
value or quantity that a person thinks of or hears
can become a natural starting point for that person’s
thought process. This first approximation is termed an
“anchor” in the person’s thought process, and this value
is known to influence any subsequent adjusted values
for the quantity in question that are estimated.
Research by Kahneman and Tversky has demonstrated that the value of this anchor is critical (8, 12).
When adjustments of the initial value are made in an
effort to arrive at a more accurate answer (e.g., with the
availability of new or more information on the item
under study), these adjusted values are usually biased
towards the value of the anchor.
From the author’s very limited experience in this
area, it seems difficult to reduce the uncertainty that is
associated with probability decisions as a result of the
heuristic of anchoring and adjustment. This may be
because one’s thought processes, which might be the
principal means by which the effects of this heuristic are realised, may not be easily controlled, and simply thinking of an initial probability value may play an important part in the operation of this heuristic.
However, it is possible that some of the uncertainty
that may be associated with probability estimation and
other decision-making during brainstorming sessions
as a result of this heuristic may be overcome.
The Heuristic of Availability
The heuristic of availability affects how people estimate
the probability of an event occurring. As Morgan explains,
a person’s probability judgment is often determined by
“the ease with which [people] can think of previous occurrences of the event,” or the ease with which they can
imagine the event occurring (7). Research has shown
that people find it easier to recall or imagine dramatic,
uncommon events (such as deaths from botulism) over
more mundane, common events (such as deaths from
stroke). This can cause people to overestimate the frequency of an event where recall or imagination is enhanced, and to underestimate the frequency of an event where recall or imagination is difficult. In contrast, people
tend to make reasonable estimates of event frequencies
when their experience and memory of observed events
corresponds fairly well with actual frequencies (7).
The Heuristic of Representativeness
The heuristic of representativeness also affects how
people estimate the probability of an event occurring.
As Morgan explains, a person’s probability judgment is
often influenced by one “expecting in the small behaviour that which one knows exists in the large” (7). Thus,
when tossing a coin six times, people tend to rate as
more likely the sequence HTHTTH than either of the
sequences HTHTHT or HHHTTT, even though all three
sequences are equally likely. This is because, from one’s
larger experience, people know that the process of coin
tossing is random, and the sequence HTHTTH looks
more random than the other two. This phenomenon reflects what Kahneman and Tversky call “the belief in the law of small numbers” (8, 12).
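The claim that all three sequences are equally likely can be checked by brute-force enumeration of every possible outcome of six fair coin tosses; a minimal sketch:

```python
from itertools import product

# Enumerate all 2**6 = 64 equally likely outcomes of six fair coin tosses.
outcomes = [''.join(seq) for seq in product('HT', repeat=6)]

# Each specific sequence occurs exactly once among the 64 outcomes,
# so each has probability 1/64 -- whether or not it "looks" random.
for seq in ('HTHTTH', 'HTHTHT', 'HHHTTT'):
    print(seq, outcomes.count(seq), '/', len(outcomes))
```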
This heuristic affects how people estimate the probability of an event occurring in another way too. When
this heuristic is in operation, people can pay too much
attention to the specific details, while ignoring or paying insufficient attention to important background or
contextual information that is relevant to the problem at
hand. Research has shown that people tend to ignore or
forget important probability-related information when
they have been given other specific information that is
worthless to the question at hand (7).
This heuristic may manifest itself in other ways and
not just in relation to probability decisions. It can, for
example, lead one “to expect in the small behaviour that
which one knows [or believes] exists in the large” (to
use a phrase coined by Morgan and Henrion [7]), and
this is explained in the following example.
Consider a failure in a packaging process that results
in some packs of a selective serotonin re-uptake inhibitor (SSRI) anti-depressant medicinal product being
released without a patient information leaflet.
When a quality risk management team is assessing
the risk presented by such a failure, the fact that some
medicinal products may be dispensed without a patient
information leaflet being provided to the patient by the
pharmacist or physician should not be taken as justification for assessing the risk as being low or insignificant
during the quality risk management exercise at hand.
This is because, with products such as the SSRIs,
which are indicated to treat moderate to severe depression, it is absolutely imperative that each patient taking
the product (or their parent/guardian) has up-to-date
information on the potential side effects of their medicine (such as suicidal thoughts in the case of SSRIs) and
on the other risks associated with the product.
Therefore, the quality risk management team should
be careful not to be adversely influenced by the effect
of this heuristic and “expect in the small behaviour”
(which in this case is the release of packs of a specific
anti-depressant medicinal product that are missing their
patient information leaflets) “that which they know (or
believe) exists in the large” (namely the fact that medicinal products in general may be supplied to patients with
no patient information leaflets).
STRATEGIES FOR USE DURING
BRAINSTORMING
There are several easy and simple things we can do to
counteract the adverse effects of human heuristics. The
team leaders or the facilitators of quality risk management
exercises have an important role to play in this area.
Strategy One—Educating the Team on the
Main Heuristics
At the beginning of the brainstorming session, the
team leader should briefly explain to the team the
ways in which cognitive heuristics are thought to affect
human judgment and decision-making. Researchers
such as Morgan and Henrion have found this approach
to be useful, and they promote explaining to those
participating in such sessions what is known about the
psychology of judgments made in the face of uncertainty (7). The text above in relation to the heuristics
of availability, representativeness, and anchoring and
adjustment may be helpful in this regard.
Strategy Two—Counteracting the Heuristic
of Anchoring and Adjustment
With respect to the heuristic of anchoring and adjustment, before any ratings or values for the probability,
severity, or detectability of a potential negative event
are discussed during the brainstorming session, the
team leader should instruct the team that no initial probability, severity, or detectability opinions are to be verbalized by anyone on the team until each member of the team has a) had an opportunity to consider the facts for himself or herself, b) formed his or her own initial opinion or judgment on the issue at hand, and c) written that opinion or judgment down. A
round table discussion of the opinions or judgments
can then occur.
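The write-first, discuss-second procedure can be sketched as follows. This is an illustrative outline only: the function name and the 1-5 rating scale are invented for the example, and the use of the median as a neutral opening value for the round-table discussion is an assumption of the sketch, not a step prescribed in the text.

```python
from statistics import median

def open_discussion_value(written_estimates):
    """Aggregate ratings that team members wrote down privately, before
    any value has been spoken aloud and could act as an anchor.
    Returns the median as one neutral starting point for discussion."""
    if not written_estimates:
        raise ValueError("no written estimates collected")
    return median(written_estimates)

# Hypothetical 1-5 probability-of-occurrence ratings, each written down
# independently before the round-table discussion begins.
ratings = [2, 3, 3, 4, 2]
print(open_discussion_value(ratings))  # prints 3
```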
While this strategy will not likely overcome anchoring
effects as a result of the initial value or opinion thought
of or formulated by the individual in his/her own mind,
it may help to reduce the effects caused by anchoring and
adjustment because each team member has a chance to
form his or her own opinion or judgment before hearing
the views of other team participants.
As noted, it is difficult to counteract the full adverse
impacts of the heuristic of anchoring and adjustment
when such judgments and decisions are being made
because one’s own thought processes, which are the
principal means by which the effects of this heuristic
are realized, cannot easily be controlled. Simply thinking of an answer to the question at hand may play an
important part in the operation of this heuristic.
Strategy Three—Counteracting the
Heuristic of Availability
With respect to the heuristic of availability, in order
to reduce the uncertainty associated with probability
decisions that are made during brainstorming sessions, the team leader should determine if there is
anyone on the team who has had direct experience
of the potential negative event or failure mode under
discussion (or of its causative factors). If that person
is likely to have learned of the event whenever that
event occurred in the past, and if he/she is also able
to recall actual real examples of such events, then
that person’s opinion on the probability of occurrence should be considered to be more reliable than
that of others on the team. That person’s opinion
should be used when assigning a rating to the probability of that event, unless there is a substantial
reason not to do so.
Take the simple failure mode mentioned previously,
in relation to finished SSRI packs missing their leaflets
following a packaging operation.
An example of such a person as described above
might be a long-standing supervisor on a carton packaging line who would have had direct experience of
dealing with patient information leaflet (PIL) handling
problems on the line. If this person is likely to be able
to recall the events when packs were packaged without
PILs on the packaging line, then this person is likely
to be a suitable person to judge the probability of such
packaging problems for that line or for similar equipment. This is based on the research performed on this
heuristic as described by Morgan and others (6-8).
If there is no one on the team who fits the above
description, it may be possible to seek out someone else
within the company who may fit this description, so that
that person’s opinion of the probability can be sought.
If no one can be identified who fits this description,
the procedure for brainstorming should require the team
leader to document that the probability that is assigned
is an estimate without reliable direct experience.
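The documentation step above could be captured in a simple record structure; this is a hypothetical sketch only, with the class and field names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ProbabilityEstimate:
    failure_mode: str
    rating: int              # e.g., a 1-5 probability-of-occurrence rating
    direct_experience: bool  # True if based on recall of real occurrences
    note: str = ""

# With no one available who has direct experience of the failure mode,
# the brainstorming record flags the rating accordingly.
estimate = ProbabilityEstimate(
    failure_mode="packs released without a patient information leaflet",
    rating=2,
    direct_experience=False,
    note="estimate without reliable direct experience",
)
print(estimate)
```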
Strategy Four—Counteracting the Heuristic
of Representativeness
With respect to the heuristic of representativeness,
in order to reduce the uncertainty associated with
probability decisions that are made during brainstorming sessions, the team leader should ensure
that the team focuses its attention on the item under
study, and that it is not too heavily influenced by
the expected behaviour of the larger class of objects
that may contain the item under study, unless there
is good reason to do so.
To demonstrate this by way of an example, consider the problem of particulates that is sometimes
observed with injectable medicinal products.
One source of particulates may be the coring of the
rubber stopper closures on vials, when a lyophilised
injectable powder product is reconstituted with a diluent that is added to the vial via a transfer needle.
One’s wider experience may suggest that coring problems of this nature are prevalent with all such products,
and that such particulates are to be expected.
However, it is important to focus on the exact
product of concern, not just on the broad category
of product. The actual stopper and needle components used in the specific product, the reconstitution instructions stated in the product literature, and
the presence of a filter needle in the pack may be
important factors to consider when estimating and
evaluating the risk posed by stopper coring problems
with such a product.
Thus, again, one should not expect in the small behaviour that which we know (or believe) exists in the large.
Strategy Five—Counteracting the Heuristic
of Representativeness
Again with respect to the heuristic of representativeness, in order to reduce the uncertainty associated
with probability decisions and risk estimates that are
made during brainstorming sessions, the team leader
should ensure that the team focus its attention on both
the relevant information at hand when assigning a
probability value to the event, and on any contextual
information that may be available when evaluating
the risks presented by low-probability hazards.
Research by Kunreuther et al. in 2001 shows that
when people are asked to make risk judgments about
low probability events, in the absence of information
that sets the hazard or risk question into context, they
can find it very difficult to make informed risk judgments (3). The value of setting hazards in context
before their risk assessment is further supported by
research reported in 2009 by Satterfield et al. (5).
These strategies are intended to provide examples
of how the adverse effects of heuristics may be counteracted during brainstorming and other team-based
activities, and in decision-making in general. The
strategies are principally directed at the team leaders
or facilitators of quality risk management exercises.
It is their responsibility to manage brainstorming
sessions and to ensure that the sessions are of value
and as non-biased as possible. The strategies may
also be useful for the participants of such quality
risk management exercises, as it is important that
everyone involved in the exercise understands human
heuristics and how they may adversely influence the
outcomes of brainstorming sessions.
Ongoing Research
The three heuristics discussed herein (availability, representativeness, and anchoring and adjustment) have
been known and studied for several decades, but ongoing
research in fields such as experimental psychology continues to identify new heuristics that affect decision-making
in relation to risk. One of the more recently described
heuristics is the so-called heuristic of affect. As described
by Slovic and Peters in 2006, this heuristic relates to how
people feel about a particular risk (whether consciously or not), as opposed to what they think about it (24).
This heuristic seems to have an important influence
on how risks are perceived, and there is evidence that
when people’s feelings towards an activity are favourable, they tend to rate the risks presented by that activity
as low and its benefits as high, with the reverse happening when people’s feelings towards an activity are
unfavourable. Additional research into this heuristic is
required, however, to better understand how it manifests
itself beyond the relatively simple realm of positive and
negative feelings, as studies have shown that negative
emotions, such as fear and anger, can produce quite
different responses to the same risk (24).
OTHER COGNITIVE ELEMENTS
Many quality risk management methodologies, such
as hazard analysis and critical control points (HACCP)
and failure mode and effects analysis (FMEA), require
multi-disciplinary teams to be assembled for performing quality risk management exercises. It can be useful
if there are ground rules defined for how multi-disciplinary teams should work, especially during team-based activities such as brainstorming. Some of the learnings gained from research in the field of cognitive and experimental psychology can be incorporated into those ground rules.
For example, research performed by Mosvick and
Nelson in 1987 (21) into how team-based decision-making works found that, when opinions are put forward by a team member, it is more beneficial if they are
considered as hypotheses rather than facts, so that they
can be tested instead of argued against. This finding
can be reflected in such ground rules.
In addition, the simple rule that “the majority does
not rule” during team-based activities can be adopted;
this is useful because sometimes, a single individual
may be on the right track with respect to a particular
issue and others may be wrong. This is based on the
work of Stamatis, as documented in his comprehensive
text on FMEA titled Failure Mode and Effect Analysis:
FMEA from Theory to Execution (25).
ASSESSING THE STRENGTH OF EVIDENCE
FOR OPINIONS AND JUDGMENTS
It is considered good practice to obtain informed opinion
and expert judgment when identifying potential negative
events and failure modes and the associated probabilities during quality risk management exercises or when
opinions are being sought. As discussed by Morgan (7),
Lichtenstein et al. (13) found that “the more information
subjects have about an unknown quantity, the less likely
they are to exhibit overconfidence” in making judgments.
However, the value of using experts for obtaining reliable judgments is still far from clear. While studies on
risk perception have found that lay people judge many
risks as higher than subject matter experts, there is also
evidence in the literature that the opposite can occur.
For example, experts were found to be more risk-averse
than the lay community in certain areas of study, such
as with regard to pollution and health issues (5).
Research at Carnegie Mellon University by Mullen, as
part of her doctoral thesis (14) on the process of probabilistic estimation, demonstrated that acknowledged
experts in an area of study are still susceptible to the
same influences of cognitive heuristics, such as anchoring and adjustment, just as lay people are, though the
extent to which they may be affected may not be as
high. In fact, other researchers, such as Goldberg (26),
have shown that experts sometimes perform no better
than lay people in making judgments relating to their
area of expertise. Furthermore, a study by MacDonald et al. in 2008, which elicited opinions from 25
experts about the explosion probability of unexploded ordnance at closed military bases in the US, found a high level of disagreement among those experts on the explosion probability of that ordnance, as well
as significant differences in the amount of uncertainty
expressed by those experts when making their probability estimates (27).
The following are important factors, identified by
Faust (28) in 1985, that appear to influence the ability
of experts to make reliable judgments on uncertain
quantities in an area of study:
• The availability of a well-developed science that
provides established scientific theory for the area
under study
• The availability of precise measuring techniques
in that area of study
• The availability of pre-specified procedures and
judgment guidelines for decision-making.
Morgan summarized that problems relating to
human heuristics appear more likely to arise in fields
involving complex tasks with limited empirically validated theory (7).
In this regard, the pharmaceutical GMP environment, while of course involving complex activities, should be less affected by such problems than some other, less regulated industries.
This is because there is an increasing reliance placed
upon science and scientific technologies in pharmaceutical manufacturing and control. The industry is
procedure-driven, and there is an emphasis on validated measuring methods.
When obtaining opinions and judgments during
quality risk management exercises, it can be useful
to seek and assess the strength of evidence for each
opinion or suggestion proposed. This adds rigor to the
exercise, and may help reduce the level of subjectivity and guesswork that can arise during the negative
event/failure mode identification process. In this
regard, one might do the following:
• Seek the opinions of actual users and operators
of the process or other item under study. A process operator may know very well what can go
wrong with a process or activity, and he or she
may be in a position to advise as to its potential
frequency or probability
• Seek the opinions of those employees or others
who are knowledgeable in the process or other
item under study. For example, during equipment-related quality risk management exercises, the
vendor or equipment supplier may have valuable
knowledge about likely problems and potential
rates of failure of its components, etc.
• Where possible, take into account the concerns of
stakeholder groups when considering what might go
wrong with a process or other item under study. For
example, if a change is proposed to roll out a new
labeling and livery design for a range of medicinal
products, practicing pharmacists might well be in a
position to usefully advise about risks of dispensing
or usage errors that may be introduced by the change,
even if the new labeling is fully compliant with marketing authorization labeling requirements.
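To add rigor in practice, the evidence behind each suggestion gathered in this way can be recorded in a simple structured form. The following sketch is illustrative only; the record fields and the 1-to-5 evidence-strength scale are assumptions of this sketch, not part of any cited methodology. It shows one way to capture each suggested failure mode together with its source and the strength of evidence behind it:

```python
from dataclasses import dataclass

# Illustrative record for one suggested failure mode. The rating
# scale (1 = anecdotal opinion ... 5 = validated measurement data)
# is an assumption of this sketch, not an established standard.
@dataclass
class Suggestion:
    failure_mode: str
    source: str             # e.g., "process operator", "equipment vendor"
    evidence: str           # brief statement of the supporting evidence
    evidence_strength: int  # 1 (weak) to 5 (strong)

def rank_by_evidence(suggestions):
    """Sort suggestions so the best-supported ones are reviewed first."""
    return sorted(suggestions, key=lambda s: s.evidence_strength, reverse=True)

# Invented examples echoing the sources discussed in the text.
suggestions = [
    Suggestion("label mix-up at packaging", "process operator",
               "two near-misses logged last year", 4),
    Suggestion("seal failure on blister line", "equipment vendor",
               "component failure-rate data from supplier", 5),
    Suggestion("dispensing confusion from new livery", "practicing pharmacist",
               "opinion only; no incident data yet", 2),
]

for s in rank_by_evidence(suggestions):
    print(s.evidence_strength, s.failure_mode)
```

Recording the source and evidence alongside each suggestion makes the later assessment step traceable rather than a matter of recollection.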
Much research has been performed into how best to
elicit informed opinions and judgments from experts
and non-experts, and formal elicitation methods
have been developed to assist with this (7). One such
methodology that has been shown to work consistently
and reproducibly is the Carnegie Mellon risk ranking
method; this was designed to assist with the risk ranking of health and safety hazards. Research reported in 2004 by
Willis et al. found that this methodology also worked
well when ranking ecological hazards (29).
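The Carnegie Mellon method itself involves carefully structured hazard summaries and group deliberation. As a much simpler illustration of the final step of combining individual rankings into a group ranking, the sketch below uses a Borda count; this is a generic rank-aggregation technique chosen for illustration, not the published Carnegie Mellon procedure, and the hazard names are invented:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Combine individual hazard rankings into one group ranking.

    Each ranking is a list of hazards ordered from highest to lowest
    perceived risk; a hazard scores more points the nearer the top
    it appears, and hazards are returned sorted by total score.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, hazard in enumerate(ranking):
            scores[hazard] += n - position
    return sorted(scores, key=scores.get, reverse=True)

# Three participants rank the same four invented hazards.
rankings = [
    ["contamination", "mix-up", "seal failure", "mislabeling"],
    ["mix-up", "contamination", "mislabeling", "seal failure"],
    ["contamination", "seal failure", "mix-up", "mislabeling"],
]
print(borda_aggregate(rankings))
```

A point-scoring aggregation of this kind makes the group outcome reproducible from the individual inputs, which is one reason structured ranking methods outperform unstructured discussion.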
The findings in this general area of research are relevant to brainstorming activities during quality risk
management exercises. For example, there is evidence
that asking experts for carefully articulated justification and reasons for and against their judgments may
improve the quality of those judgments. This can be
built into the design of brainstorming sessions, but
again, the situation is still far from clear.
Research by Hoch (30) has demonstrated that
subjects’ probability judgments can be greatly affected
by being asked for reasons for and against their judgments, and that their judgments can be influenced
by the type of reason asked for first (7). Hoch’s work
found that a person’s judgment was less affected by
the type of justification questions asked of the person
when they were more experienced in the item under
study than when less experienced (30). This work suggests that, during brainstorming sessions, one should
exercise particular caution when challenging non-expert subjects by asking them for reasons and justification for their opinions.
Morgan summarised the situation quite well by stating that there is some evidence that asking for carefully
articulated justification and reasons for and against judgments may improve the quality of judgments, but more
research is clearly needed in this area (7). And when
opinions are being sought from experts and others, it
is important to encourage those involved to actually
think, to use common sense, to be flexible, and to keep
the basic and common dangers in mind (7).
CONCLUSION
This paper has discussed one of the main areas that may introduce subjectivity and uncertainty during quality risk management work, namely the potential influences of what are known as human heuristics.
Heuristics are cognitive behaviours that come into play when we make judgments in the presence of uncertainty. How these behaviours are manifested is still the subject of much research, but there is considerable evidence in the literature that heuristics are a source of significant bias and errors in judgment.
This paper has also discussed how the GMP environment can benefit from the peer-reviewed research that has been performed in various fields on human heuristics. It has demonstrated why it can be beneficial to develop awareness, controls, and design features for quality risk management methodologies in order to counteract the adverse effects that heuristics may exert, particularly during brainstorming activities. Implementing such techniques has the potential to reduce the extent of subjectivity and uncertainty that currently affect quality risk management activities.
In this regard, several simple, practical strategies have
been presented that are designed to improve the outcomes of quality risk management exercises with respect
to problems of subjectivity and uncertainty.
Additional research in this area within the GMP
environment will be useful. This is important considering that probability of occurrence estimation,
expert elicitation, and brainstorming in general are
currently key elements in most approaches to quality
risk management, and these seem to be the activities that are most susceptible to the adverse effects of
human cognitive heuristics. In the area of probability of
occurrence estimation in GMP environments, research
might usefully focus on the potential use of probability
elicitation aids (such as coloured probability wheels) as
a means to counteract the adverse influences of human
cognitive heuristics. For a useful discussion on such elicitation aids, see Morgan and Henrion (7). See also MacDonald et al. (27) for an example of a more recent and comprehensive elicitation method.
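With a probability wheel, the expert repeatedly compares the event under study against a spinner whose coloured sector is adjusted until the two feel equally likely, and the final sector size is read off as the elicited probability. The sketch below is a simplified, non-graphical illustration of the underlying bisection logic; the prefers_wheel callback interface is an assumption of this sketch, not part of any cited elicitation procedure:

```python
def elicit_probability(prefers_wheel, lo=0.0, hi=1.0, tol=0.01):
    """Bisect the wheel's sector size until the expert is indifferent.

    `prefers_wheel(p)` should return True if the expert judges a wheel
    whose coloured sector covers fraction `p` of its face to be MORE
    likely to come up than the event under study, and False otherwise.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prefers_wheel(mid):
            hi = mid   # sector is too large; shrink it
        else:
            lo = mid   # sector is too small; grow it
    return (lo + hi) / 2

# Simulated expert whose true internal estimate is 0.15.
estimate = elicit_probability(lambda p: p > 0.15)
print(round(estimate, 2))  # converges near 0.15
```

Because the expert only ever makes pairwise comparisons rather than stating a number, an aid of this kind can help sidestep the anchoring effects discussed earlier.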
Part II of this discussion focuses on the area of risk
communication and how the way in which the outcomes
of quality risk management exercises are communicated can give rise to similar problems of subjectivity and
uncertainty. The second paper will also discuss the area
of risk perception, as having an understanding of how
risks may be perceived can be important when designing
effective risk communication methods. Several practical strategies will also be presented for counteracting
the problems relating to risk perception issues that may
arise during quality risk management work.
REFERENCES
1. Tidswell, E.C., McGarvey, B., "Quantitative Risk Modelling in Aseptic Manufacture," PDA Journal of Pharmaceutical Science and Technology, Vol. 60, No. 5, pp. 267-283, Sept.-Oct. 2006.
2. Nauta, M. J., "Separation of Uncertainty and Variability in Quantitative Microbial Risk Assessment Models," International Journal of Food Microbiology, Vol. 57, pp. 8-18, 2000.
3. Kunreuther, H., Novemsky, N., Kahneman, D., "Making Low Probabilities Useful," Journal of Risk and Uncertainty, Vol. 23, No. 2, pp. 103-120, 2001.
4. Ronteltap, A., et al., "Consumer Acceptance of Technology-based Food Innovations," Appetite, Vol. 49, pp. 1-17, 2007.
5. Satterfield, T., et al., "Anticipating the Perceived Risk of Nanotechnologies," Nature Nanotechnology, Vol. 4, pp. 752-758, 2009.
6. Morgan, M. G., "Risk Analysis and Management," Scientific American, pp. 32-41, July 1993.
7. Morgan, M. G., Henrion, M., Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, Cambridge University Press, 1990.
8. Kahneman, D., Tversky, A., "Subjective Probability: A Judgment of Representativeness," Cognitive Psychology, Vol. 3, pp. 430-454, 1972.
9. International Standard ISO 31000, Risk Management–Principles and Guidelines, First Edition, 2009-11-15 (ISO reference number ISO 31000:2009(E)), published in Ireland by the National Standards Authority of Ireland (NSAI) on October 15, 2009.
10. O'Donnell, K., Greene, A., "Failure Modes—Simple Strategies for Improving Qualitative Quality Risk Management Exercises During Qualification, Validation, and Change Control Activities," Journal of Validation Technology, Vol. 13, No. 2, February 2007.
11. Fischhoff, B., Slovic, P., Lichtenstein, S., "Fault Trees: Sensitivity of Estimated Failure Probabilities to Problem Representation," Journal of Experimental Psychology: Human Perception and Performance, Vol. 4, pp. 330-344, 1978.
12. Kahneman, D., Tversky, A., "On the Psychology of Prediction," Psychological Review, Vol. 80, No. 4, pp. 237-251, 1973.
13. Lichtenstein, S., Fischhoff, B., Phillips, L. D., "Calibration of Probabilities: The State of the Art to 1980," in Kahneman, Slovic, and Tversky, eds., Judgment under Uncertainty: Heuristics and Biases, Cambridge University Press, New York, 1982.
14. Mullen, T. M., "Understanding and Supporting the Process of Probabilistic Estimation," PhD Dissertation, Carnegie Mellon University, Pittsburgh, 1986.
15. Morgan, M. G., Morris, S. C., Henrion, M., Amaral, D. A. L., Rish, W. R., "Technical Uncertainty in Quantitative Policy Analysis: A Sulphur Air Pollution Example," Risk Analysis, Vol. 4, pp. 201-216, September 1984.
16. Wallsten, T. S., et al., "Measuring the Vague Meaning of Probability Terms," Journal of Experimental Psychology: General, Vol. 115, pp. 348-365, 1986.
17. Keller, A. Z., "Perception and Quantification of Risk," ISPRA Courses, Reliability and Data, JRC ISPRA, 21020, Italy, 1984.
18. Samaras, G. M., "An Approach to Human Factors Validation," Journal of Validation Technology, May 2006.
19. Pyy, P., "Human Reliability Analysis Methods for Probabilistic Safety Assessment," Espoo 2000, Technical Research Centre of Finland, VTT Publications, December 2000, available at www.vtt.fi/inf/pdf/publications/2000/P422.pdf.
20. Brier, G. W., "Verification of Forecasts Expressed in Terms of Probability," Monthly Weather Review, Vol. 78, pp. 1-3, 1950.
21. Mosvick, R., Nelson, R., We've Got To Start Meeting Like This: A Guide to Successful Business Meeting Management, Glenview, Ill., 1987.
22. Kaplan, S., Garrick, B. J., "On the Quantitative Definition of Risk," Risk Analysis, Vol. 1, No. 1, pp. 11-27, 1981.
23. Costa-Font, J., Rudisill, C., Mossialos, E., "Attitudes as an Expression of Knowledge and 'Political Anchoring,'" Risk Analysis, Vol. 28, No. 5, pp. 1273-1287, 2008.
24. Slovic, P., Peters, E., "Risk Perception and Affect," Current Directions in Psychological Science, Vol. 15, pp. 322-325, 2006.
25. Stamatis, D. H., Failure Mode and Effect Analysis: FMEA from Theory to Execution, 2nd edition, ASQ Quality Press, 2003.
26. Goldberg, L. R., "The Effectiveness of Clinicians' Judgments: The Diagnosis of Organic Brain Damage From
the Bender-Gestalt Test," Journal of Consulting Psychology, Vol. 23, pp. 23-33, 1959.
27. MacDonald, J. A., Small, M. J., Morgan, M. G., "Explosion Probability of Unexploded Ordnance," Risk Analysis, Vol. 28, No. 4, pp. 825-841, 2008.
28. Faust, D., "Declarations Versus Investigations: The Case for the Special Reasoning Abilities and Capabilities of the Expert Witness in Psychology/Psychiatry," Journal of Psychiatry & Law, Vol. 13, No. 1-2, pp. 33-59, 1985.
29. Willis, H. H., DeKay, M. L., Morgan, M. G., Florig, H. K., Fischbeck, P. S., "Ecological Risk Ranking," Risk Analysis, Vol. 24, No. 2, pp. 363-378, 2004.
30. Hoch, S. J., "Availability and Interference in Predictive Judgment," Journal of Experimental Psychology: Learning, Memory and Cognition, Vol. 10, No. 4, 1984.
GENERAL REFERENCES
ICH, Quality Risk Management (ICH Q9), International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, November 9, 2005, available at www.ich.org.
European Commission, The Rules Governing Medicinal Products in the European Community, Volume IV, published by the European Commission, available at http://ec.europa.eu/enterprise/pharmaceuticals/eudralex/homev4.htm.
FDA, Pharmaceutical cGMPs for the 21st Century: A Risk-Based Approach, FDA Press Release, No. P02-28, FDA News, August 21, 2002, available at http://www.fda.gov/bbs/topics/NEWS/2002/NEW00829.html.
FDA, Pharmaceutical cGMPs for the 21st Century: A Risk-Based Approach, Final Report, September 2004, Department of Health and Human Services, U.S. Food and Drug Administration, available at http://www.fda.gov/cder/gmp/gmp2004/GMP_finalreport2004.htm.
Litai, D., "A Risk Comparison Methodology for the Assessment of Acceptable Risk," PhD Thesis, Massachusetts Institute of Technology, Cambridge, Mass., 1980.
O'Donnell, K., Greene, A., "A Risk Management Solution Designed to Facilitate Risk-Based Qualification, Validation, and Change Control Activities Within GMP and Pharmaceutical Regulatory Compliance Environments in the EU," Parts I & II, Journal of GXP Compliance, Vol. 10, No. 4, July 2006. JVT
ACKNOWLEDGMENTS
The author would like to thank Dr. Anne Greene of the
Dublin Institute of Technology for her support during
this work as well as colleagues at the Irish Medicines
Board for reviewing the text. Thanks also to Mitsuko
Oseto for thought-provoking discussions during the
development of this work.