ETM friction ridge - Evidence Technology Magazine

The magazine dedicated exclusively to the technology of evidence collection, processing, and preservation
Special Supplement • November 2012
THE FRICTION RIDGE COLLECTION
Brady Material and the Expert Witness
Written by Michele Triplett
Many expert witnesses feel they should not be required to know about judicial procedures. After all, expert witnesses are
not lawyers; they are simply offering
specialized information that may be
beneficial to the investigation of a
case. Nevertheless, the courts require
everyone working on behalf of the
government to understand their roles
and responsibilities to the criminal-justice system. One of the least understood requirements of expert witnesses
may have to do with the disclosure
and testimony of “Brady material”.
Disclosure Requirement
All state and federal courts have rules
about what type of information must
be revealed to the defense through a
disclosure request. Federal courts
generally follow the Federal Rules of
Criminal Procedure, Rule 16, while
state courts may choose to follow
other guidelines.
Regardless of the court, when the
defense requests disclosure material,
those working on behalf of the government must provide all relevant information (in accordance with local
requirements) and not simply the
information they want revealed to the
defense. The government’s obligation
to disclose information that may be
valuable to the defense is commonly
referred to as Brady material. This
term comes from the US Supreme
Court’s decision in Brady v. Maryland
(1963) where the government withheld information from Brady that may
have been useful in undermining the
government’s case against him. The
government’s failure to disclose this
information violated due process under
the 14th Amendment.
Since providing this information is
a requirement, withholding such information is commonly referred to as a
Brady violation, and will likely lead
to a reversal of conviction on appeal.
What is Brady Material?
Brady material is any information the
government, or those acting on behalf
of the government, has that may be
beneficial to the defense. This includes
information that may weaken the
government’s case or undermine the
testimony or credibility of the witness.
Giglio v. United States (1972) is
an extension of Brady to include
material that would impeach the character of a government witness.
Impeachment material can include information bearing on the honesty, integrity, impartiality, and credibility of an expert witness.
United States v. Henthorn (1991)
is an extension of Giglio to include
requests for personnel records of a
government witness. These records
may contain exculpatory information
about the witness.
Examples
In an effort to understand Brady
material, it may be helpful to consider
some examples of disclosure requests
and Brady violations.
In 2009, San Jose Police were
accused of withholding information
that was favorable to the defense by
failing to note when another expert
did not agree with the conclusion of a
fingerprint comparison. An expert not
identifying a print could undermine
the strength of the identification. This
failure to note disagreement was due
to a lack of knowledge regarding disclosure requirements. No cases were
overturned due to this failure, but San
Jose did change its policies to document and report on non-agreements
in the future.
In 2010, the San Diego County
District Attorney’s Office was accused
of withholding fingerprint evidence
when latent fingerprints were determined not to be clear enough to match.
In the case of Kenneth Ray Bowles,
six latent prints were identified as his,
and one latent print was declared not
clear enough to match. On appeal, San
Diego Superior Court Judge Harry
Elias found this to be a serious violation and ordered a new trial.
Over the years, several motions
have been filed claiming the AFIS
candidate list produced by a computer
search is Brady material. In several
appeals, judges have determined that
the candidate list would not have benefited the defense. It is possible, however, that in specific cases this information may be useful to the defense.
The candidate list could display additional information that was not recognized by the initial examiner.
Agencies should be aware of Brady
requirements in order to implement
policies to retain necessary information. Agencies should also implement
policies on how to handle disclosure
material. While some agencies leave
this responsibility up to the expert
witness, other agencies require that
all dissemination be done through
management or through a legal unit
so they can ensure the disclosure
requirements of Brady are properly met.
In addition, agencies should ensure
that expert witnesses are aware of
Brady/Giglio/Henthorn so they are
prepared to testify to exculpatory
material. Testimony could be in
regard to office policies, procedures,
or events that took place in a specific
case, or information about the government witness. If there is ever a question regarding Brady material, contact
your agency’s legal unit or the prosecutor’s office.

About the Author
Michele Triplett is the Latent Operations
Manager of the King County Sheriff’s
Office in Seattle, Washington. She holds
a Bachelor of Science degree in
Mathematics and Statistical Analysis
from Washington State University and
has been employed in the friction-ridge
identification discipline for more than
19 years. She can be reached at:
[email protected]
From Evidence Technology Magazine • January-February 2011
www.EvidenceMagazine.com
Current trends in latent print testimony
Written by Kasey Wertheim
During the past decade, one of the most actively changing
aspects of latent-print examination has been in the legal arena. The
Daubert hearing in the 1999 United
States v. Byron Mitchell trial sparked
a trend toward the “scientification” of
latent-print examiner testimony.
Practitioners hurried to brush up on
their ACE-V and Ridgeology training
so they could explain the scientific
methodology they used in the case.
The use of the word “identification”
became old-fashioned, and while some
examiners stuck with it, many were
quick to change to the more scientific
“individualization”. Readers of David
R. Ashbaugh’s then-fresh book,
Qualitative-Quantitative Friction
Ridge Analysis, came away with a
lexicon that would serve them well as
they portrayed their science to a jury
of laypersons. But since that time, the
legal domain has continued to evolve.
Current “scientific” latent-print
testimony has been portrayed by critics,
academics, and some legal authorities
as pushing too far into a certainty they
claim cannot exist. We are told that the
results of our examinations can never
reach 100% “scientific” or “absolute”
certainty, and any examiner claiming such should not be allowed to testify.
Some examiners will say this is just
fine, and attaining that level of scientific
certitude simply is not necessary. Their
position is that the court’s acceptance
of testimony is not based on its decision of whether or not the discipline
can reach scientific certainty but,
rather, if the technical expertise and
opinion of the examiner will assist
the trier of fact.
Just to be clear, I am not implying
that ACE-V should be removed from
testimony. Rather, I am suggesting that
current trends indicate that examiners
should reference it more as a “framework” or “process” they use instead of
referencing it as an error-free scientific methodology.
One of the most widely cited cases,
the 2004 Brandon Mayfield case, has
provided much fodder for those who
wish to emphasize “the error-prone
nature” of our discipline. We hear
about bias and other human factors,
and how they can and do affect our
decision-making thresholds. Frequently
referenced sources—such as the Office
of the Inspector General’s Review of
the FBI’s Handling of the Brandon
Mayfield Case or the National Academy
of Sciences’ report, Strengthening
Forensic Science in the United States:
A Path Forward—and numerous legal
challenges to the latent-print discipline
have provided a trend over the last few
years toward more caution being shown
by latent-print examiners on the witness
stand. That trend has caused latent-print examiners to venture back toward
the more conservative manner of testifying to an “identification,” and has
decreased the emphasis on scientific
individualization.
With a careful look inside our own
discipline, we can even recognize
some indicators of this trend. Take,
for instance, SWGFAST’s removal of
“to the exclusion of all others” from
the most recent definition of the word
individualization in the latent-print
glossary (which continues to cite
identification and individualization as
synonymous).
There are also some examples of
examiners testifying in a fashion that
does not invite challenges from the
astute defense attorney. In a Minnesota
case, we saw Josh Bergeron stating
that person X “made” latent Y—and
upon follow-up, stating that theoretically there could be two individuals
that share enough ridge formations
similar enough to each other that an
examiner might be fooled:
http://www.clpex.com/Articles/TheDetail/300-399/TheDetail382.htm
We even saw a legal decision in
Massachusetts (Commonwealth v.
Gambora) that rebuked the examiner
who used the term individualization,
but praised the examiner who used
the term made:
http://www.swgfast.org/Resources/101011_MA-v-Gambora-Judge’sOpinion-Sept-2010.pdf
So what does the future hold in
store? Continuing the trend toward
more conservative testimony is one
likely possibility. We can also expect
more talk about the use of statistics to
support our testimony and what it will
take for examiners to actually use statistical modeling on the witness stand.
I think we are still several years away
from their acceptance in court, but
indications are that we can expect a
trend toward probabilities and likelihood ratios in the future. Take for
example the International Association
for Identification’s recent repeal of
earlier resolutions prohibiting certified
examiners from testifying to probable or
likely conclusions.
For now, a wise latent-print examiner should continue to stay abreast of
current legal challenges to the discipline.
SWGFAST provides access to many
of the challenges on their “Resources”
page at www.SWGFAST.org.
Examiners should also consider
the trend away from absolute testimony
and consider how they can state their
findings in a manner that is easier to
defend and less likely to invite a challenge from the defense.

About the Author
Kasey Wertheim is an IAI Certified Latent
Print Examiner and a Distinguished
IAI member. He serves on SWGFAST as
Webmaster and hosts www.clpex.com,
the largest web resource for latent-print
examiners. He publishes an electronic
newsletter focusing on latent-print
examination, The Weekly Detail, every
Monday morning. He is Co-Chair of
the NIST Expert Working Group on
Human Factors in Latent Print
Examination. He can be reached at:
[email protected]
From Evidence Technology Magazine • March-April 2011
www.EvidenceMagazine.com
Managing Latent-Print Errors
Written by Alice Maceo
I became a manager of a latent-print unit in 2006. For those
forensic disciplines that rely on
humans as the analytical instrument,
management can be very daunting. I once
related the experience to our chemistry
supervisor in this way: “Imagine that I
tweaked the sensitivity of each of your
GC-MSs (gas chromatography-mass
spectrometers) to a different setting…
then adjusted those sensitivities randomly
throughout the day on each instrument…
and then asked you to run a complex
sample through two instruments and
come up with the same answer.” The
supervisor just shook her head.
In spite of the inherent difficulties
involved with managing a latent-print
unit, there are steps that can be taken to
identify, address, and reduce technical
error. A culture of accuracy and thoroughness is the first step in the process.
If the analysts know that the quality-assurance process is designed to ensure
the most accurate results and is not
punitive, it allows the analysts to operate
without fear of repercussion or becoming
paralyzed, unable to render conclusions.
The second step is setting up clear
verification procedures. Based on conversations with many analysts around the
country, most agencies verify identifications. Interestingly, the analysts also
indicated that the most frequent technical
error is a “false negative” (a.k.a. “erroneous exclusion”). However, many
agencies do not verify “negative”, “not
identified”, “exclusion”, or “inconclusive”
results. It is impossible to manage technical errors if not all of the conclusions
are reviewed. It is impossible to learn
from mistakes if the mistakes are not
unearthed.
The most frequently cited reason for
not reviewing all conclusions is a shortage of manpower. It has been my experience that reviewing all conclusions in
all cases takes approximately 25%
more time (compared to only verifying
identifications). The benefit of this
process is that the verifier can focus his
attention on the latent prints (not the
entirety of the case) and there is immediate feedback to the case analyst if a
technical error is noted. Another
approach is to review all conclusions on
selected cases (e.g. those that are randomly selected prior to assignment, or
those that are selected based on crime
type). And yet another approach is to
perform random case audits. The downside to random case audits is the time
delay between making the error and
discovering the error; the analyst will
likely not recall the circumstances that
were involved.
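The review policies described above (full review of randomly pre-selected cases, plus selection by crime type) can be sketched in a few lines of code. This is purely an illustration; the function name, review rate, and crime-type list are hypothetical and not drawn from any agency's actual policy.

```python
import random

REVIEW_RATE = 0.10                      # fraction of cases randomly flagged (illustrative)
PRIORITY_CRIMES = {"homicide", "sexual assault"}

def flag_for_full_review(case_id, crime_type, rng=random.random):
    """Decide at assignment time whether every conclusion in this case
    (not just identifications) will be verified. Flagging before
    assignment avoids the time-delay problem of after-the-fact audits."""
    if crime_type in PRIORITY_CRIMES:
        return True
    return rng() < REVIEW_RATE

# A priority crime type is always flagged for full review.
print(flag_for_full_review("C-1001", "homicide"))  # True
```

The key design point mirrored here is that selection happens before the analyst works the case, so the verifier's feedback arrives while the circumstances are still fresh.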
The third step to managing error is
to decide what to do when a technical
error is discovered. Are there allowances for the number or frequency of
technical errors? Are there different
responses for different kinds of technical
errors? The answers to these questions
are largely agency driven.
As a manager, I have found that a
formal corrective action has been beneficial in analyzing the factors that led to
a false identification. These factors
should not simply center on the analyst!
Supervision and organization issues
should also come to light during the
investigation. Some factors may lend
themselves well to preventive measures
(such as the supervisor limiting the
types of cases assigned to analysts under
high levels of stress) and others may not
be easily prevented (such as detectives
repeatedly asking the analyst to hurry).
I do not recommend removing analysts from casework if a rare false identification is discovered; they have
already punished themselves enough.
However, I recommend that the analyst
does not perform verifications for a
period of time (at least 30 days). After
the requisite time has passed, the analyst should successfully complete a proficiency test prior to performing verifications. Obviously, if an analyst repeatedly makes false identifications, then
the response should be escalated
because the analyst’s competency may
be compromised.
From Evidence Technology Magazine • May-June 2011
www.EvidenceMagazine.com
False negatives are not as easy to
manage because you need to track them
and look for trends. Much can be learned
from tracking the errors, including valuable feedback to the training program.
Sometimes the reason for false negatives
is relatively easy to address. For example,
if a particular analyst is routinely failing
to identify latent palm prints due to orientation problems, then dedicated practice
orienting and searching palms will likely
improve their performance.
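Tracking false negatives and looking for trends, as described above, amounts to simple tallying. The following sketch is hypothetical (the record format and category names are invented for illustration), but it shows how a manager might spot a pattern such as repeated palm-orientation misses by one analyst.

```python
from collections import Counter

def error_trends(records):
    """records: iterable of (analyst, error_type) tuples discovered
    during verification. Returns counts per analyst and per error type
    so recurring patterns can feed back into the training program."""
    by_analyst = Counter(analyst for analyst, _ in records)
    by_type = Counter(etype for _, etype in records)
    return by_analyst, by_type

log = [("A", "palm orientation"), ("A", "palm orientation"),
       ("B", "smudged latent")]
by_analyst, by_type = error_trends(log)
print(by_analyst.most_common(1))  # [('A', 2)]
```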
Other problems, like backlog pressure,
are harder to address. How do you insulate the analysts from feeling rushed
because so many cases are waiting? I
have found it helpful to keep the backlog out of sight and to throttle ten cases
at a time to the analysts. The analysts
can finish a batch of cases at a time
(with occasional interruptions for cases
that must be rushed, of course) and
clear their desks.
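The throttling idea above (keep the backlog out of sight, release ten cases at a time) is just batching. A minimal sketch, with invented names and not an actual case-management system:

```python
def batches(case_queue, size=10):
    """Release cases to analysts one batch at a time; the rest of the
    backlog stays out of sight until the current batch is cleared."""
    for i in range(0, len(case_queue), size):
        yield case_queue[i:i + size]

backlog = list(range(1, 26))          # 25 pending cases
first = next(batches(backlog))        # the analyst sees only ten at once
print(len(first))  # 10
```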
The forensic examination of evidence
is a high-stakes endeavor. Failure to
connect a criminal to a crime may
allow the criminal to continue to endanger society—while connecting the
wrong person to a crime could take
away an innocent person’s life or liberty.
As such, the analysts in the forensic
laboratory strive to be as accurate as
humanly possible. I want to stress the
word humanly. As humans, we are all
prone to error. Forensic analysts will
not be perfect. Mistakes will happen.
Focusing attention only on the analyst
is short-sighted at best. Analysts operate in a system, and that system can set
them up for failure. Instead of pointing
fingers and blaming the analyst, we
should be asking these questions:
How did the system allow the error to occur?
What can we learn from the error?
How can we improve the system to minimize the number of errors?

About the Author
Alice Maceo is the Forensic Lab
Manager of the Latent Print Detail of
the Las Vegas (Nevada) Metropolitan
Police Department. She is an IAI
Certified Latent Print Examiner and a
Distinguished Member of the IAI.
Maceo continues to serve on SWGFAST
and the NIST/NIJ Expert Working Group
on Human Factors in Latent Print
Analysis. She can be reached at:
[email protected]
Automated Fingerprint Identification Systems
(AFIS) and the Identification Process
Written by Robert J. Garrett
A while back, I was contacted by an immigration lawyer who
represented an individual facing
deportation. According to the attorney, U.S. Immigration and Customs
Enforcement (ICE) had found—based
on an automated fingerprint search—
that his client had been deported once
before under a different name. The
attorney sought my assistance in
examining the fingerprints to make
sure that they were those of his client.
I asked the attorney for the name
of the fingerprint examiner who had
made the identification so I could
make contact and arrange to see the
prints. I was surprised to learn that no
fingerprint examiner had ever confirmed the results from the automated
search—and the immigration court
was prepared to proceed with deportation based solely on the results of
the automated search.
I prepared an affidavit for the
attorney explaining how automated
fingerprint identification systems
(AFIS) work and the protocol that
followed a search hit: that is, a qualified fingerprint examiner compares
the prints selected by the computer to
make sure it is, in fact, an identification. The attorney filed his motion to
stay the deportation proceedings until
a qualified fingerprint examiner could
compare the fingerprints. The immigration court, however, rejected the
motion and insisted that the automated
results were sufficient to continue
with the process.
In another example, I was recently
consulted on two similar fingerprint
cases. In each case, a local police
department had submitted to their state
AFIS fingerprints recovered from
crime scenes. The state AFIS unit
reported back to the local police that
the submitted prints had hit on a suspect. The report listed the SBI number
of the suspect and the number of the
finger that was matched to the crime-scene prints. These reports were used
as evidence of the suspect’s complicity in the crimes under investigation
and resulted in grand-jury indictments
in both cases.
Once again, I asked the attorneys
for copies of the fingerprint examiners’ reports so I would know what to
ask for when arranging for my examination. I was advised that there were
no fingerprint examination reports and
that the indictments were made without a qualified fingerprint examiner
ever reviewing the automated search
results or testifying before the grand
jury regarding their findings.
These cases are now proceeding to
trial, and still no fingerprint examiner
from the prosecution has ever issued
a report regarding the identifications.
In my experience, it is becoming
more and more common in AFIS-hit
cases to find only a screen print of the
AFIS “match”—with no other report
from a qualified fingerprint examiner
confirming the identification. There is
usually no indication of whether the
reported AFIS hit was from candidate
#1 or candidate #20. What is astonishing is that in recent cases I encountered,
the state AFIS units stamped their
reports with the following statement:
This package contains potential
suspect identification. It is
incumbent upon the submitting
agency to provide positive ID
for prosecutorial purposes.
However, this caveat seems to be
frequently ignored by some of the
submitting agencies.
In the criminal-justice system, most
cases never go to trial. They are often
settled through negotiation between the
defense attorney and the prosecutor, or
through a guilty plea from the defendant. It would be interesting to know
how many cases were settled or pled
on the strength of an AFIS hit, without further corroborating testimony
from a qualified fingerprint examiner.
Between 2008 and 2010, the
National Institute of Standards and
Technology (NIST) sponsored the
Expert Working Group on Human
Factors in Latent Print Analysis. The
group developed a flow chart of the
latent-print examination process based
on the ACE-V method (analysis, comparison, evaluation, and verification)
used by fingerprint examiners. This
same chart has been adopted by the
Scientific Working Group on Friction
Ridge Analysis, Study and Technology
(SWGFAST) as part of their proposed
“Standards for Examining Friction
Ridge Impressions and Resulting
Conclusions”. The flow chart clearly
shows the path to follow once an AFIS
hit has been made. AFIS is shown as
part of the analysis phase of the examination method and is not part of the
comparison, evaluation, or verification
phases. AFIS hits require a full exam-
From Evidence Technology Magazine • July-August 2011
www.EvidenceMagazine.com
ination by a qualified fingerprint
examiner.
The SWGFAST Press Kit includes
the following entry:
14 Does an AFIS make latent print identifications?
14.1 No. The Automated Fingerprint Identification System (AFIS) is a computer-based search system but does not make a latent print individualization decision.
14.1.1 AFIS provides a ranked order of candidates based upon search parameters.
14.1.2 A latent print examiner makes the decision of individualization or exclusion from the candidate list.
14.1.3 The practice of relying on current AFIS technology to individualize latent prints correctly is not sound.
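The division of labor in that press-kit entry can be sketched in code: the system ranks candidates by search score, but no identification is reported without an examiner's decision. This is an illustration only; the class, fields, and function are hypothetical and do not reflect any real AFIS interface.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    subject_id: str
    score: float          # a search score only, never a conclusion

def report_hit(candidates, examiner_decision=None):
    """Report an AFIS hit as an investigative lead unless a qualified
    examiner has supplied an individualization/exclusion decision."""
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    if examiner_decision is None:
        return ("investigative lead only", ranked[0].subject_id)
    return ("examiner conclusion", examiner_decision)

hits = [Candidate("S-17", 912.0), Candidate("S-03", 640.5)]
print(report_hit(hits))  # ('investigative lead only', 'S-17')
```

The point of the sketch is the default: absent an examiner's decision, the top candidate is labeled a lead, never an identification.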
U.S. Supreme Court decisions in
Melendez-Diaz v. Massachusetts and,
more recently, Bullcoming v. New
Mexico, reiterated a defendant’s Sixth
Amendment right “to be confronted
with the witnesses against him.”
Reports of a laboratory or investigative
finding do not satisfy the requirement.
Our society and its government
have embraced technology in various
forms for its efficiency and economy.
In the areas of law enforcement and
public safety, these technological
advances have included AFIS, the
Combined DNA Index System
(CODIS), airport security screening
devices, and red light/traffic cameras.
But these advances bring with them
compromises of privacy and our right
“…to be secure in their persons,
houses, papers, and effects…”
AFIS hits must be examined by a
qualified fingerprint examiner and the
results of that examination verified
before any proceedings are commenced against a potential suspect. It
is unethical, unprofessional, and—
most likely—unconstitutional to do
otherwise.

About the Author
Bob Garrett spent more than 30 years
in law enforcement, including ten
years supervising a crime-scene unit.
He is a past president of the International Association for Identification (IAI) and currently
chairs the board that oversees the
IAI’s certification programs. Now
working as a private consultant on
forensic-science and crime-scene
related issues, he is certified as a
latent-print examiner and senior
crime-scene analyst. He is a member
of the Scientific Working Group on
Friction Ridge Analysis, Study and
Technology (SWGFAST) and a director
of the Forensic Specialties Accreditation Board.
[email protected]
The SWGFAST Press Kit mentioned in the above article is available at this web address:
http://www.swgfast.org/Resources/swgfast_press_kit_may04.html
Recommendations on how to
avoid testimony errors
Written by Michele Triplett
Forensic practitioners are commonly asked to testify in court, yet they may have limited knowledge regarding the rules of testimony. This lack of understanding could unintentionally affect the outcome of a trial. Awareness of a few simple concepts could improve your testimony and prevent a mistrial or reversal of a court decision.

Recommendation 1: Avoid testifying to the conclusions of others

Testifying to the conclusions of others is addressed in three United States Supreme Court decisions. Crawford v. Washington (2004) states that under the Sixth Amendment Confrontation Clause, “the accused shall enjoy the right…to be confronted with the witnesses against him,” with an exception allowed for business records. The exception to the Crawford rule resulted in many forensic laboratories considering their reports “business records” and therefore not participating in live testimony.

Melendez-Diaz v. Massachusetts (2009) clarifies the acceptability of forensic reports as business records, stating: “The analysts’ certificates—like police reports generated by law enforcement officials—do not qualify as business or public records…”

Bullcoming v. New Mexico (2011) clarifies the Confrontation Clause even further by stating who shall be permitted to give the testimony: “The question presented is whether the Confrontation Clause permits the prosecution to introduce a forensic laboratory report containing a testimonial certification—made for the purpose of proving a particular fact—through the in-court testimony of a scientist who did not sign the certification or perform or observe the test reported in the certification. We hold that surrogate testimony of that order does not meet the constitutional requirement.”

These three decisions clearly specify that the forensic analyst who performed the examination must provide the testimony.

An analyst testifying to the results of the reviewer or verifier is a similar type of error, since the analyst did not perform these tasks. Past cases have labeled this as inadmissible hearsay and/or falsely bolstering the primary analyst’s conclusion. If an attorney objects to this type of testimony, then the courts must decide if the error was harmful to the case and if a mistrial or reversal is warranted. Whether or not an error is harmful is specific to each case.

Recommendation 2: Prepare to provide the basis underlying a conclusion

Analysts are required to provide the basis underlying their conclusions if requested. If an analyst has never been asked for the basis underlying past conclusions, they may not be aware of the requirement to provide this information. Federal Rules of Evidence Rule 705, “Disclosure of Facts or Data Underlying Expert Opinion”, states: “The expert may testify in terms of opinion or inference and give reasons therefor without first testifying to the underlying facts or data, unless the court requires otherwise. The expert may in any event be required to disclose the underlying facts or data on cross-examination.”

In order to give more weight to a conclusion, a prosecutor may request demonstrative materials prior to cross-examination. Chart enlargements or PowerPoint presentations can be simple methods of providing the basis for comparative-evidence conclusions during testimony.

Recommendation 3: Avoid reference to past criminal history

Testifying to a defendant’s past criminal history could be prejudicial toward the guilt of a defendant. Any reference to a prior criminal history should be avoided, such as testifying that a latent print was matched to a fingerprint card on file from a previous arrest.

Recommendation 4: Disclose exculpatory information

Analysts should be aware of Brady v. Maryland (1963), Giglio v. United States (1972), and United States v. Henthorn (1991). These rulings require government witnesses to disclose exculpatory information to the defense (information that may assist in clearing a defendant). Exculpatory information may include disclosing all conclusions, not simply conclusions that implicate the defendant; disclosing information about anyone who may have disagreed with the reported conclusion; and disclosing unfavorable information about the analyst. This information is explained further in “Brady Material and the Expert Witness,” Evidence Technology Magazine, January-February 2011 (Page 10).

Recommendation 5: Avoid overstating conclusions

Federal Rules of Evidence Rule 702, “Testimony by Experts”, states: “If scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training, or education, may testify thereto in the form of an opinion or otherwise, if (1) the testimony is based upon sufficient facts or data, (2) the testimony is the product of reliable principles and methods, and (3) the witness has applied the principles and methods reliably to the facts of the case.”

Since expert testimony is commonly referred to as opinion evidence, some could incorrectly assume that conclusions may be the personal opinion of the expert. An important element of Rule 702 is that conclusions must be based on sufficient facts or data. Testifying to a conclusion that is merely the personal belief of the expert, not based on sufficient facts or data, may be overstating a conclusion and would therefore be considered an error in testimony.

Many examples can be shown where testimony errors have occurred but have not had any negative effect on the outcome of a case, falsely implying certain testimony is permissible. Minor errors may be tolerated if there is no objection to the error or if the error is considered harmless. Nevertheless, forensic practitioners should be aware of testimony rules to avoid testifying incorrectly and adversely affecting the outcome of a trial.

About the Author
Michele Triplett is Latent Operations Manager of the King County Sheriff’s Office in Seattle, Washington. She holds a Bachelor of Science degree in Mathematics and Statistical Analysis from Washington State University and has been employed in the friction-ridge identification discipline for more than 19 years. She can be reached by e-mail at: [email protected]

From Evidence Technology Magazine • September-October 2011
www.EvidenceMagazine.com
Explaining the Concept of
Sufficiency to Non-Practitioners
Written by John P. Black, CLPE, CFWE, CSCSA
Every single day, fingerprint examiners routinely and reliably determine that questioned
friction ridge impressions possess
sufficient information to identify to a
known source. These sufficiency
determinations are made based on the
quality and quantity of information
available in an impression, as well as
the ability, experience, training, and
visual acuity of the examiner.
Although there is currently no
generally accepted standard for sufficiency in the fingerprint community,
examiners trained to competency
can—and do—reach valid conclusions
that are supported by physical evidence
and will withstand scientific scrutiny.
Friction ridge examiners can typically discuss their examinations and
results among colleagues without any
difficulty. This often is not the case,
however, when the discussion involves
the concept of sufficiency. As a result,
it may also be difficult to explain this
concept to non-practitioners, such as
jurors, judges, and attorneys.
What makes this problematic is
that the non-practitioners mentioned
above—particularly jurors and
judges—often make crucial decisions
based on the information they
receive. If they don’t understand how
friction ridge examiners can reliably
determine sufficiency, then they don’t
have all the information they need to
make an informed decision. With this
in mind, several analogies are offered
below for helping non-practitioners
understand the concept of sufficiency.
Teachers routinely test students
to determine if they have a sufficient
understanding of the course material.
Teachers are able to do this based on
their training and experience in testing
numerous students over a long period
of time. They can reliably determine
whether the students have truly
grasped the material, or if they have
simply memorized and regurgitated
the information for test purposes.
Mechanics typically perform
leak checks after patching damaged
tires. Once the patch is applied and
the tire re-inflated, the mechanic will
apply a soapy solution to the patch
area and subsequently look for any
air bubbles around the patch. If no air
bubbles are observed, then the
mechanic determines the patch job is
sufficient and that the tire is safe to
put back into service. This decision is
governed by the mechanic’s training
and experience in patching many tires
over time.
Farmers must constantly monitor their crops to determine if they are
providing sufficient water, fertilizer,
and pest-control methods to ensure a
successful harvest. Again, their decisions regarding the quantities needed
by the plants are determined largely
by the experience of the farmer.
Now, the reader may be thinking
that these analogies are very simple
and seem to have nothing to do with
friction ridge examination. Hopefully,
however, it will be recognized that these
are attempts to explain the concept of
sufficiency, as well as to show that
sufficiency exists in other professions.
More important, these analogies show
that sufficiency determinations made
in other professions are typically based
on a person’s training and experience.
Why would it be any different for
friction ridge examiners?
It doesn’t matter if an examiner is
determining sufficiency for an initial
value assessment or if the sufficiency
determination is for the purpose of
making an identification. What does
matter is that the examiner draws on
his/her experience with numerous
impressions, over time, to assess the
quality and quantity of available
information in making these sufficiency determinations.
Besides, it would not be surprising
if a teacher, mechanic, or farmer is in
the jury box during your next trial.
They will likely have sufficient
understanding of the analogies!

About the Author
John Black is a Senior Consultant and
the Facility Manager for Ron Smith &
Associates, Inc. in Largo, Florida. He
can be reached at:
[email protected]
From Evidence Technology Magazine • November-December 2011
www.EvidenceMagazine.com
Human Factors
in Latent-Print Examination
Written by Kasey Wertheim and Melissa Taylor
FINGERPRINT EXPERTS never make mistakes, right? A
better question might be, “Are
fingerprint examiners human?” The
answer to that question of course is,
“Yes.” The reality of all human
endeavors is that errors happen, even
among the best professionals and
organizations.
The field of human-factors research
focuses on improving operational
performance by seeking to understand
the conditions and circumstances that
prevent optimal performance, and to
then identify strategies that prevent
or mitigate the consequences of error.
Understanding how human-factors
issues impact latent print examinations
can lead to improved procedures.
Human-factors research offers a
variety of models used to detect and
identify errors. Many of these models
focus on a systems approach where
errors are often viewed as consequences of a person’s working conditions—the work environment, for
example, or institutional culture and
management. Rather than focusing
solely on an examiner when an error
occurs, a systems approach would
look at underlying conditions—such as
inadequate supervision, inappropriate
procedures, and communications failures—to understand what role they
play when errors occur. Using a systems approach to understand why
errors occur will help agencies build
better defenses to prevent errors or
mitigate their consequences.
In September 2010, the National
Institute of Standards and Technology
Law Enforcement Standards Office
(OLES) recognized the need for further study on how systems-based
approaches—such as root-cause
analysis, failure mode and effects
analysis, and the Human Factors Analysis and Classification System
(HFACS)—could be used in forensic
settings. OLES initiated a contract
with Complete Consultants
Worldwide (CCW) to investigate the
HFACS framework and develop a
web portal to help forensic managers
collect and track error-related data.
HFACS and Swiss Cheese
Dr. Douglas Wiegmann and Dr. Scott
Shappell developed HFACS in the
[Figure: This model of error in latent print examination, adapted from Dr. James Reason’s 1990 “Swiss-cheese model” of error, shows how unguarded gaps in policy or procedure can ultimately result in an accident or failure. In Reason’s model, each slice of cheese represents a “defensive layer” that has the opportunity to prevent an error from impacting the outcome or to keep the error from leaving the system undetected.]
From Evidence Technology Magazine • January-February 2012
www.EvidenceMagazine.com
United States Navy in an effort to
identify why aviation accidents happened—and to recommend appropriate
action in order to reduce the overall
accident rate. The HFACS framework
was based on the Swiss-cheese model
of accident causation, the brainchild
of Dr. James Reason.
This Swiss-cheese model gets its
name because Reason proposed that
highly reliable organizations are analogous to a stack of Swiss cheese,
where the holes in the cheese represent
vulnerabilities in a process and each
slice represents “defensive layers”
that have the potential to block errors
that pass through the holes. Each
layer has the opportunity to prevent
an error from impacting the outcome
or to keep the error from leaving the
system undetected.
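The layered-defenses idea can be made concrete with a short simulation. This is only a sketch: the three defensive layers (verification, supervisory review, audit) and their catch probabilities are invented for illustration and do not come from the article.

```python
import random

def error_escapes(layer_catch_probs, rng):
    """Swiss-cheese model: an error reaches the outcome only if it
    slips through a "hole" in every defensive layer in the stack."""
    return all(rng.random() >= p for p in layer_catch_probs)

# Hypothetical layers, each catching an error with some probability.
layers = [0.90, 0.80, 0.50]
rng = random.Random(42)
trials = 10_000
escaped = sum(error_escapes(layers, rng) for _ in range(trials))

# With independent layers the escape rate should be near
# (1 - 0.90) * (1 - 0.80) * (1 - 0.50) = 0.01
print(escaped / trials)
```

Adding a layer, or tightening an existing one, multiplies down the chance that an error leaves the system undetected—which is the point of Reason’s stacked slices.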
Applying HFACS to Forensics
Working at the request of the OLES,
CCW has developed an online tool
that provides latent print managers
and supervisors with an easy and
efficient way to determine and document the factors that led to human
error in a latent print case. The web-based portal is now live and ready to
receive input from the latent print
community. Users will be able to use
this tool to identify “root causes” of
errors (or near-misses) by reviewing
a list of domain-specific issues and
selecting the ones that are applicable
to the incident in question.
The website will remain live to allow enough time to develop a database with a sufficient variety of entries. These responses will then be
studied in the hope of gaining further
insight into the nature of human error
in latent print examination. The
reporting process is anonymous, and
no data will be collected on human
subjects. The portal was also
designed so that it does not collect
any law enforcement sensitive data.
Perhaps the best way to gain
insight into the HFACS system for
latent prints is to take a look at the
HFACS outline. For our purposes, the
four “slices” in the original Swiss-cheese model have been renamed
Examiner Actions, Preconditions,
Supervisory, and Organizational
Influences.
Factors that Can Contribute to Human Error in Latent Print Examinations

1. Examiner Actions that Can Contribute to Error
   A) Errors
      - Skill-Based Errors
      - Decision Errors
      - Perceptual Errors
   B) Violations
      - Routine Infractions—“Bending” of the rules, tolerated by management
      - Exceptional Infractions—“Isolated” deviation, not tolerated by management

2. Preconditions that Can Contribute to Error
   A) Substandard Environmental Factors
      - Substandard Physical (operational and ambient) Environment
      - Substandard Technological Environment
   B) Substandard Conditions of the Examiner
      - Adverse Mental States—mental conditions that affect examiner performance
      - Adverse Physiological States—medical or physiological conditions that preclude safe examinations
      - Physical / Mental Limitations—situation exceeds the capabilities of the examiner
   C) Substandard Personnel Factors (Practice of Examiners)
      - Communication, Coordination, & Planning (Examiner Resource Management) Failures
      - Fitness for Duty

3. Supervisory Issues that Can Contribute to Error
   A) Inadequate Operational Process
      - Problems in Operations
      - Problems with Procedures
      - Inadequate Oversight
   B) Inadequate Supervision or Leadership
   C) Supervisor Planned Inappropriate Operations—unavoidable during emergencies but unacceptable during normal operations
   D) Supervisor Failed to Correct a Known Problem
   E) Supervisory Ethics or Violations—intentional actions that are willfully conducted by supervisors

4. Organizational Influences that Can Contribute to Error
   A) Inadequate Resource / Acquisition Management
      - Problems with Human Resources
      - Inadequate Monetary / Budget Resources
   B) Problems with the Organizational Climate
      - Problems with Structure of the Organization
      - Problems with Organization Policies
      - Problems with Organization Culture
If you are a latent print examiner
or supervisor, consider making our
data collection efforts pay off by
entering incidents into the portal.
Without input, this effort will not have
the impact that it could. And there is
a benefit to those who enter their
information: upon submission, a
report is generated that details the
factors the user identified as contributors to the incident being entered. This
could be a valuable printout to obtain
for the file, detailing those factors
you deemed important in a particular
latent print error event.
Log on today to check out the
latent print HFACS portal and consider
contributing to the project.
www.clpex.com/nist
About the Authors
Melissa Taylor is a management and
program analyst with the Law
Enforcement Standards Office (OLES)
at the U.S. Department of Commerce’s
National Institute of Standards and
Technology. Her work within the
Forensic Science Program focuses
primarily on fingerprint-related
research and integrating human-factors
principles into forensic sciences.
Taylor currently serves as a member
of the INTERPOL AFIS Expert
Working Group, associate member of
the International Association of
Identification, and co-chair of the White
House Subcommittee on Forensic Science’s
Latent Print AFIS Interoperability
Task Force.
Kasey Wertheim is an IAI Certified
Latent Print Examiner and a
Distinguished IAI member. He serves
on SWGFAST as their webmaster and
also hosts www.clpex.com, the
largest web resource for latent-print
examiners. He publishes the Weekly
Detail, an electronic newsletter focusing
on latent-print examination, to nearly
3,000 examiners every Monday morning.
And he is Co-Chair of the NIST Expert
Working Group on Human Factors in
Latent Print Examination. He can be
reached at:
[email protected]
“I am 100% certain of my conclusion.”
(But should the jury be certain?)
Written by Heidi Eldridge
FROM TIME OUT OF MIND,
forensic scientists have testified
to results with phrases like “one
hundred percent certain,” and felt
completely comfortable doing so.
After all, why would we testify under
oath to something that we did not
believe to be true? Then, in 2009, the
National Academy of Sciences report
on forensic science was released, and
in the aftermath, forensic scientists
began to be cautioned against using
this phrase and others like it. Many
embraced this change, while others
continue to ask: But why?
Many arguments have been made
addressing the lack of wisdom in using
a phrase such as “100% certain”. Here
is how the most common argument
goes: The assertion of one’s certainty
does not equate to a scientific stance.
Nothing in science is ever 100% certain. The cornerstone of scientific
exploration is the formation of a conclusion, which is open to falsification.
Here’s that argument in layman’s
terms: 1) I research a question. 2) I
come up with the answer that I feel is
the most appropriate given the data I
had to examine. 3) I share my results.
4) Other people try to prove me wrong
or attempt to fine-tune my answer.
Under this concept of science, the
answer is never absolute. It is always
subject to further testing, interpretation, and challenge. Therefore (the
argument goes), if I claim that my
result is 100% certain, I am tacitly
admitting that my result is not scientific. For, by definition, a scientific
conclusion cannot be absolute.
This argument is fine, as far as it
goes, but it fails to resonate with
some practitioners, particularly those
who were never scientifically trained,
and it fails to address the real crux of
the problem: We must consider our
audience.
When we testify in a court of law,
our audience is not other scientists.
Our audience consists of jurors.
Laypeople. Watchers of CSI. The
majority of these people are not scientifically trained. They expect us to
bring to the courtroom that training,
experience, and knowledge. And they
look to us with a faith that, for some,
borders on reverence. And because of
this faith, we bear a huge burden of
responsibility: Clarity.
Our words matter. Language is a
powerful weapon. It can be used to
inform, but it can also be used to persuade or mislead. We must remember
that many of the phrases we use as
scientists are a kind of shorthand for
larger concepts that other scientists
understand. But juries do not have that
level of understanding. Juries accept
them at face value.
When we say, “Fingerprint comparison science has a zero error rate,” we
might mean that—although we know
people can and do make mistakes—
the application of the process, if followed correctly, will lead to the correct conclusion. But what the jury
hears is, “Fingerprint conclusions are
never wrong.”
When we say, “Fingerprints are
unique,” we might mean there is a
great deal of biological research that
supports the random formation of fingerprints, and in the limited research
we have done, we have not found two
that were exactly the same… so we
conclude that it is extremely unlikely
that two fingerprints could be identical. But what the jury hears is, “It is
a proven fact that every fingerprint is
different.”
Similarly, when we say, “I am 100%
certain of my conclusion,” we might
mean that we have conducted a
careful examination, reached the best
conclusion possible with the data
available, and that we would not have
reported that conclusion unless we
were confident that we had done our
work well. But what does the jury
hear? They hear, “I’m an expert, and
I’m telling you that this conclusion is
fact and cannot possibly be wrong.”
But the truth of the matter is,
sometimes we are wrong. And what
we are stating for the jury is not fact;
it is opinion. To be clear, the opinion
is based on something—it is not just
made up out of thin air. But it is still
opinion. And to state it in terms that
give it the veneer of fact is both overstating and just plain misleading.
Remember your audience: The jury
that is accepting what you say at face
value. They need you to be precise in
your use of language so they understand correctly. It is okay to say that
you are certain—if you qualify the
statement. Talk about your personal
level of confidence in your conclusion.
Talk about the work you did to reach
your conclusion and why you feel it
is correct. But do not imply that your
opinion is an incontrovertible fact.
Juries do not know the subtext
behind our conventional phrases. All
they hear are the words we say. We
need to be certain that those words
truly convey our meaning. We owe it
to the jurors who have placed their
faith in us, the experts.

About the Author
Heidi Eldridge is a Certified Latent
Print Examiner with the Eugene (OR)
Police Department in Eugene, Oregon.
She has successfully defended fingerprint science in a post-NAS Daubert-style Motion to Exclude hearing in
Oregon and has been writing and
teaching on the subject to help others
understand how to meet these challenges. She can be reached at:
[email protected]
From Evidence Technology Magazine • March-April 2012
www.EvidenceMagazine.com
Practitioner Error vs. Deficient Procedures
Written by Michele Triplett and William Schade
FOR MANY YEARS, forensic science has embraced the idea that any
errors made were due to practitioner shortcomings or malfeasance.
Testimony reflected that belief and we were trained that “two
competent examiners must always arrive at the same conclusion”.
As we entered the 21st Century, the accreditation requirements and
a general adherence to scientific principles made agencies and
practitioners more aware of concepts such as root-cause analysis
and process improvement. A practitioner is certainly responsible
for an error that is the result of conclusions based on his own
judgment—but thorough analysis may determine that the practitioner
is not solely responsible. When practitioners are required to use
judgment, results may not always meet the expectations of others.
If specific procedures and results are desired, then clearly
stating expectations may be an easy, yet underutilized, solution.
Proper root-cause analysis and suitable corrective action are
essential for those committed to quality results.

Introduction
An error is not always the result of poor decisions. Indeed, a
lack of stated expectations by management is a systemic error, and
may contribute to an error made by a practitioner. This deficiency
requires that policies and procedures be rewritten. A thorough
investigation into both the system and the practitioner’s job
performance should be conducted to find the cause of an error and
to establish appropriate corrective action. Accepting
responsibility for an error should begin at the management level
and progress to examine the practitioner’s actions. Only then can
an appropriate solution be found. The list of possible causes of
errors (in the accompanying chart) is a starting point and can be
expanded further.
Low tolerance levels and overconfidence may appear to be
practitioner errors, but it is the responsibility of an agency (or
discipline) to set the criteria and parameters. The agency must
ensure that practitioners understand expectations and are capable
of achieving them.
Bias may be considered a system error because an agency or
discipline should have measures to reduce the potential for bias.
Appropriate protocols (see 1c in the chart) can diminish pressures
and influences that may affect conclusions. An agency could
require additional review of a situation in which bias may have a
greater influence. Applying a deductive scientific method to
derive a conclusion can diminish bias as well. Such methods
include: relying on objective data; attempting to falsify an
assumption instead of trying to confirm it; considering data that
does not fit; and reviewing the process as well as the conclusion,
as opposed to simply reproducing the conclusion.
Even a difference of opinion could be a system error when
expectations are vague. If a difference of opinion is troublesome,
then an agency (or discipline) should set parameters to control
deviation. An agency can establish a policy that conclusions must
have enough justification to hold up to the satisfaction of other
practitioners.

Root Cause Analysis
1) System Errors
   a) Use of a deficient method.
      i) Allowance of human judgment in lieu of a defined method or criteria.
      ii) Poorly stated or improperly communicated expectations.
         (a) The method or criteria were followed but did not produce desired results.
         (b) The criteria may need to account for differences of opinion and/or different tolerance levels.
   b) Practitioner competence not established (knowledge, ability, and skill).
      i) Lack of adequate training.
      ii) Inadequate competency testing prior to allowing a practitioner to perform casework.
   c) Lack of appropriate protocols, environment, and tools.
      i) Failure to address and limit external pressure or reduce bias.
      ii) Inadequate lighting or poorly maintained equipment.
      iii) Unavailability of appropriate consultation.
2) Practitioner Errors (Understanding the Criteria but Not Applying It)
   a) Medical problem that influences results.
      i) Degradation of cognitive abilities.
      ii) Use of medication.
   b) Lack of thoroughness.
      i) Carelessness, laziness, complacency.
      ii) Physical or mental fatigue.
      iii) Standards and procedures not followed.
   c) Ethics.
      i) Intentionally disregarding the method.
      ii) Fabrication / Falsification.

From Evidence Technology Magazine • May-June 2012
www.EvidenceMagazine.com

Once the cause behind an unacceptable result is established,
suitable corrective action (controls to prevent unacceptable
results from recurring) can be taken to
improve any system, especially one
that requires human decision-making.
Corrective action may include revising
procedures, establishing more specific
criteria, additional training, and
implementing competency testing.
Significance of Errors
It may be important to determine the
significance of an error. Suppose an
error occurred but was detected prior
to any ill effects. There would be no
actual consequences from the error,
but the potential consequences could
have been substantial. The significance
of an error should be determined by
considering the potential effects in
lieu of the actual effects, so that serious errors are addressed appropriately.
In both the medical field and
forensic comparative sciences, some
may assume a false-negative decision
is not significant since no one is
given an incorrect medical treatment
or falsely imprisoned due to the error.
In general, this idea is known as the
precautionary principle: “It is better
to err on the side of caution.”
Forensic science has often quoted
Blackstone’s ratio: “…it is better that
ten guilty persons escape, than that
one innocent suffer.”
It is true that no one is wrongfully
treated or falsely imprisoned due to a
false-negative conclusion, but it may
leave a patient untreated or a suspect
free in the community to commit more
crimes. On the other hand, an erroneous exclusion may be harmless if a
latent print, shoe print, or tire track
should have been identified to the
victim. Until an agency gains experience in determining the root cause of
an error, perhaps it is better to
address all errors instead of trying to
determine the significance of an error.
Discussion
A hypothetical example can demonstrate this form of root-cause analysis
and possible corrective action. Suppose
some analysts in an office believe a
piece of evidence is linked to a specific exemplar, while others disagree.
Of course, varying conclusions are
not acceptable. It is tempting for
management to try to decide which
practitioners are in error. Evaluating
the conclusion against the written criteria will determine where the error
lies. Analyzing the six sections from
the chart will determine potential reasons behind errors.
Question 1: Were clear parameters
in place to establish the identification?
One reason people disagree is because
they do not have a clear idea of the
criteria that must be met. Without a
clearly stated expectation, practitioners are free to use self-imposed criteria
that may differ from person to person.
If written criteria did not exist, then
this may have contributed to the
inconsistent conclusions (an error by
management in not stating an expectation). A standard could be implemented, requiring that conclusions be
based on clearly visible data—not
training and experience; or that conclusions require general consensus.
Question 2: Was each practitioner
competent? Many times the competency of practitioners is presumed. If
this is the case, then competency has
not been established and this may
have led to the problem (an error by
the agency). The agency should
implement a formal system to establish practitioner competency. This is
a basic requirement of accreditation
and should be universally adopted by
agencies performing forensic comparative analysis.
Question 3: Were appropriate
tools provided? If practitioners use
differing tools, perhaps a 4.5x magnifier compared to digital enlargement,
then it is possible for conclusions to
differ between analysts. Management
should ensure that practitioners have
appropriate tools available, and are
adequately trained to use the tools
properly.
Question 4: Did one or more of
the examiners have medical or visual
issues? Although not a frequent
occurrence, this is a realistic concern,
and it should not be dismissed as a
possibility.
Question 5: Did one or more of
the examiners lack thoroughness? If
an experienced practitioner becomes
complacent, thoroughness may
decrease. It can be difficult to find
suitable corrective action for a practitioner who lacks thoroughness. Many
supervisors simply ask practitioners
to try harder, but this seldom works.
Implementing additional safeguards
to ensure thoroughness can resolve
this problem. This may include requiring additional documentation
to ensure that practitioners perform
work more methodically. Changing
an environment can reduce pressures
and limit distractions that may contribute to a lack of thoroughness.
Limiting extra duties may help a person focus on a specific responsibility
as well.
Question 6: Were the errors due to
ethical issues? It may seem unlikely
that ethics would be the problem, but
it should always be considered.
The answers to these questions
show there are several reasons analysts could have differing conclusions.
The cause of an error may be systemic
and not simply a practitioner error. A
lack of good policies and procedures
(i.e., the cause) by an agency can result
in an error made by a practitioner (i.e.,
the resulting problem).
Conclusion
Quality results come from a quality
system. Just as the airline industry
learns from crash data and implements
better procedures, so too should
forensic science learn from the errors
as they occur and implement better
practices to mitigate their occurrence.
In the past, practitioners have been
blamed for most unacceptable results.
After reassessing various situations,
it can be concluded that many errors
can be avoided if suitable expectations
and procedures are in place. Agencies
and disciplines should continually re-evaluate their expectations and procedures in an effort to improve. True leadership is displayed by accepting responsibility for and correcting systemic mistakes.

About the Authors
William Schade is the Fingerprint
Records Manager of the Pinellas
County Sheriff’s Office in Largo,
Florida. He has experience in all
areas of biometric identification.
Michele Triplett is the Latent Print
Operations Manager for the King
County Regional AFIS Identification
Program in Seattle, Washington. She
has worked for the program for the
past 20 years.
The Weight of Subjective Conclusions
Written by Michele Triplett
FORENSIC CONCLUSIONS
are typically expressed as being
“matches” or “identifications”.
Without statistical probabilities, these
conclusions may sound like facts but
are more accurately categorized as
deductions, inferences, or affiliations.
Most forensic conclusions are based
on such a wide variety of factors that
they are not currently suitable to
being represented mathematically.
This has led some people to question
the value of forensic conclusions,
holding that the conclusions are
merely the analyst’s personal beliefs
and not solid scientific conclusions.
Is this a valid concern?
The answer may lie in understanding the benefits and limitations
behind different types of statistical
probabilities. There are three basic
types of statistical probabilities.
These are known as classical, empirical, and subjective probabilities.
Classical probabilities are commonly used when there are a finite
number of equally probable events,
such as when tossing a coin. When
tossing a coin, the probability of the
outcome, either heads or tails, is one-half or 50 percent (one chosen outcome divided by the possible number
of outcomes).
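In code, the classical calculation is just the count of favorable outcomes over the count of equally likely outcomes. A minimal sketch (the function name is ours, added for illustration):

```python
from fractions import Fraction

def classical_probability(favorable: int, possible: int) -> Fraction:
    """Classical probability: favorable outcomes divided by the
    number of equally likely possible outcomes."""
    return Fraction(favorable, possible)

# A coin toss: one favorable outcome (heads) out of two possible.
print(classical_probability(1, 2))  # 1/2, i.e. 50 percent
```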
There are times when classical
probabilities do not accurately represent the probability of an event happening, either because there are infinite possible outcomes or because the
likelihoods of the outcomes are
unequal. In these situations, empirical probabilities are used to estimate
the possibility of the event.
When using empirical probabilities, the frequency of an event is estimated by observing a sample group
rather than considering the possible
number of outcomes. As an example,
consider the probability of it raining
in Texas. The classical probability
would consider the possible outcomes (rain or no rain), and state
there is a one-half, or 50 percent,
chance of rain. This is clearly inaccurate because the likelihood of each
happening is not the same. The probability would be more accurately estimated by examining a sample group
of the number of days it has rained in
the past year. Obviously, examining
probabilities in this manner may still
overlook other important information.
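An empirical probability, by contrast, is simply an observed frequency in a sample. Continuing the rain example as a sketch—the 88-of-365 count is hypothetical, not data from the article:

```python
def empirical_probability(occurrences: int, observations: int) -> float:
    """Empirical probability: how often the event actually occurred
    in the observed sample."""
    return occurrences / observations

# Hypothetical sample: it rained on 88 of the past 365 days.
p_rain = empirical_probability(88, 365)
print(round(p_rain, 2))  # 0.24 -- far from the classical 0.50
```

As the article notes, even this estimate can overlook information—season or region, for instance—which is where subjective probabilities enter.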
Certain situations are better represented by allowing the user to determine the probability of an event
based on knowledge not considered
in a mathematical equation. These are
known as subjective probabilities.
Accurately diagnosing a skin rash
may involve analyzing the appearance of the rash, additional symptoms, recent exposures, the person’s
occupation, and past occurrences of
similar rashes. A doctor may diagnose a rash based on all of these factors without formally associating
numerical weights with each factor.
This is acceptable and highly valued
if used properly and in the right situation. The value of subjective probabilities is that they can assess more
information than currently accounted
for in a mathematical equation.
No single type of statistical probability is superior to another. The type
of probability preferred is the one
that most accurately represents the
situation at hand. Numerically based
probabilities may sound more persuasive, since there are objective weights
associated with each factor, but they
can be artificially influential if the
weights are inaccurate or if the equation does not account for all relevant
information.
From Evidence Technology Magazine • July-August 2012
www.EvidenceMagazine.com
Consider the probability of getting
an “A” in a class. There are five possible outcomes (i.e., A, B, C, D, and
F). The classical probability would
say the probability of getting an A is
one-fifth, or 20 percent. This would
be inaccurate if the likelihood of
attaining each grade is not the same.
Empirical probabilities may more
accurately represent the situation
because the frequency of past grades
can be considered. However, one
problem with empirical probabilities
is that past events may not represent
future events unless all factors are
similar. Suppose someone had good
grades in the past but currently is not
motivated to study. In this case, an
empirical probability may not accurately represent the current situation.
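The grade example can be made concrete with a short sketch. The record of past grades below is hypothetical; it shows how an empirical frequency can diverge sharply from the one-in-five classical figure:

```python
from collections import Counter

grades = ["A", "B", "C", "D", "F"]      # five possible outcomes
classical_p_a = 1 / len(grades)          # 1/5 = 20 percent

# Empirical estimate from a (hypothetical) record of past grades.
# As the article cautions, past grades predict future ones only
# if the relevant factors, such as motivation, remain similar.
past_grades = ["A", "A", "B", "A", "C", "A", "B", "A"]
counts = Counter(past_grades)
empirical_p_a = counts["A"] / len(past_grades)   # 5/8 = 62.5 percent

print(f"Classical: {classical_p_a:.1%}, Empirical: {empirical_p_a:.1%}")
```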
Instead, subjective probabilities
may be able to account for additional
factors that cannot be considered
with classical or empirical probabilities, allowing for the best representation of the information. One concern
associated with subjective probabilities is that a person may base his
probability on a gut feeling, a guess,
or on intuition, rather than on current
relevant information. A common
example is when a person gives a
subjective probability of the Yankees
winning their division. The person is
typically basing this probability on
personal beliefs and desires, resulting
in a personal opinion instead of a
sound conclusion.
Those trained in science understand the need to set aside personal
feelings and rely only on information that can be
demonstrated to others. Stating a subjective probability of the Yankees
winning their division based on relevant information, such as the number
of injured players, would result in a
valid logical deduction.
The value of forensic conclusions
is not in their ability to be numerically quantified but rather in the soundness behind the conclusion. In certain
situations, subjective probabilities
may give the most accurate representation of the information at hand.
About the Author
Michele Triplett is the Latent Print
Operations Manager for the King
County Regional AFIS Identification
Program in Seattle, Washington. She
has worked for the program for the
past 20 years.
To the Exclusion of All Others
Written by Michele Triplett
FOR THE PAST FEW YEARS,
there has been ongoing debate
about whether pattern evidence
identifications are “to the exclusion
of all other sources”. The concern is
about overstating conclusions, and
using the phrase to the exclusion of
all others implies that a conclusion is
irrefutable with no possibility of error.
The same concern has been stated
regarding the use of words like definite,
absolute, conclusive, 100-percent
confidence, or 100-percent certainty.
2008 Hull Frye-Mack Hearing
Prior to the Frye-Mack hearing for
State of Minnesota v. Jeremy Jason
Hull, conclusions of identity for fingerprint impressions were considered by
most practitioners to be “to the exclusion of all others”. The Frye-Mack
testimony stated a fingerprint impression could be identified to a source
but not individualized. This distinction
was made because the analysts felt
that the word individualize presented
the conclusion as a fact while the
word identify left the door open for
the remote possibility that someone
else possessed a similar arrangement
of friction ridge detail.
The effort to make this distinction
was not a matter of questioning the
principle of uniqueness; instead, it was
highlighting that the amount of information
needed to determine uniqueness
had not been established. At some
point, the information under consideration may be so minimal or ambiguous
that it becomes plausible that another
source could have produced a similar
pattern.
An additional reason for using the
term identify over individualize was to
specify that the unknown impression
was not compared to every possible
source.
SWGFAST Modification
In September 2008, based on the ideas
presented in the Hull case, the
Scientific Working Group on Friction
Ridge Analysis, Study and Technology (SWGFAST) started the process
of removing the phrase “to the exclusion of all others” from their definition of individualization. However,
SWGFAST did not differentiate
between the meaning of identification
and individualization as the Hull
Frye-Mack testimony did.
The IAI
On February 19, 2009, in a response
to the National Academy of Sciences
report Strengthening Forensic Science
in the United States: A Path Forward,
the president of the International
Association for Identification (IAI),
Robert Garrett, wrote a letter to IAI
members stating: “Although the IAI
does not, at this time, endorse the use
of probabilistic models when stating
conclusions of identification, members are advised to avoid stating their
conclusions in absolute terms when
dealing with population issues.”
Practitioners’ Views
Many practitioners put these events
together and claimed they could no
longer exclude all others when making a comparison. Others disagreed
and felt there was nothing to forbid
them from making a determination
“to the exclusion of all others”.
SWGFAST had removed the phrase
from their terminology but they had
not specified that it could not be stated.
Similarly, the IAI letter was not a formal resolution nor did it specifically
say “to the exclusion of all others”.
Those opposed to the phrase claim
it is a statement of fact, where no
possibility exists that the impression
could have come from another source.
Others think of it as a statement indicating the range of those under consideration, acknowledging that conclusions are never absolute.
Everyone would agree that physically comparing an impression to all
individuals is unrealistic. Nevertheless,
some maintain their conclusions are
to the exclusion of all others regardless of whether it is stated. Those
people reason that if all fingerprints
are accepted as unique, and they have
concluded that a fingerprint impression was made by a certain source,
then they are excluding everyone else
—not physically, but theoretically.
The possibility of an alternative conclusion is so remote that it can be disregarded as implausible. If another
source could have plausibly made the
impression, then the analyst would
have reported the comparison as inconclusive.
The argument that a person would
have to compare a fingerprint impression to every person in order to
exclude all others may apply to exact
sciences, but fingerprint comparisons
are not an exact science. Fingerprint
comparisons are logical deductions
where appropriate rules of inference
are permitted; viewing all possibilities
is unnecessary.
Evidence Technology Magazine • November-December 2012 • www.EvidenceMagazine.com
Conclusion
Regardless of which view a person
holds, clearly articulating the strength
of a conclusion is essential. Stating
that a conclusion is "to the exclusion
of all others" may be an overstatement.
Differentiating between the words
identify and individualize may be one
solution, but attorneys and jurors may
hear the same message regardless of
the term used and perceive the
conclusion as a fact instead of a
deduction. This misrepresentation
may invite debate between opposing
counsel and undermine the
credibility of otherwise accurate testimony.
Another suggestion has been to
state that conclusions are the opinion
of the analyst. Labeling conclusions
as opinions helps avoid overstating
results but it may severely undermine
a conclusion if it is perceived as being
the personal opinion of the analyst
and not a scientific opinion that would
be corroborated by others as clearly
beyond debate.
Perhaps a better way to state any
positive pattern evidence conclusion
is to use a statement instead of simplifying the conclusion down to a
single word that can be easily misconstrued. Some possibilities may be:
“The information between the
impressions (latent prints, tire tracks,
toolmarks, etc.) indicates that the
impression was deposited by the given
source.” Or…
“After analyzing the data, the only
plausible conclusion I can arrive at
is that this impression was made by
this source.” Or…
“I have thoroughly examined the
data between the impressions and I
would attribute impression A as
coming from source B.”
Using a statement in lieu of
a single word for conclusions may be
beneficial because the weight of the
conclusion can be indicated along
with the conclusion itself. Phrases
such as these present a belief grounded in reasoning while one-word
answers present a conclusion as
absolute fact.
About the Author
Michele Triplett is the Latent Print
Operations Manager for the King
County Regional AFIS Identification
Program in Seattle, Washington. She
has worked for the program for the
past 20 years.