OPINION
NATURE|Vol 464|29 April 2010
CORRESPONDENCE
French research also being stifled by autocracy
Your Editorial ‘Scientific glasnost’
(Nature 464, 141–142; 2010)
highlights parochial anachronisms
in the Russian Academy of
Sciences that are obstructing
the development of a knowledge-based economy. Russia is
not alone: science in France
has been experiencing similar
problems.
Unfortunately for Europe,
French politicians do not seem
to have properly understood
that research is crucial for an
efficient economy. Germany is not
the only country to demonstrate
that research investment of at
least 3% of gross national product
(GNP) has positive short-term
and medium-term effects on the
country’s industrial output, thanks
to inventions and start-ups —
less populous nations such as
Switzerland and Finland have
shown the same.
However, in 2006 France
spent only about 2.1% of GNP
on research and development
— a proportion more typical of
a developing country. Former
socialist prime minister Lionel
Jospin had planned, before his
2002 defeat, to expand this
to more than 3% of GNP. And
in 2007, socialist presidential
candidate Ségolène Royal
vowed to put research at the top
of the government’s priorities;
however, she was not elected.
So the decisive victory by a
left-wing coalition in France’s
regional elections last month
offers some hope. Even so, the
issue of research was notably
absent from the pre-election
evaluation of regional priorities
(Nouvel Observateur, 11 March 2010).
The old French devils of
centralism, dirigisme and
corporatism in politics and
science still prevail. France’s
main research body, the CNRS,
has about 30,000 members
and was once an independent
agency, relatively successful in
basic research. It is now being
suffocated by integration into a
university system that has shown
little competence in managing
top-level research. The recently
acquired autonomy of local
universities is being undermined
by plans for their fusion into
super-universities.
Also dispiriting is the French
government’s plan to take
responsibility for research
investment away from the regions
once more — despite the fact
that regional investment has
just given an essential boost to
local public research in places
such as Strasbourg, Toulouse,
Marseille and Montpellier (see,
for example, http://go.nature.com/XDPeN2).
Patience is necessary in
Russia, where problems may
be explained by the country’s
recent history. The failure
of present-day France to
comprehend the issues and
implement the policies necessary
for economic success is more
dangerous and distressing.
Klaus Scherrer Institut Jacques
Monod, Bâtiment Buffon, 15 rue Hélène
Brion, 75205 Paris Cedex 13, France
e-mail: scherrer.klaus@ijm.univ-paris-diderot.fr
Forensics: stronger scientific scrutiny needed in Britain
I congratulate Nature for
highlighting problems that exist
in forensic science, and in low-copy-number DNA profiling in
particular (Nature 464, 325 and
347–348; 2010).
Any move intended to
improve matters must, in the
first instance, be made within
the scientific community. As
the lord chief justice William
Murray told an English court
in 1782: “In matters of science,
the reasoning of men of science
can only be answered by men of
science.”
The United Kingdom and other
jurisdictions must recognize
the defects identified by the US
National Academy of Sciences
report ‘Strengthening Forensic
Science in the United States:
A Path Forward’ (2009) and
others. They must involve the
wider scientific community in the
validation of forensic techniques,
and in scrutinizing the use of
those techniques in forensic
investigations.
Peter Neufeld and Barry Scheck
correctly suggest (Nature 464,
351; 2010) that, although an
accreditation and certification
process may be part of the
solution to the many underlying
problems in forensic practice, this
is not a panacea. It is essential for
any proposed scheme that the
standards applied are based on
sound science.
It is time for all jurisdictions to
adopt a common approach, using
the proposed US model of a truly
independent and scientifically
sound national institute. This
has not so far been achieved —
neither is it likely to be — by
what the US National Academy of
Sciences describes as "an
extremely complex and
decentralized system, with
various players, jurisdictions,
demands, and limitations”. A
network of such national institutes
would enable the development of
robust international standards
that could then be tailored to local
practice.
The UK response to the
documented and public failures
in forensic science has been to
appoint an independent regulator,
Andrew Rennison. The regulator,
an ex-policeman funded by the
Home Office, chairs an advisory
council whose scientific input
comes from within the forensic
community and from the suppliers
of services to the police. The
regulator-commissioned review
concluded that the low-template
DNA techniques were fit for
purpose (see http://go.nature.com/3shVJH).
The introspective and isolated
position of forensic science
within the United Kingdom is
further shown by its removal
from the Science, Engineering
and Manufacturing Sector Skills
Council. Forensic science has
been placed, instead, within the
Skills for Justice Sector Skills
Council, where it is the only
‘scientific’ component — thereby
removing an opportunity for
external scientific scrutiny.
I look forward to the
development of a satisfactory
model in the United Kingdom. In
the short term, a fresh, deeper
and wider look at the use of
low-template DNA techniques,
particularly in casework, is
overdue.
Allan Jamieson The Forensic Institute,
Baltic Chambers, 50 Wellington Street,
Glasgow G2 6HJ, UK
e-mail: [email protected]
Forensics: experts disagree on statistics from DNA trawls
Statistical analysis in DNA-fingerprint matching is a case in point of the need for more science in forensics (Nature 464, 325; 2010).
In ‘confirmatory cases’,
suspects’ DNA is found to match
that from the crime scene.
A serious problem for crime
laboratories, however, is how to
present the evidentiary value
of DNA-profile matches when
those matches arise from trawls
of the DNA database, sometimes
referred to as ‘cold hits’. The
issue stems from differences
in ‘frequentist’ and Bayesian
statistics, and is beyond the
ability of most courts to
adjudicate.
Statisticians of the frequentist
school argue that a trawl involves
many independent trials for
matching, so that a match
from a cold hit within a database
of N individuals, each with a
match probability P, provides a
hit with probability NP. Bayesian
statisticians, on the other hand,
argue that a match between
suspect and crime scene
provides a likelihood ratio that
is independent of whether the
match came from a trawl or
not — in which case the evidentiary value of a hit is equal to P in both cold and confirmatory cases.
The differences can be profound. In one case in California (The People v John Puckett), now on appeal, the Bayesian value of 1 in 1 million was allowed, whereas entry of the frequentist value of 1 in 3 was not permitted.
Some panels of experts have recommended the frequentist NP value (including the US National Research Council's Committee on DNA Forensic Science and the US Department of Justice's DNA Advisory Board). Others recommend the Bayesian value of P.
Crime laboratories are frequently unsure of which value to present, or whether to report both and leave it to the attorneys and judges. The proposed US National Institute of Forensic Science could help in solving this kind of problem.
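The arithmetic dividing the two schools can be sketched in a few lines of Python. This is an illustrative sketch only: the match probability and database size are hypothetical values chosen to mirror the 1-in-1-million and 1-in-3 figures quoted above, not figures from any real case.

```python
# Contrast the frequentist and Bayesian presentations of a cold-hit
# DNA match. All numbers are hypothetical.

def frequentist_trawl_value(match_probability: float, database_size: int) -> float:
    """Frequentist view: a trawl of N profiles is N independent trials,
    so the chance of a coincidental hit is roughly N * P for small P."""
    return database_size * match_probability

def bayesian_match_value(match_probability: float) -> float:
    """Bayesian view: the likelihood ratio of the match itself depends
    only on P, regardless of how the suspect was first identified."""
    return match_probability

# A hypothetical profile with a 1-in-1,000,000 random-match probability,
# found by trawling a database of 300,000 profiles.
P = 1e-6
N = 300_000

np_value = frequentist_trawl_value(P, N)   # 0.3, i.e. about 1 in 3
p_value = bayesian_match_value(P)          # 1e-6, i.e. 1 in 1,000,000

print(f"Frequentist NP value: 1 in {1 / np_value:.0f}")
print(f"Bayesian P value:     1 in {1 / p_value:.0f}")
```

The same match is thus presented as either overwhelming (1 in a million) or weak (1 in 3), which is the gap the Puckett court had to navigate.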
Charles Taylor Department of Ecology
and Evolution, University of California,
Los Angeles, California, USA
Paul Colman Los Angeles County
Sheriff’s Department, 1800 Paseo
Rancho Castilla, Los Angeles,
California 90032, USA
e-mail: [email protected]
Statisticians and historians should help improve metrics
To develop and apply adequate
metrics (Nature 464, 488–489;
2010), a proper understanding of
the methodology of measuring
and of the phenomenon to be
measured is essential.
Key contributors to the analysis
of scientific metrics may therefore
be statisticians and historians of
science. Both groups urge caution
in applying science metrics (see,
for example, B. Lightman et al. Isis
100, 1–3; 2009).
When substantiating claims
about the prominence of
researchers, science historians
draw on publication numbers,
citation numbers, invitations,
editorial duties, awards,
promotions, grant funding,
membership of academies,
honorary titles, institutional
affiliations and links to other
prominent scientists. But they
rarely use these measures alone:
rather they are used as indicators
to supplement and vindicate
thorough analysis (H. Kragh An
Introduction to the Historiography
of Science Cambridge Univ. Press,
1987).
Statisticians would add that,
for most of the present popular
measures, there is no properly
defined model of the relation
between variables, little attention
to confounding factors, and
ignorance about the uncertainty
of the measures and how that
uncertainty affects rankings
derived from them (R. Adler et al.
Statist. Sci. 24, 1–14; 2009).
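The statisticians' point about unquantified uncertainty can be illustrated with a small simulation. Everything here is synthetic and hypothetical: three researchers with nearly identical underlying citation rates are repeatedly ranked by the summed citations of a finite sample of papers, and the ranking proves unstable.

```python
# Illustrative sketch (synthetic data): sampling noise alone can
# scramble metric-based rankings of closely matched researchers.
import random

random.seed(1)

# Hypothetical 'true' mean citations per paper for three researchers.
true_mean_citations = {"A": 10.0, "B": 11.0, "C": 12.0}
papers_per_researcher = 20

def observed_ranking() -> tuple:
    """Rank researchers by the summed citations of one simulated
    sample of papers (per-paper noise has standard deviation 8)."""
    scores = {
        name: sum(random.gauss(mu, 8.0) for _ in range(papers_per_researcher))
        for name, mu in true_mean_citations.items()
    }
    return tuple(sorted(scores, key=scores.get, reverse=True))

# Repeat the 'evaluation' many times and count how often the
# true order C > B > A is actually recovered.
rankings = [observed_ranking() for _ in range(1000)]
stable = rankings.count(("C", "B", "A"))
print(f"True order C>B>A recovered in {stable}/1000 simulated evaluations")
```

A metric reported without an uncertainty estimate would present each of these shuffled orderings as equally authoritative.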
In addition, the feedback
mechanisms that arise when
scientists change their publishing
and citing behaviour in order to
maximize their metric outcome
will be a major challenge in
developing realistic models. For
predictions from past to future
successes, these challenges will
intensify.
Being aware of these
shortcomings of scientific metrics
is crucial for any endeavour that
aims to improve them.
Hanne Andersen Department of
Science Studies, Aarhus University,
CF Moellers Alle bld. 1110,
8000 Aarhus C, Denmark
e-mail: [email protected]
Nature’s readers comment online
A selection of responses posted on Nature’s website to ‘Let’s make science
metrics more scientific’ by Julia Lane (Nature 464, 488–489; 2010)
Konrad Hinsen said:
Two fundamental problems with metrics in
science are that quantity does not imply quality,
and that short-term impact does not imply
long-term significance. The real value of many
scientific discoveries often becomes apparent
only many years later. It would be interesting to
evaluate metrics by applying them to research
that is a few decades old. Would they have
identified ideas and discoveries that we now
recognize as breakthroughs?
Long-term services to the scientific community
are undervalued by current metrics, which
simply count visible signs of activity. Take the
development of scientific software: a new piece
of software can be the subject of a publication,
but the years of maintenance and technical
support that usually follow remain invisible.
e-mail: [email protected]
Martin Fenner said:
Another important motivation for improving
science metrics is to reduce the burden on
researchers and administrators in evaluating
research. The proportion of time spent doing
research versus time spent applying for
funding, submitting manuscripts, filling out
evaluation forms, undertaking peer review and
the rest has become ridiculous for many active
scientists.
Science metrics are not only important for
evaluating scientific output, they are also great
discovery tools, which may turn out to be more
useful. Traditional ways of discovering science
(such as keyword searches in bibliographic
databases) are increasingly superseded by
non-traditional approaches that rely on social
networking tools for awareness, evaluations
and popularity measurements of research
findings.
e-mail: [email protected]
Luigi Foschini said:
In the same issue, you run a News Feature on
large collaborations in high-energy physics
(Z. Merali Nature 464, 482; 2010) — some
10,000 researchers in the case of the Large
Hadron Collider (enough to fill a small city).
People who build enormous instruments of
course do great work that enables important
parameters to be measured.
But the practice of listing as authors on
papers anyone who just tightens bolts or brings
in money is killing the concept of authorship
and hence any chance of measuring the
productivity of individuals. Should I include
Steve Jobs on papers I publish simply because I
use a Mac to analyse data and to write articles
and books?
e-mail: [email protected]
Björn Brembs said:
No matter how complex and sophisticated,
any system is liable to gaming. Even in an ideal
world, in which we might have the most
comprehensive and advanced system for
reputation-building and automated
assessment of the huge scientific enterprise
in all its diversity, wouldn’t the evolutionary
dynamics engaged by the selection pressures
within such systems demand that we keep
randomly shuffling the weights and rules
of these future metrics faster than the
population can adapt?
e-mail: [email protected]
Readers can now comment online on anything
published in Nature. To join in this debate, go to
go.nature.com/4U9HWS.