John Green seminars

How to use research metrics to improve
research performance
13 - 17 February 2017
Presentation to various organizations in Moscow & Samara
John Green
formerly Chief Operating Officer, Imperial College London
now Life Fellow, Queens’ College Cambridge
[email protected]
Benchmarking or ranking?
Rankings are compiled by various providers; each creates relative
positions based on certain weightings – some even allow a
university to choose its own weightings to create its own ranking.
Unsurprisingly, a Rector chooses the ranking in which their university
comes out best when marketing the university to new students.
Benchmarking has a very different purpose: it is a precise exercise
that measures activity (e.g. funding, and outcomes such as impact)
through the use of carefully defined metrics.
Benchmarking results in higher ranking
Rankings are important – they raise the profile of an institution –
and we all want to be as high as possible in rankings.
Moving higher in rankings is the result of improved performance.
Higher performance comes from using benchmarking.
To benchmark, one needs precise measures of activity (e.g. funding,
and outcomes such as impact) through the use of carefully defined
metrics.
The Snowball partner universities believed that there was
no robust methodology for benchmarking.
Snowball Metrics enable robust,
university-driven benchmarking
Snowball Metrics enable accurate benchmarking to drive
quality and efficiency.
Universities need standard metrics to benchmark themselves
relative to peers, so they can strategically align resources to
their strengths and weaknesses
Snowball Metrics approach
• Bottom-up initiative: universities define and endorse metrics to
generate a strategic dashboard. The community is their guardian
• Draw on all data: university, commercial and public
• Ensure that the metrics are system- and tool-agnostic
• Build on existing definitions and standards where possible and
sensible
• Ensure that they are robust, so that they compare apples with apples
The output of Snowball Metrics
“Recipes” – agreed and tested metric methodologies – are the output of
Snowball Metrics.
From the Statement of Intent:
• Agreed and tested methodologies… are and will continue to be shared
free-of-charge
• None of the project partners will at any stage apply any charges for
the methodologies
• Any organization can use these methodologies for their own purposes,
public service or commercial
www.snowballmetrics.com/metrics
Statement of Intent available at
http://www.snowballmetrics.com/wp-content/uploads/Snowball-Metrics-Letter-of-Intent.pdf
Snowball Metrics delivered

Research (Research Inputs, Research Process, Research Outputs and Outcomes)
• Publications and citations
– Scholarly Output (enhancement)
– Citation Count
– Citations per Output
– h-index
– Field-Weighted Citation Impact (illustrated after this list)
– Outputs in Top Percentiles
– Publications in Top Journal Percentiles
• Applications Volume (enhancement)
• Awards Volume (enhancement)
• Success Rate
• Income Volume
• Market Share
• Collaboration
– Collaboration
– Collaboration Impact
– Collaboration Field-Weighted Citation Impact
– Collaboration Publication Share
– Academic-Corporate Collaboration
– Academic-Corporate Collaboration Impact
• Impact
– Altmetrics
– Public Engagement
– Academic Recognition

Enterprise Activities/Economic Development
• Academic-Industry Leverage
• Business Consultancy Activities
• Contract Research Volume
• Intellectual Property Volume
• Intellectual Property Income
• Sustainable Spin-Offs (enhancement)
• Spin-Off-Related Finances

Postgraduate Education
• Research Student Funding
• Research Student to Academic Staff Ratio
• Time to Approval of Doctoral Degree
• Destination of Research Student Leavers

Denominators
• Institution (enhancement)
• Discipline (enhancement)
• HESA cost centre – HERD mapping
• HESA funder types – FundRef mapping
• Funding type
• Post-graduate research student, and FTE proportion
• Gender
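Field-Weighted Citation Impact appears several times in the list above. As a rough illustration of how such a field-normalized indicator is commonly computed (citations received divided by the citations expected for outputs of the same field, year and publication type, averaged over the publication set), here is a minimal Python sketch; the figures and the expected-citation baseline are invented for illustration and are not the official Snowball recipe.

```python
from statistics import mean

# Each tuple: (citations actually received, citations expected for outputs of
# the same field, year and publication type). Both figures are hypothetical.
publications = [
    (12, 6.0),
    (3, 6.0),
    (45, 15.0),
    (0, 2.5),
]

# Ratio of actual to expected citations per output, averaged over the set;
# a value of 1.0 means citation impact in line with the world average.
fwci = mean(cites / expected for cites, expected in publications)
print(f"Field-Weighted Citation Impact: {fwci:.2f}")
```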
How do Snowball Metrics help universities align their strategies to
their strengths and weaknesses?
Metrics can be size-normalized
Metrics can be “sliced and diced”
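A short sketch can make both ideas concrete. The Python/pandas snippet below size-normalizes a hypothetical award-income metric by academic staff FTE and then “slices” it by institution and by discipline; every column name and number is invented purely for illustration.

```python
import pandas as pd

# Hypothetical data: award income and academic staff FTE for two institutions
# and two disciplines (not real figures).
df = pd.DataFrame({
    "institution":  ["A", "A", "B", "B"],
    "discipline":   ["Biology", "Physics", "Biology", "Physics"],
    "award_income": [4.0, 6.0, 3.0, 9.0],   # e.g. GBP millions
    "fte":          [40, 50, 20, 90],        # academic staff FTE
})

# Size-normalize: income per academic FTE, so large and small units compare fairly.
df["income_per_fte"] = df["award_income"] / df["fte"]

# Slice and dice: the same metric viewed by institution, or by discipline.
print(df.groupby("institution")["income_per_fte"].mean())
print(df.groupby("discipline")["income_per_fte"].mean())
```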
Research Student Funding
[Chart: Sources of funding – full-time, UK male, home students in Biology, 2014/15.
Vertical axis: % of active students. Funding sources: no award or financial backing;
provider waiver/award; Research Councils; UK charity; UK public sector; UK industry;
overseas industry; EU government; other overseas sources; other sources.
Series: University of Cambridge, University of Bristol, University of Leeds.]
Time to Award of Doctoral Degree
[Chart: ratio of time to award of doctoral degree to expected course length.
Institutions: University of Cambridge, University of Bristol, University of Leeds,
University of Oxford, Queen's University Belfast, Imperial College London,
University College London, University of St Andrews.]
Research Student to Academic Staff Ratio (Biosciences)
[Chart: research student to academic staff ratio in the biosciences.
Institutions: University of Cambridge, University of Bristol, University of Leeds,
University of Oxford, Queen's University Belfast, Imperial College London,
University College London, University of St Andrews.]
What benefits have Imperial College London seen?
• Understanding strengths and weaknesses
• Understanding competitors and identifying our peer group
• Recruitment of faculty
• Developing strategies to focus resources and collaborate
• Increasingly selective strategy (Global Themes)
• Improving research income & outputs
• Strategic approach
Some real examples
• Decrease in neuroscience income
• Recruiting a new professor
• Divestment of an institute
Benefits for universities
• Trusted comparison of metrics on a robust standard
(comparing apples to apples)
• Universities are in control
• Methods (recipes) are not proprietary
• Metrics are agnostic to systems or suppliers
– anyone can use them for their own purposes
• Ability to choose and control with whom one shares/benchmarks
(the crossroad/traffic light model)
• Ability to benchmark nationally and internationally
Globalizing Snowball Metrics
• US: Michigan, Northwestern University, University of Illinois
at Urbana-Champaign, Arizona State, MD Anderson, Kansas State
• Australia/New Zealand: Queensland, Western Australia, Auckland, Canberra
• Japan: 2013 to date, RU11 & MEXT
• APRU (Association of Pacific Rim Universities)
• HKU & NTU
• European Commission for H2020
• Fundação para a Ciência e a Tecnologia (FCT) in Portugal
• Sweden
• Denmark
John Green
[email protected]
Snowball Metrics http://www.snowballmetrics.com/
Towards global standards for benchmarking
Snowball Metrics denominators should enable global benchmarking as far as
possible. We do not know how this will look, but one possibility is
(illustrative only, testing underway):
• Common core: where benchmarking against global peers can be conducted.
The aim is to make this as large as possible.
• Shared features: where benchmarking between Countries 1 and 2 can be
supported by shared or analogous data, e.g. regional benchmarking.
• National peculiarities: support benchmarking within Country 1, but not
globally, i.e. national benchmarking.
[Diagram: overlapping sets of UK metrics, Japan metrics, and Australia/New Zealand metrics.]
Snowball Metrics exchange
• Any institution that has Snowball Metrics can use the exchange
• Institutional members are responsible for generating Snowball Metrics
according to the Snowball recipes
• An institution can be a member of one or more ‘benchmarking clubs’
(the Russell Group has agreed to form a Snowball benchmarking “club”)
• Institutions choose what to share
• The exchange service encrypts all metrics and only entitled institutions can
decrypt them
• The data underlying the metrics will never be exchanged
• A CRIS system could act as provider and client, communicating directly with
the exchange APIs (see the sketch below)
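To make the encryption point concrete, here is a minimal Python sketch of the general idea only: a metric is encrypted before it leaves the institution, and only holders of the club key can read it. The endpoint URL, payload shape, and choice of a symmetric Fernet key are assumptions for illustration; they are not the actual exchange protocol.

```python
import json
import requests
from cryptography.fernet import Fernet

club_key = Fernet.generate_key()   # in practice, shared only with entitled club members
fernet = Fernet(club_key)

# A Snowball-style metric value; the payload shape is a hypothetical illustration.
metric = {"recipe": "Scholarly Output", "year": 2015, "value": 1376}
ciphertext = fernet.encrypt(json.dumps(metric).encode("utf-8"))

# Hypothetical exchange endpoint: only the encrypted metric travels,
# never the underlying data.
requests.post("https://exchange.example.org/metrics", data=ciphertext)

# An entitled recipient holding the same club key recovers the metric.
recovered = json.loads(fernet.decrypt(ciphertext))
print(recovered)
```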
Snowball metrics exchange
[Diagram: data sources (InCites, SciVal, Symplectic, Pure, Converis, databases,
spreadsheets) act as both inputs to and outputs from the Snowball Metrics
eXchange. A metric such as “Scholarly output = 1,376” is encrypted in transit
and decrypted only by entitled recipients.]
Metrics are only valuable when used responsibly
• Always use more than one research metric
– A strong research ecosystem produces and recognises diverse types of
impact
– Reduces the chance of gaming
– Reduces the chance of driving undesirable changes in behaviour
• Select metrics that represent behaviour you want to
encourage
• Data source coverage, disciplinary focus, and timeline
should affect the metrics selected
• Select a set of metrics whose strengths complement each
other's weaknesses
Are research metrics alone enough to
fully describe performance?
No
Is expert opinion alone enough to fully
describe performance?
No
Can performance be fully described?
Yes
Two Golden Rules for using
research metrics
When used with common sense, research metrics together with
qualitative input give a complete, balanced, multi-dimensional view of
performance
Always use both qualitative and
quantitative input into your
decisions
Always use more than one
research metric as the
quantitative input
The importance of research metrics
• Research metrics help us to make better decisions
– More informed
– Reduce chance of error
• Metrics are responsive to changes in performance
• Metrics enable benchmarking even against peers we don't know
• Metrics can save time and money, if used wisely to indicate where peer
review should be used for validation
• Metrics are an objective complement to expert opinion
– Both approaches have strengths and weaknesses
– Valuable information is available when these approaches differ in message
• Always use both approaches in combination
[Diagram: qualitative input (peer review, expert opinion, narratives) combined
with quantitative input (rankings, benchmarking, narrative support).]
The basket of metrics is diverse…
Metric themes and sub-themes (“S” denotes Snowball Metrics, www.snowballmetrics.com):
• A. Funding: Awards
• B. Outputs: Productivity of research outputs; Visibility of communication channels
• C. Research Impact: Research influence; Knowledge transfer
• D. Engagement: Academic network; Non-academic network; Expertise transfer
• E. Societal Impact: Societal Impact
• F. Qualitative input
The number of metrics shown is indicative only.
… and the diverse metrics are available for all entities
Metric themes A–F (Funding, Outputs, Research Impact, Engagement, Societal
Impact, Qualitative input) can be calculated for any entity:
• Researcher or group
• Institution or group
• Country or group
• Subject Area
• Outputs, e.g. article, research data, blog, monograph
• Custom set of outputs, e.g. funders' output, articles I've reviewed
• Serial, e.g. journal, proceedings
• Portfolio, e.g. publisher's title list
Users in different countries select different metrics
Rank order of metric usage by country:

Metric                                    Russia  World  Australia  Canada  China  Germany  Japan  UK  US
Field-Weighted Citation Impact              ?       1        1         1      3       2        4     3   1
Outputs in Top Percentiles                  ?       2        2         3      1       4        1     1   6
Publications in Top Journal Percentiles     ?       3        4         2      2       6        2     2   5
Collaboration                               ?       4        6         6      5       1        3     5   7
Citations per Publication                   ?       5        3         7      6       3        5     4   3
Citation Count                              ?       6        5         5      4       8        6     6   2
h-indices                                   ?       7        7         4      8       7        7     7   4

Usage of metrics available in SciVal's Benchmarking module from 11 March 2014 to 28 June 2015.
A partial list of the metrics available at that time is shown, focusing on the most frequently
used. Scholarly Output is excluded since this is the default.
How should the basket of metrics be used?
1. Define the question clearly, so that you can
2. Select appropriate metrics for the particular situation, and
3. Calculate those metrics for the entities you are investigating, and
4. Calculate the same metrics for suitable peers, so that you can benchmark performance
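A minimal worked sketch of these four steps in Python, with invented peer names and metric values (no real data source is queried):

```python
# Step 1: the question – "How do we compare with chosen peers on citation
# impact and international collaboration?" (all names and figures invented).
peers = {
    "Our university": {"FWCI": 1.6, "International collaboration (%)": 48},
    "Peer A":         {"FWCI": 1.9, "International collaboration (%)": 55},
    "Peer B":         {"FWCI": 1.4, "International collaboration (%)": 39},
}

# Steps 2-4: for each selected metric, rank our institution against the peers.
for metric in ["FWCI", "International collaboration (%)"]:
    ranked = sorted(peers.items(), key=lambda kv: kv[1][metric], reverse=True)
    print(metric)
    for position, (name, values) in enumerate(ranked, start=1):
        print(f"  {position}. {name}: {values[metric]}")
```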
So do metrics help?
• It’s not all metrics, or no metrics – it’s not a black and white decision
• Metrics can provide data points on which to build using expert
opinion (peer review) to delve deeper & deal with outliers
• Metrics aren’t a replacement for human judgment – they complement it
• Metrics aren’t the antithesis of peer review
• (Biblio)-metrics incorporate decisions made by peer review,
e.g. whether to publish, what to cite
• But metrics aren’t just bibliometrics – there are many measures
that can and should be used
• We value objective normalized universal information that
enables meaningful comparisons
• After all, academia is an evidence-based activity!
• First define the question; then pick the metrics to answer it
Conclusion
• Research metrics help us to make better decisions, if used
responsibly
– Golden Rule 1: Always use both qualitative and quantitative
input into your decisions
– Golden Rule 2: Always use more than one research metric as
the quantitative input
• Successful benchmarking relies on a clearly defined question and a
selection of appropriate metrics, benchmarked against suitable peers
• We suggest a common core of metrics, with complementary
sets suitable for particular disciplines and institution types
• Use new metrics alongside traditional metrics, not instead
of them
If universities in other countries
want to explore metrics …
• Work bottom-up
• Have a manageable group of universities working together
– need those who are willing to work at it (it took us 5 years)
• Involve an organisation which can test the metrics
• UK universities are not very good at managing projects ….
• Need high level Steering Group (to ensure buy-in from the top)
• Need data experts from within universities
• AND… a project group which “does the work”
• Need to get buy-in from all parts of the sector: funders,
government, researchers …
• Leverage the work others have done (Snowball!)
• Trust each other
John Green
[email protected]
Snowball Metrics http://www.snowballmetrics.com/