Monitoring Steps - Quality Planning

2013
Monitoring tools, indicators and data management
There are many methods and tools that can be used for monitoring. In choosing and
using these, it is important to ensure the information produced is of good quality, useful
for the monitoring purpose and consistent over time. These requirements should govern
the way you collect and manage your data.
Key messages

•  There are many different monitoring methods that can be used.
•  Don't reinvent the wheel, and think about what other people can do for you.
•  Indicator selection should reflect the monitoring purpose and how the information will be used.
•  Don't get bogged down in detail or feel you need to measure everything.
•  Data management needs to ensure data is consistent, of high quality, easy to collect and record, and suitable for analysis.

The requirement to monitor is prescribed by section 35 of the RMA. In addition, national regulations may be set under section 360 that specify particular indicators, standards, methods or requirements applying to monitoring, which may differ depending on what is being monitored.
Monitoring tools

•  A wide range of methods and tools can be used for monitoring. While the focus is often placed on quantitative measurement, there are also other ways to monitor whether you are achieving your resource management objectives. Tools could include:
   o  perception or satisfaction surveys
   o  sequential visual analysis by comparing photos over time
   o  participative monitoring and evaluation
   o  model building and scenario testing - monitoring in closed situations where the system of environmental actors, causal chains and results can be readily understood (used for forecasting).
•  Start small. Pilot schemes can be a good way to build capacity on a small budget. Try techniques and learn from the results in limited situations. Proving capability in this way may help attract more resources and cooperation. The key is never to do more than you can learn from, and never to promise more than you can achieve.
•  Don't reinvent the wheel; explore opportunities to use monitoring carried out by other organisations, or to work with other councils to develop methods for monitoring similar issues.
•  Think creatively about getting local experts or community groups involved.
•  In deciding whether a particular method is appropriate, you should consider the following questions:
   o  Does it provide a means of measuring what is happening?
   o  Is it repeatable (ie, will the results hold across samples)?
   o  Will it provide information related to management (ie, will a change in the way the resource is managed have any effect on the results of monitoring)?
   o  Will it provide information in the timeframes that you need it?
   o  Is it efficient, or will fewer or less complex measures do?
•  Develop an understanding of the systemic context of particular issues and indicators (what are the resources, the natural and human-induced changes in the environment, and the role of the indicators in understanding these changes), so that monitoring results can be accurately analysed and interpreted.
•  Consider using 'one-off' research to supplement ongoing monitoring where you want to fill in specific information gaps or explore an aspect of an issue at a particular time. This can be more cost-effective (and more achievable) than trying to cover everything in your ongoing monitoring programme.
•  Be wary of systems that include collective ranking across various types of impacts, with a cumulative score of significance. These may not be repeatable from one situation to another because the underlying scales of measurement are unique to each type of impact and are not compatible with other scales. Such scores should be acknowledged to be qualitative, and will need to be described so that people other than the person who decided the score can identify the assumptions and make their own findings of significance.
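A minimal sketch can show why adding raw scores across impact types misleads. All numbers, scales and variable names below are invented for illustration; they are not taken from any actual council scoring system:

```python
# Hypothetical illustration: two impact types rated on different,
# incompatible scales. Summing the raw scores lets the wider scale
# dominate, regardless of how severe each impact actually is.
noise_score = 4        # near the TOP of an invented 1-5 noise scale
ecology_score = 10     # near the BOTTOM of an invented 1-100 ecology scale

raw_total = noise_score + ecology_score  # dominated by the 1-100 scale

# Rescaling each score to a 0-1 range makes the mismatch visible,
# but how to weight the impact types against each other remains a
# qualitative judgement that must be documented, not a calculation.
noise_norm = (noise_score - 1) / (5 - 1)        # high relative severity
ecology_norm = (ecology_score - 1) / (100 - 1)  # low relative severity
```

The point is not that normalisation fixes the problem, but that any combined score embeds judgements that need to be described so others can reach their own findings.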



Perceptions and making decisions based on uncertainty




•  Councils have to deal with a number of uncertainties and manage issues for which there may be no right or wrong answers - but a number of qualitative or perceptive judgements may exist (eg, for water quality, amenity issues, etc).
•  There are times when asking people about their perceptions and views (via qualitative social science research) may assist more quantitative and technical monitoring (eg, is the water perceived to be clean? Is rural amenity satisfactory?).
•  It is important to have high-quality social science research (including survey design) if this sort of perception research is to be used to substantiate monitoring and reporting.
•  There have been many perception surveys carried out in New Zealand. Most of these are satisfaction surveys carried out at a local scale and assessing localised environmental issues.
Participative monitoring and evaluation

•  Participative monitoring and evaluation recognises that public authorities depend on people in the community to implement their policies and strategies and reproduce them in a myriad of applications. The community is not a passive recipient, but a group of actors.



•  Monitoring indicators and reporting formats tend to be about the officers who design them and what they want to know, and less about people in the community and what they want to learn.
•  Participative monitoring and evaluation is explicit about who the project is targeting, so that their various experiences can be distinguished, and about who is producing the outputs, so that their different needs can be assisted.
•  This approach develops indicators and questions that make sense to stakeholders, so they can identify significant change in their environment and learn how to improve environmental behaviours or practice. Facilitated reflection is always an element, to make the most of the chance to learn.
Selecting and using indicators

•  Indicators are an 'indication' - they provide a snapshot of meaning that people can easily absorb. The trends revealed by the indicators will need to be explained by further research.
[Figure: Data, information and indicators]

•  Before selecting indicators, consider:
   o  What is the purpose of your monitoring (eg, to assess the state of the environment, to assess plan effectiveness, to identify compliance issues)?
   o  What sort of reporting will you be doing (and who will the audience be)?
   o  How will the monitoring feed into review of policy and/or management?

Indicator development should be based on the answers to these questions, rather than on current data collection activity - ie, "why are we doing this?" not "what data do we have?" (Developing a monitoring strategy is a useful way of setting this framework.)




•  Some useful questions to assist in selecting indicators include:
   o  What are the systemic features of the issue that you seek to understand?
   o  What is the outcome you want to monitor?
   o  What are the pressures/causal factors that affect this outcome?
   o  What sources of information are available to tell you about these?
   o  What information will tell you what effects (if any) your policies and management are having?
•  Be prepared to reassess your data collection activities in light of the results of your indicator selection process. A stocktake of the data currently collected in your organisation will help you identify what relevant information you already collect and what data collection activities you might need to modify.
•  It is important not to commit yourself to more indicators than you have the resources to collect data for consistently over time. Instead, identify your key priorities and start with these.
•  Consider the links between the different types of monitoring your organisation undertakes (eg, state of the environment, policy/plan effectiveness, consent/compliance/complaints and community outcomes monitoring). Are there indicators that will be useful for more than one of these purposes?


•  A wide range of environmental indicators has been developed by various organisations - look at the possibility of adopting some of these rather than reinventing the wheel.
•  Review indicators periodically to determine whether they are providing the information you need to understand pressures, states and responses.
Interpreting indicator results



•  Consider making data subject to panel review. People with different roles in relation to what is being measured can have important experience that adds meaning to the data. For example, resource consent monitoring frequently produces lists of breaches, but less often information about the perceived impact of those breaches, the sensitivity of the receiving environment, or the difficulty of implementing the consent on site due to technical or unforeseen physical issues. The developer, neighbours, consenting planner, monitoring officer or scientist might all make relevant observations that would benefit future practice and put the data in context (eg, in terms of repeatability).
•  Data collected will not provide useful information for monitoring without a baseline for comparison. Monitoring should always be in reference to the situation before an intervention, and to what things would have been like without it. This (predictive) baseline acknowledges that change of some kind often occurs anyway - whether through natural processes or through patterns of human resource use driven by economic or social factors. Think about what baseline you need to establish so that you will be able to answer the question, "what real difference has the policy and its methods of implementation achieved?"
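The distinction between "change since before the intervention" and "change relative to what would have happened anyway" can be sketched numerically. The figures below are invented purely to illustrate the arithmetic:

```python
# Hypothetical sketch: comparing a monitored value against a predicted
# baseline (the counterfactual) rather than only the pre-intervention level.
pre_intervention = 100.0          # eg, an invented contaminant load before a plan change
observed_now = 90.0               # measured value after the plan change
predicted_without_policy = 110.0  # predicted trend had nothing been done

# Naive comparison: change from the pre-intervention level only.
naive_change = observed_now - pre_intervention            # improvement of 10

# Baseline-aware comparison: the difference the policy actually made,
# allowing for the change that would have occurred anyway.
policy_effect = observed_now - predicted_without_policy   # improvement of 20
```

If conditions were worsening anyway, the naive comparison understates (or, in other scenarios, overstates) what the policy achieved; the predictive baseline is what lets you answer "what real difference did it make?"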
•  Don't assume that an intervention has actually occurred, or occurred in the way expected when the indicators were first written. You may not be able to accurately attribute the data you collect to the logic of your district or regional plan or other programme of interventions. The logic may break down at a number of points: for example, were consent conditions enforced, were the conditions a practical and accurate application of plan criteria, and did the criteria reflect the policies? Further, the logic may have made a poor job of understanding causality in the environment in the first place, or the causal chain may have been too complex or subject to too many external factors.
Data collection


•  When you collect data, you should be confident that people will want to, and will be able to, use and understand it. Much effort has been expended collecting data that no one looks at. Think about who will use it: who stands to gain most from it? Whose behaviours might change as a result of the lessons learnt from it? This will lead you to consider what level of detail you need to record.
•  Regulations (sections 360(hk), (hl) and (hm)) may prescribe the need for some types of information, the methods used to collect information, the way in which it is presented, and the timing of its presentation. These requirements need to be factored into an overall methodology for data collection.



•  Data collection can be made more efficient by enlisting the help of people who are working in the field (eg, consent planners, building officers, technicians). If you make it easy for people to collect and record data as they are doing their jobs, it helps with consistency and reduces the chance of gaps in the information. It is a good idea to get input on the design of data collection forms from the people who will collect the data.
•  Find out what information is collected by other agencies that you can use. There may also be opportunities to share the costs of your data collection with other agencies that also want the information.
•  Think about how often you need to collect the data to provide information about trends for reporting and review. If change in the values of a particular indicator will only be apparent over several years, you might not need to measure it annually.
Managing data and information






•  Just collecting data is not enough - having confidence in its quality, consistency and accessibility over time means developing clear procedures for its measurement, recording, storage and security, as well as descriptions of how these things change. To ensure it will be accessible to future users, it must also be stored and described in suitable ways.
•  Metadata is information about the data - such as what, how, where, when, how often and by whom data is collected, as well as how it is recorded, stored and analysed. This information allows you to determine whether or not data sets collected at different times are truly compatible and so able to be combined to build accurate time series. It preserves a clear understanding of the data over the years despite staff turnover. It can set out criteria for significance when interpreting the data, as well as make clear any limitations in the measuring techniques applied or samples taken.
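A metadata record can be very simple. The sketch below is hypothetical: the field names, site code and values are invented for illustration, and any real schema would follow your own council's standards and any applicable regulations:

```python
# Hypothetical minimal metadata record for one monitoring data set:
# who collected what, where, when, how often, by what method, and
# with what known limitations.
metadata = {
    "dataset": "river_water_clarity",
    "what": "black disc visibility (m)",
    "where": "site WQ-01 (invented site code)",
    "when": "2005 to present",
    "frequency": "monthly",
    "collected_by": "council environmental monitoring team",
    "method": "black disc measurement, field protocol v2",
    "storage": "council environmental database",
    "limitations": "not measured during flood events",
}

def compatible(a, b):
    """Treat two data sets as combinable into one time series only if
    they were collected at the same place with the same method."""
    return a["where"] == b["where"] and a["method"] == b["method"]
```

Recording the method and location explicitly is what makes the compatibility check possible: if a later survey used a different protocol, the records reveal that the series should not be naively joined.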
•  Metadata acts like a library catalogue. It describes topics, how they can be accessed, and where to direct enquiries. In this way it assists the public, other agencies, and your own colleagues to locate all available data in a field of interest. This helps to prevent duplication of effort and to share knowledge. Shared knowledge can translate to shared commitment. An effective approach is to have your metadata on the internet for ease of access.
•  Key matters to consider in designing a system to manage data are:
   o  What is required by regulation?
   o  How would a person find relevant data?
   o  What makes the data fit for purpose?
   o  How will the data be shared, and with whom?
   o  How will it be secured for future use?
•  Consider whether there is any opportunity to link or combine your council databases to include all monitoring information relevant to state of the environment, policy/plan effectiveness, and compliance and complaints monitoring. Integrated data storage can occur at different levels - metadata, indicators, underlying data.
•  Consider having a data review process, as the quality of data is critical.

•  National regulations (under section 360 of the RMA) may specify what, when and how information is to be provided to the Minister.