Nordisk Administrativt Tidsskrift nr. 2/3/2012, årgang 89

Focus on Outcome
Public Governance supported by Outcome Information? – A contextual case story
By Chief Advisor, Niels Refslund
Note
This article is based on experience and data obtained by the Danish Agency for Governmental Management.1 However, specific viewpoints and opinions expressed are the sole responsibility of the author.
Abstract
From 2007 to 2011 the Danish Agency for Governmental Management carried out a full-scale development process concerning strategic management support based on outcome information – data on the external outcome of the Agency's effort. This article is a narrative presentation – a case story – of a full-scale pilot project.
The project's preconditions were deliberate board decisions and the dedication of internal resources. It required a thorough process of designing an »Outcome Model« of the Agency as a prerequisite for measuring its external impact: the formulation of an overall structure of outcome goals covering the entire Agency and the design of a number of indicators to serve as measuring points for each of the overall goals.
Further, the pilot project has triggered an intensive effort to measure outcome, report the results to all levels of the organisation and adapt methods. A priority has been to ensure local anchoring and ownership of the results of measurement, the intention being to add value to the internal management process and thus enhance the external value of the Agency's services to its customers and clients (other state agencies, enterprises and citizens).
The final purpose of the project has been to verify and improve the added value of the Agency's effort. Ultimately, the Agency's raison d'être is to serve other state institutions and enterprises in order to improve their ability to govern, to deliver and to attain their legitimate goals.
Along with the »outcome project« the Agency has aimed at testing the operational possibilities for developing and implementing a »next generation« contractual management scheme supported by measurement of vital external outcomes of the Agency as a whole. This ambition is being pursued on the basis of experience gained in the above-mentioned project, but has also been supported by the framework of a governance policy issued by the Ministry of Finance in early 2010.
As an executive arm of the Ministry of Finance the Agency is obliged to inspire other state institutions to adopt effect- or outcome-oriented measures in their governance procedures. The initial effort in this respect is also addressed below, including the main results of a cross-ministerial mapping of the variety of contractual management schemes in the Danish State Administration.
Thus this is a case story about a full-scale pilot project in a structured context.
0. Introduction
The purpose of this article is to share experience with readers – practitioners as well
as theorists – who are interested in performance management with a specific focus on
how to measure societal outcome and how to use such measurement as strategic
management support in a State Agency.
The ambition is – and has been – to produce valid and relevant evidence on the societal outcome of a deliberate public effort, in this case accomplished by the Danish Agency for Governmental Management in the course of 2007-2011. This case makes clear that meeting this expectation is not impossible, although it is very demanding.
The intention is to give the reader information about the necessary preconditions and requirements, the development of the model and methodology, the establishment of internal organisational commitment, the practical carrying through of rather comprehensive measurements, the interpretation of the results and – not least – the presentation and dissemination of the results of measuring, including the strategic usefulness of outcome information.
Further, the intention is to let the reader know about the wider framework of state governance policy in which our case is only one element of action. However, this case may contribute to the generation of some degree of common experience merely by being a front runner and thus a potential inspiration for other agencies. Although this is basically a narrative on a single case, the entire story is only well understood in its wider context of Danish administrative policy.
Summing up some of the crucial learning points:
– Dedication and priority from top management are required for the entire project over several years. Anchoring and supportive interest are essential for both development and strategic use of outcome information
– Modelling and measuring should be based on a variety of methodological approaches in order to cope with the practical challenges of measuring and to ensure iteration of the measurements – including robust time series. Furthermore, both quantitative and qualitative information is meaningful
– Indicators must be developed and validated in an ongoing process aimed at simplification – and at maximising the re-use of data that have been generated for other purposes. This also minimises the risk of bias.
– Figures on outcome never speak for themselves! Thorough interpretation and contextual understanding are essential for making outcome information relevant.
Measurements can be supportive for management, but they can never replace the need for decisive leadership and committed everyday effort by public servants. Measurements can indicate trends. No more.
1. The basic question?
The basic question is not whether public governance is to take place in a modern society, but only in which way governance is to be conducted. Government and administrative policies ideally frame and set the course of development for public governance. One might perceive administrative policy as a meta-concept for the actual public governance that forms and directs society and everyday life for citizens and enterprises. How to handle the wheel effectively and – at the same time – pass on a sustainable resource base (the »money box«)?
From time to time a politician might be quoted saying something like the following: »We have had enough discussions, now we want to make results!« For most people – and administrators as well – such a claim seems very sympathetic. However, this type of manifestation suffers from at least two pitfalls:
– apparently it supports any effort, but without any leading direction
– it stresses results while the intention probably is to demonstrate societal impact
(outcome) of political decisions
During the past couple of decades these challenges have mainly been addressed in
terms of various concepts of »contractual management« by which the »principal«
actor of authority through a mutual dialogue with the executing »agent« arrives at an
agreement on what should be done to achieve specified goals.
No doubt the introduction and adoption of contractual schemes in public governance have made valuable contributions to clarifying purpose and goals, thus giving direction to the efforts of public institutions. And no doubt agreement-based management has in general fostered more conscious and coherent performance in public administration and service production. In that sense, contractual management has strengthened the strategic orientation of most public institutions, becoming a most useful supplement to the traditional and detailed day-to-day regime of hierarchy.
However, contract management has in many cases developed into ever more encompassing and detailed agreements on »delivery of results«. In order to comply with the fair but expanding expectations of the »principal«, the response of the »agent« may have turned into commitments to deliver a growing number of specific outputs. The pressure for countable performance has grown steadily along with the need to justify the growing public expenditures of the welfare state. However, immediate output (results) is no guarantee of the intended outcome.
So far so good, but in the search for calculable results, agreement-based management often seems to have deteriorated into a new regime of the »old virtues«: too many goals and details; lack of coherence and consistency; insufficient strategic outlook; an inappropriate trade-off between controlling and measuring activities versus the outputs and outcomes that are the entire purpose of and raison d'être for public institutions and their effort.
Thus in the more unfortunate scenarios, agreements and contractual frameworks
have turned into mere rituals of governance procedures and systems of selected »activity accounts« far from a suitable support to any genuine leadership – be it on an
administrative or a political level.
2. The principal answer?
The principal answer to the above mentioned weak points in a major scheme of governance might be radical in the sense of simply scrapping the tool in question. However, such an approach would probably also tend to scrap all the positive features still
there and a variety of experiences that should be utilized to enhance future governance.
A more realistic and appropriate response to former failures should be a redirection to the initial purpose. This includes the choice of a possible strategy for adjustments and reimplementation of contractual or performance based management and
governance. With this aspiration in mind we might stress that the initial purpose was:
– to strengthen the strategic basis for daily management
– to sharpen the discourse about intended external effectiveness, the outcome of
public efforts
– to make the actual »value for money« equation more transparent
– to improve the final societal outcome of any public effort in question
Instead of resigning and turning to »other solutions«, a responsible reaction could be to adopt a more ambitious, but also much more selective, concept of governance: setting overall frames for goal-oriented management and insisting on prioritising measures of »real outcome« rather than mere registration of activity. The latter is of course rather simple, but at the same time less meaningful.
However, development and implementation of outcome-based management calls for a more pragmatic understanding of outcome that encompasses not only outcome at a societal level but also outcome realised at the users' level.
The diagram below illustrates the overall perspective as a production line.2
The overall production line
Changes can be experienced as a consequence of external pressure emanating from »changing conditions« – that is, change by enforcement. Alternatively, change can be perceived as the result of a proactive attitude to the necessity of dynamic adjustments – or even innovation – in order to cope with a competitive environment.
3. A full scale pilot project
As of 2007 the Danish Agency for Governmental Management3 initiated an internal project in order to test the operational possibilities for developing and implementing a »next generation« contractual management scheme supported by the measuring of vital external outcomes of the Agency as a whole. A brief description and analysis of the experience gained is given below.
3.1. Agency for Governmental Management
The Danish Agency for Governmental Management is an executive arm of the corporate Danish Ministry of Finance.4
The portfolio of the Agency represents a most varied set of tasks. Its responsibilities range from classical regulatory tasks (such as setting general rules for state accounts) to administration of various state grant schemes (such as grants and loans for students), to conceptualisation of financial management of state agencies (such as implementing accrual principles in budgeting and accounting), and to innovation and implementation of cross-sector digital administrative systems (such as electronic purchasing and invoicing systems or citizens' single sign-on for all public service IT systems).
However, regardless of the changing tasks of the Agency for Governmental Management (AGM), the way of securing outcomes for citizens and enterprises is predominantly indirect, through other state agencies and public enterprises. In most of its affairs the Agency operates through and with a population of approximately 180 state entities (ministerial departments, agencies and institutions). These are its first-hand »customers«. Rules and regulations, cross-ministerial IT systems and a vast number of services – obligatory or facultative – are directed at this group of first-hand customers. Only their efforts have a direct impact on citizens and private enterprises – in other words, the societal outcome.
Thus any idea of measuring the external, societal effect of the efforts of the entire
Agency seems complicated and hard to turn into something specific and useful for
long term management, not to speak of support of day-to-day management.
Nevertheless, a fundamental change of approach from an »inside-out« to a more reflective »outside-in« perspective seems to have taken root in recent years.
3.2. A fundamental board decision
The CEO and board of directors of the Agency took an encouraging decision in early 2007. It was decided to aim at designing a model of the Agency which would allow for systematic measurement of the external outcome of the Agency's efforts and to carry through a first baseline measurement of the intended outcome. The ambition was also, through proper interpretation, to turn the results of the outcome measurements into strategic management support.
Turning our viewpoint from the ambitions (ex ante) to the possible risks (ex post), we may underline that none of the conventional management schemes in force were taken out of operation. Everyday top management went on as usual, but alongside it a full-scale pilot project was initiated. Thus the main elements of risk consisted of a possible loss of prestige if the attempt failed, as well as a possible loss of the resources invested.
3.3. Development of an »Outcome Model« of the Agency
Measurement of the external outcome of a state agency cannot be accomplished
without thorough consideration of what are – and maybe should be – the core tasks
and external impact of the Agency.
The institutional raison d'être must be faced and put into words – and not merely cloaked in common language. The wording must ultimately be precise, paving the way for common understanding as well as specific measuring. These requirements are fairly difficult to meet in a complex state agency responsible for a great variety of duties and tasks.
The internal process in the Agency to reach a new scheme of outcome-oriented management was based on a few main premises:
– Full involvement of the CEO and Board of Directors as well as the entire group of managers heading the operational divisions of the Agency
– Allocation and dedication of a small nucleus of internal, experienced staff members to drive and anchor an encompassing and prolonged development process
– Up-front investment in an external consultant with specifically related experience and a personal professional ambition and »critical devotion« to developing sustainable experiments in the realm of public governance5
Ideas and elaborated proposals were produced by the internal »nucleus group« in close cooperation with the consultant.
Based on sketching and pre-testing of each element of a possible »Outcome Model« of the Agency, the entire outline model was presented to the Board and the whole group of managers and discussed at a one-day workshop in the summer of 2007.
The purpose of this process was to ensure management ownership and to qualify
the model through a thorough co-working effort involving each member of the management group.
A few months later another workshop, including the Board and the group of managers, followed up and finalised the first edition of the »Outcome Model for AGM« (Effekt Model for Økonomistyrelsen, Version 1-2007).
The figure below shows the »Outcome Model«.
It might be added that repeated efforts to consolidate knowledge of and ownership of the outcome model within the entire group of managers have been necessary – not only for the sake of revitalising the »institutional« consciousness, but also to cope with the inevitable consequences of continuous turnover of staff at all levels. Newcomers have had to be addressed.
This formative process included establishment of consensus on 4 overall outcome
goals for the Agency and a common validation of approximately 30 indicators by
which the overall outcome goals were to be measured.
The above figure illustrates the »Outcome Model for AGM«. The outcome goals are described in full. The outlined indicators, however, have undergone some adjustment over time, as described in the sections below.
It is both worth and necessary to underline that each of the outcome goals refers to the impact that the Agency's efforts are intended to have on its customers and clients. The great majority of the Agency's efforts and deliveries must always be justified by their ability to support and improve other state institutions in doing their job in proper and effective ways. This is the real challenge of the Agency, and this is the real challenge of measuring the effect of the Agency.
3.4. Indicators as substitutes for direct measurement
Ideally, each of the outcome goals should be measured directly in one valid and
adequate observation or measurement, which should point out the level of effect just
like that. However, this is not the option in practical terms. Instead we have to go
quite a way around by measuring a number of points that indicates impact of the
Agency’s efforts. We must make causality probable between the indicator and the
related outcome goal. And we assume that a limited number of indicators altogether
make up a valid and adequate picture of evidence for the actual impact in terms of a
level of goal accomplishment. Indirect measurements are inevitable. So are the corresponding interpretation and presentation of our measurements.
Figures never speak for themselves!
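To make the mechanics concrete, here is a minimal sketch (in Python) of one way such indicator measurements could be rolled up into a goal-level score by weighted averaging. The indicator names, weights, normalised scores and data-source labels are invented for the example; the Agency's actual model, with its four goals and roughly 30 validated indicators, is not reproduced here.

```python
# A minimal sketch of rolling indicator measurements up into an outcome-goal
# score by weighted averaging. Indicator names, weights, scores and sources
# are invented for illustration and do not reproduce the Agency's model.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    weight: float   # relative weight within the outcome goal
    score: float    # measured value, normalised to a 0-100 scale
    source: str     # "objective" (existing data) or "tailored" (e.g. focus groups)

def goal_score(indicators: list[Indicator]) -> float:
    """Weighted average of the normalised indicator scores for one outcome goal."""
    total_weight = sum(i.weight for i in indicators)
    return sum(i.weight * i.score for i in indicators) / total_weight

# Hypothetical indicators for an outcome goal such as »Correct administration«.
correct_administration = [
    Indicator("Audit remarks on state accounts", 0.5, 82.0, "objective"),
    Indicator("Error rate in grant administration", 0.3, 74.0, "objective"),
    Indicator("Customer-reported rule clarity", 0.2, 68.0, "tailored"),
]

print(f"Goal score: {goal_score(correct_administration):.1f}")  # Goal score: 76.8
```

In practice the weights and the normalisation of each indicator would themselves be subject to the validation and adjustment described in the following sections.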
A few examples can clarify some of the challenges connected to elaboration and
adjustment of indicators.
A solid indicator related to the outcome goal »Correct administration« is objectively measured by looking into the evaluation of state accounts carried out by the Danish National Audit Office (Rigsrevisionen, which is subordinate to the Danish Parliament, Folketinget). Normally the result of the account investigations by Rigsrevisionen is worded in a coded language, which expresses a gradation. In the absence of such coded language, however, some additional interpretation is needed to establish a measurement.
Another indicator relates to the outcome goal »Optimal decision making« by focusing on the gains of the customers who are using concepts and IT systems delivered by the Agency for Governmental Management. This measurement is fairly complex. No existing objective data covers this item. Consequently it has been addressed by tailored questions to customers. However, this approach inevitably faces all the traditional methodological difficulties of reaching sufficient validity and reliability, avoiding bias, etc.
3.5. The challenge of continuity in a dynamic setting
While the outcome goals and their relative weighting have been kept stable throughout the four years that have passed, the structure of indicators and their relative weights have been handled as a dynamic feature of the model. From time to time a better, more valid and relevant indicator (measuring point) has become possible – either as a consequence of deliberate striving for a better indicator or incidentally, as a fortunate re-use of data that were initially produced for other purposes but »re-invented« for outcome measuring.
From the very beginning of the outcome modelling process the intention has been – to the largest possible extent – to base the indicative measurements on existing, available data. We have termed such information »objective data« as opposed to »tailor-made data«, that is, information produced specifically to meet the requirements of one or more indicators.
The reason for giving priority to the exploitation of »objective data« is twofold. First, this approach is likely to minimise the costs of measuring, as the data are already there due to other requirements. Second, it is likely to maximise the validity of the data, as they were initially produced for other purposes. This reduces the risk of bias.
However, the ratio between objective and tailored measurement of indicators was approximately 40 % objective to 60 % tailored at the initial baseline measurement in 2007/08. This ratio was reversed at the second measurement in 2009, a relative improvement to 55 % of indicators measured by existing objective data. The trend continued up to the third measurement in the autumn of 2010 and in 2011, at which point approximately 60 % of indicators were covered by objective data.
Our work thus shows that an incremental approach is often necessary.
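The arithmetic behind these shares is simple; the sketch below illustrates it with hypothetical indicator counts per round (chosen to be roughly consistent with a set of about 30 indicators). Only the resulting shares of roughly 40 %, 55 % and 60 % correspond to the figures reported above.

```python
# Rough arithmetic behind the reported shift towards "objective data".
# The indicator counts per measurement round are hypothetical; only the
# resulting shares (~40 %, ~55 %, ~60 %) reflect the figures reported above.

rounds = {
    "2007/08": {"objective": 12, "tailored": 18},
    "2009":    {"objective": 17, "tailored": 14},
    "2010/11": {"objective": 18, "tailored": 12},
}

for label, counts in rounds.items():
    total = counts["objective"] + counts["tailored"]
    share = 100 * counts["objective"] / total
    print(f"{label}: {share:.0f} % of indicators measured from existing objective data")
```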
3.6. How to tie up our »outcome model« with administrative reality
The build-up of time series of outcome measurements is essential for the relevance of outcome indications – trends – for management and governance purposes. Thus the consistency and continuity of the basic outcome model is crucial. This triggers a potential dilemma: how to make sure that a stable reality-model of the Agency can reflect an organisation which is continuously changing in its portfolio and ways of working? The need for continuity and stability in goals and measuring methods stands opposite to the fact of changing conditions and the dynamics of reality.
A major change of the Danish Agency for Governmental Management took place during the years 2008-2010. Based on a comprehensive business case analysis the Government decided to establish a cross-governmental shared service centre for basic administrative functions regarding accounts, wages and personnel, and official travel. This Administrative Service Centre (ASC) serves the entire state administration and became an integrated part of the Agency from 2008 onwards as the implementation process proceeded. This meant a doubling of employees to about 700 and a marked extension of the portfolio of the Agency.6
Under these circumstances a validation of the outcome model became essential. Fortunately, the initial model has proved to be very robust. Reconsideration led to an easy confirmation of the outcome model as to its four overall outcome-oriented goals. These goals could continue unchanged and even cover the enlarged portfolio in a most appropriate way. Even the initial relative weighting of the overall goals withstood the validation for potential changes. This was not only a result of luck but also a consequence of the outcome goals themselves and the fact that these goals were directly derived from the overall vision and mission of the Agency.
Only at the next level of methodology – the design and use of indicators – did adjustments become inevitable. Some indicators have been removed, others elaborated, and a few new ones added. This operation has been considered acceptable from a methodological point of view – especially as long as any changes are described and documented thoroughly and transparently.
Another channel of tying up the effect modelling with the administrative reality of
the Agency has been a repeated and encompassing dialogue between the project
group and the board as well as with the greater group of line and centre managers.
A functional restructuring of the Agency took place almost in parallel with the introduction of general effect measurements based on the model above. The criterion for restructuring was a sort of outside-in perspective of the Agency, by which a concept of »products« became institutionalised.
All the deliveries of the Agency were assigned to and defined as one of six products or product lines. The figure below illustrates in general the six product fields of the Agency.
Product lines of the Danish Agency for Governmental Management
These products partly cut across the formal organisational structure and are partly reflected in the formal organisation by so-called »Product Governing Groups« – groups of managers who have the operational responsibility for activities belonging to each defined product.
Especially the recent considerations concerning the measuring and use of outcome information have underlined the necessity of integrating the operational group of managers in the invention and implementation of relevant indicators.
The aim of proactive internal meetings and discussions with product managers has been twofold:
1) To involve managers and other members of line staff in the »discovery« and design of smart indicators – that is, simple, easy and valid indicators.
2) To ensure internal anchoring and »ownership« of the outcome model and not least of the results of measuring the external outcome. Ideally each indicator is invented in a professional dialogue with the line entity concerned.
Finally, internal »road shows« with the participation of all the staff in the different »product fields« (and »organisational centres« in the Agency) have contributed to the dissemination of »thinking in terms of outcome« in daily administrative efforts and to a broad acceptance of the results of measuring.
The chart below shows major activities and milestones in the outcome measurement process 2010-2011. The process as such does not deviate in principle from the
previous one.
3.7. The first measurement – and the following ones
What have been the procedures, the major experiences and core learning points connected to the accomplishment of such institution-wide outcome measuring?
The first measurement in 2007/08 was based on a minority of »objective data« extracted from existing data sources combined with a majority of »tailored data« generated through a rather comprehensive questionnaire addressed to a randomly selected group of state institutions (customers/clients). This procedure caused a lot of work and a lot of trouble in getting a sufficient level of response – requiring a number of reminders – and from time to time the quality of some responses was dubious, putting reliability and validity at stake.
When organising the second measurement in 2009, the above experience concerning the generation of »tailored data« was taken fully into account. The written questionnaire was given up in favour of direct interviews in so-called »focus groups« of randomly selected representatives of the Agency's customers/clients. This approach proved to be very good indeed.
No doubt both the validity and the reliability of measuring were improved substantially, thus improving trust in the outcome project as a whole. At the same time the methodology of focus-group interviews gave the impetus to a most honest and constructive dialogue between the Agency and its customers, which yielded not only quantitative but also qualitative data. A common search for effective solutions was thus lifted onto the agenda instead of more or less »mechanical« responses to standardised, written questions. At the same time the internal need for project resources was diminished; in a way, a triple advantage.
At the latest two measurements of the Agency's outcome, in 2010/2011 and 2011/2012, the methodological concept was simply continued without essential changes: extended emphasis on indicators based on objective data (existing sources), combined with tailored use of interviews with focus groups of randomly selected segments of customers. In order to maximise the benefits of the measuring efforts, the process as a whole is structured to feed into three purposes in parallel:
1. The external outcome of the Agency as to the defined outcome goals
2. The degree of the Agency's customer satisfaction
3. The degree of goal accomplishment as to the ongoing performance contract for the Agency.
In a positive sense the outcome measurement process became more and more standardised and routinised. This allows for stabilisation of the quality of measuring and a streamlining of the work process, which in turn has made possible a reduction of the internal resources invested.
It is worth noticing that the purpose of measuring outcome is fundamentally different from the purpose of measuring customer satisfaction. In short, this can be illustrated by the fact that it is quite possible for a measured effect to be relatively high while a simultaneous measurement of customer satisfaction is recorded as low, and vice versa.
How can that be? An example could be the implementation of an obligatory regulatory tool that proves to raise cross-ministerial effectiveness, but which is perceived negatively by many state agencies/institutions merely because of its »compulsory« character or because of its inherent demand for standardisation of local working processes.
This example also reflects a latent conflict between different roles of the Agency's customers, or different understandings of the concept of being a customer. From time to time other state institutions are true customers of the Agency in the sense that they have the option of procuring services from other suppliers – an option for alternatives within a competitive setting. In many cases, however, the »customer« of the Agency is simply obliged to accept the services and regulation issued by the Agency in its capacity as an authority – a »monopolist« – issuing rules and regulations on behalf of the Minister of Finance.
The figures below show the overall results of the Agency's outcome measuring across the four measurements mentioned.
The first figure also illustrates how robust the model is in the sense that changes in indicators can to some degree be compensated for by recalculating backwards (»regressive calculation«), thus re-establishing consistent time series of the indicative measurements.
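The article does not spell out the recalculation procedure, but one standard way to restore a consistent series when an indicator is replaced mid-way is to splice the discontinued series onto its replacement at an overlapping measurement point. The sketch below illustrates that generic back-casting idea with invented figures; it is an assumption about what such a »regressive calculation« could look like, not a description of the Agency's actual method.

```python
# A generic way to re-establish a consistent time series when an indicator is
# replaced mid-series: ratio-splice the discontinued series onto the new one at
# an overlapping measurement year (simple back-casting). All figures invented.

def backcast(old_series: dict[int, float],
             new_series: dict[int, float]) -> dict[int, float]:
    """Rescale the old indicator so it links onto the new one at the first
    year in which both were measured, then return one continuous series."""
    overlap = min(set(old_series) & set(new_series))
    factor = new_series[overlap] / old_series[overlap]
    spliced = {year: round(value * factor, 1) for year, value in old_series.items()}
    spliced.update(new_series)            # new measurements take precedence
    return dict(sorted(spliced.items()))

old = {2007: 64.0, 2008: 66.0, 2009: 70.0}   # discontinued indicator
new = {2009: 77.0, 2010: 79.0, 2011: 81.0}   # replacement indicator

print(backcast(old, new))
# {2007: 70.4, 2008: 72.6, 2009: 77.0, 2010: 79.0, 2011: 81.0}
```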
Even if a certain outcome goal (goal no. 4: »Effective and innovative administration«) seems to be fading, this observation does not in itself point to unambiguous lines of action that could ensure a better performance on the goal in question. This is a clear challenge to the leadership value of both modelling and measuring outcome (external effect). Interpretation and evaluation of different strategic options are still most necessary, albeit on a higher level of insight.
4. Generalisation of experience?
The presentation of a case by nature points to a unique occurrence – for instance a specific situation or a certain chain of events forming some sort of unity, at least in analytical terms. The uniqueness of a public administration is interesting. However, we should always look for possibilities of generalisation in order to maximise benefits (effectiveness) and/or to reduce costs. In this respect the Agency is positively obliged to spread information in order to inspire and guide to the extent possible. Here a narrative approach becomes relevant.
4.1. The Agency as a partner of dialogue
By 2009 the »Outcome project« became an integrated part of the contractual management scheme of the Agency.
First, the general ambition of an increasing focus on outcome and on outcome indicators became visible. Practically all of the medium-term (three-year horizon) goals of the Agency's results contract were turned into an outcome perspective, or at least inspired by it. So were the criteria for evaluation.
Second, it was agreed that the Agency should promote knowledge and practical experience about outcome measuring and the use of outcome information in support of top management and their design of governance. An indispensable step and prerequisite for being able to fulfil this duty externally has been the internal experience gained by designing the Agency's outcome model and accomplishing the first generations of measurement.
Without this vast and complex experience concerning the modelling, measuring,
testing and validation of outcome as a tool for strategic support of leadership and
governance the role of the Agency as a professional partner of dialogue for other
state agencies/institutions would have been without sufficient credibility. Once again
the claim for »taking your own medicine« in the realm of administrative policies
seems well-founded.
5. The role of a dynamic policy setting?
The Danish Agency for Governmental Management can claim a substantial role as »front runner« concerning an operational focus on outcome as an essential tool for strategic management support. The ambition of the »Outcome Project« to encompass an Agency as a whole, despite its functional complexity, constitutes an interesting and relevant challenge.
However, these abilities at the institutional level would have been seriously hampered if the overall frames of governmental administrative policy had not been supportive of initiating an outcome-oriented approach.
The preparation of this policy was based on concerted action by the central department of the Ministry of Finance and the Agency in order to couple the two aspects of policy making and the operational concerns of implementation. In this process a number of line ministries and state agencies were consulted to consolidate the intended new administrative policy statement.
5.1. Accountable Governance
The most tangible sign of a dynamic policy setting is the encompassing administrative policy7 issued by the Ministry of Finance in February 2010: »Accountable Governance – A Guide for Governing from Corporate to Agency Level«. The title and headline point to the responsibility for management and governance.
State agencies/institutions must hold themselves accountable for conducting all their efforts, results and outcome. In doing so, it is a core virtue of each agency to consider itself part of the corporate structure of the entire ministry.
It is appropriate to mention explicitly that this policy calls for a coupling of appropriations to some form of accounting for the related outcomes.
The mirror obligation also applies: the department, as the upper hierarchical part of the ministry, must be aware of its responsibility for framing and following up on the corporate governance functions.
This administrative policy places a clear emphasis on administrative leadership, including governance supported by information on the outcomes of vested efforts. There is pressure to reinterpret the nature of the responsibilities of (top) management. It is stated that a number of basic requirements must be met by a responsible board of directors or management, such as budget control and proper accounting as well as a reliable yearly report.8
On the other hand it is stated that a comprehensive span of control is left to the design and follow-up of the specific board of directors or management in charge of a given agency. Thus, a paradigm of more flexible and tailored »governance as needed« was founded by »Accountable Governance«.
It is a common, horizontal requirement for every state agency to express its overall strategic goals in line with the state budget propositions. It is also stated that the process and results of fulfilling the goals must be reported. However, the ways and means of accomplishing these general duties of ongoing administrative leadership are precisely up to the responsible top management. This defines and calls for »Accountable Governance«.
6. Outcome supported Governance is developing
Lately the Agency has made an effort to acquire better knowledge about actual practice in the Danish state administration concerning the implementation of outcome goals and outcome measuring as core elements of the widespread use of contract-based results management in the Danish ministries. With the exception of two ministries (the Prime Minister's Office and the small Ministry of the Churches), all other ministries systematically employ a scheme of contract-based performance management, at least to some degree.
6.1. Cross-ministerial mapping – a status
As indicated just above, variations between the ministries are considerable in their understanding of fundamental concepts such as »goal«, »result«, »effect« and »outcome«, as well as of the capabilities of tools such as »indicator«, »criterion« and »measurement«. Nevertheless, relevant data on this issue was generated in the years 2002, 2004 and 2006. Thus an update within an expanded scope became most appropriate and was carried out in 2009 and 2011.
The main results of the 2009 and 2011 mapping of goal and performance management in the Danish central state administration are presented in the table below.
Attention should be paid to the distinction between »goal« and »result claim«. The latter typically stands for specific activities demanded, while the former – the goal – expresses a superior aim striven for. The number of result claims can thus meaningfully be related to the number of contracts or to the number of goals. Another meaningful ratio is the number of result claims per goal. All figures relate to a certain contractual span of time.
Table 1: Development of goals and result claims, 2002-2011

                                          2002    2004    2006    2009    2011
Number of performance contracts            140     119     111     121      91
Total number of goals                       ..     852     750     907     607
Total number of result claims             5219    3765    3381    3058    1871
Number of goals per contract                ..     7,2     6,8     7,5     6,7
Number of result claims per contract      37,3    31,6    30,5    25,3    20,6
Number of result claims per goal            ..     4,4     4,5     3,4     3,1

Note: The number of goals in performance contracts is not available for 2002.
As shown in the table above, the number of goals was roughly unchanged from 2004 through 2006 to 2009 and has apparently decreased lately. Calculated per contract, however, the number of goals is rather stable at approximately 7. The trend is clearly a decreasing number of result claims per contract – a decline of roughly one third in the period 2002 to 2009, and to close to half of the previous level in 2011. The latest mapping from 2011 shows that the number of result claims per contract still amounts to about 20. Consequently the number of result claims per goal in the contracts has diminished to approximately 3,1.
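The ratios in Table 1 are plain divisions of the mapped totals. As an illustration, the small sketch below reproduces the 2011 column from the raw counts in the table.

```python
# The ratios in Table 1 are plain divisions of the mapped totals; as an
# illustration, the 2011 column is reproduced here from the raw counts.

contracts_2011 = 91
goals_2011 = 607
result_claims_2011 = 1871

print(round(goals_2011 / contracts_2011, 1))          # 6.7 goals per contract
print(round(result_claims_2011 / contracts_2011, 1))  # 20.6 result claims per contract
print(round(result_claims_2011 / goals_2011, 1))      # 3.1 result claims per goal
```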
Table 2: Development of the number of institutions with outcome demands and the number of outcome demands, 2002-2011

                                                             2002   2006   2009   2011
No. of institutions with effect demands                        10     45     86     71
Outcome share of total no. of performance contracts (pct.)      7     41     71     78
Total no. of outcome claims                                    21    136    465    372
Outcome share of total no. of result claims (pct.)            0,4    4,0   15,2   19,9
The table above shows that the dissemination of outcome demands has risen substantially throughout the period in focus, although from a very modest level. The proportion of public institutions operating with some spread of outcome demands has increased from about 10 pct. in 2002 to almost 80 pct. by 2011. The share of result claims that may be designated as outcome demands has risen from only about ½ pct. in 2002 to about 20 pct. in 2011.
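As a consistency check, the bottom row of Table 2 can be reproduced by dividing the outcome-claim totals by the corresponding result-claim totals from Table 1, as the sketch below shows.

```python
# Consistency check: the bottom row of Table 2 (outcome share of result
# claims) follows from dividing the outcome-claim totals in Table 2 by the
# result-claim totals in Table 1.

result_claims = {2002: 5219, 2006: 3381, 2009: 3058, 2011: 1871}   # Table 1
outcome_claims = {2002: 21, 2006: 136, 2009: 465, 2011: 372}       # Table 2

for year in result_claims:
    share = 100 * outcome_claims[year] / result_claims[year]
    print(f"{year}: {share:.1f} % of result claims are outcome demands")
# 2002: 0.4 %, 2006: 4.0 %, 2009: 15.2 %, 2011: 19.9 %
```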
7. On the track of outcome-based Governance?
Apparently a growing consensus is emerging on the limits of measurements. Probably the range of measurements concerning public sector efforts has never been more
widespread and deliberate. Nevertheless, the added value in terms of better – more
qualified and effective – governance is often questionable and surely very often not
evident. Resources needed for measuring are often considerable.
This experience calls for reform of the track and tools of public governance. A basic ambition should be the collection of data and evidence that is usable for management and governance purposes, and not merely data collection in order to legitimise ongoing activities with more or less diffuse documentation. In other words, to make a systematic shift in focus from »We have done…« to »The outcome of our effort is…«.
A turn of the perspective away from a simple input-output concept towards an outcome perspective calls for keen involvement of the professionals in designing and following up on outcome verification. If measuring and reporting are not an integrated and meaningful part of the operational everyday life of public institutions, the risk of distortion and lack of validity seems great.
However, the road to outcome measurement is not an easy one. From time to time we will therefore have to refrain from direct measurement of the outcome at the societal level (level 1) in favour of measuring the targeted outcome at level 2 – the level of the user or recipient of public output (citizen, company or public institution) – be it benefits or regulations.
Still, a reasonable and maybe limited verification of the outcome on level 2 makes
more sense than more or less random registration of activity data decoupled from
outcome goals or maybe based on tacit and dubious assumptions on causality between activity and expected outcome.
The experience of the Danish Agency for Governmental Management supports the above deliberations – not least in terms of its indirect relation to level 1 outcome, i.e. outcome on a general societal level.
7.1. A policy statement as framework
When the Danish Ministry of Finance launched its latest overall policy statement on administrative policy – »Accountable Governance«, February 2010 – a framework for the encouragement of management supported by outcome information was set.
Basically, it is recommended that management by performance and results pay attention to the following »guidelines for inspiration«:
– Few strategically important goals
– Focus on results and outcome
– Focus on coherence between costs and performance/results
– Long-term governance
– Focus on customers/clients/end-users
A basic orientation towards societal outcome and indicators of outcome on a societal level is recommended. At the same time, however, it is admitted that measuring the final societal outcome of public efforts can be very difficult and may in specific cases require a disproportionate amount of resources. Under such circumstances the ratio between added governance value and the costs of outcome measurement must be carefully assessed.
A concept of contractual management is still envisaged, though not as a compulsory tool. Ways and means to better outcomes are the responsibility of local top management (CEO and board, etc.). However, it is required that any public institution lay down goals reflecting its core responsibilities vis-à-vis the authorisations given by the National Budget Proposition, and follow up on actual performance in order to report at least once a year.
By stating these framework conditions the Ministry of Finance also aligns with the
principles of accrual budgeting and accounting that were introduced in the beginning
of this century and fully implemented by 2007 as regards state institutions. The »accrual reform« intends to expand the free scope of top management of state institutions, i.e. to underline local accountability and active leadership.
8. Next steps: Empowerment of Governance – or the inevitable
exhaustion of a concept?
The overall and ongoing agenda is how to improve public governance so as to simultaneously achieve a constant lowering of administrative costs and a constant increase in performance and outcome at a societal level.
A more deliberate and stringent focus on outcome seems reasonable in order to improve the potential of outcome information in support of public governance. A very strong premise of this approach is the call for an ongoing focus on the raison d'être, verified through information on outcome – the ultimate claim for impact on a societal level.
However, this strategy too has its limitations and challenges. One of these may be the well-known administrative experience of growing »fatigue of tools«. As long as a new tool seems fresh and able to deliver better performance than before, it will be supported by its users. It may even be supported beyond the point at which it adds value to governance. But sooner or later the reverse will occur. In the long run any administrative tool can become counterproductive or simply stuck.
From time to time we even talk about certain »schools of administration« that are sustained and defended by circles of administrative practitioners and theorists. But a certain tool may become exhausted by both overdoing and underdoing.
Overdoing, or over-performance, may be connected with the failure of sophistication. Attempts to endlessly refine a basic concept or tool will very likely run into a number of traps, such as a high level of complexity, distortion of the original purpose, too broad a roll-out, an unbalanced need for administrative expertise and – finally – lack of ownership in daily administrative practice.
Under-performance may be connected with the failure of leadership. No tool is better than the hand that drives it. In other words, the commitment of top management to its own governance set-up is essential. The same applies to the choice and implementation of an outcome-oriented concept for the management of a cross-cutting state institution such as the Agency for Governmental Management. This commitment by top management was strong, initiating and devoted from the very beginning in 2007 until late 2011. And top management commitment will still be needed as the driving force.
8.1. Never complete – always on the move for improvement!
No doubt the »outcome-approach« has contributed to a substantial sharpening of at
least two important features of the Danish Agency for Governmental Management
through 2007-2011:
– A more distinct awareness of »the art of implementation« throughout the entire organisation. A principal claim for external effect (outcome) almost inevitably points to the importance of completing the project in question. Semi-manufactured concepts, standards or IT support systems simply do not work and are of no use to customers/clients/citizens/companies. On the contrary, such partly operational tools may easily become costly time robbers. The lesson is that initiative and innovative capability are an advantage, but without proper implementation the intended improvement (outcome) is likely to shrink substantially.
– A more deliberate need for knowledge of »the causality chain« between ends and means in the efforts of the Agency. Often a new effort will be based on tentative assumptions concerning the relation between initiative and result, between intention and consequences in reality. This is to a great extent an inevitable condition of social action. However, the outcome approach calls attention to the need for evidence-based management – as far as possible. The bar is lifted considerably and so is the general level of ambition. We might even talk about a shift in organisational culture. Theory of change has become integrated in daily administrative practice.
The latter feature leads to another expectation about the future frontier of the development of the outcome approach of the Danish Agency for Governmental Management. Most likely, a greater clarification of the operational interconnection between the »products« of the Agency and the intended external outcome will be on the agenda.9
The ability to measure »What has been achieved?« at an overall level covering the entire Agency has improved considerably. The ability to connect certain products with certain intended outcomes has become part of a useful discourse, but the evidence for such causalities is still often rather weak. This calls for a concentrated effort towards further consolidation and connection of both sides of the equation: the commencement of activity (products) and the measuring of the related impact (causal outcome).
It is evident that measurements cannot replace the ability, accountability and art of good leadership. However, only when a more systematic answer is given to the question »How did we achieve the outcome?« can we fully realise the aim of delivering strategic support to top management.
Notes
1. At the end of 2011 a major restructuring of the corporate Ministry of Finance took place. On this occasion the agency was renamed the Danish Agency for Modernizing the Public Sector. For consistency with the period covered, the original name is maintained in the present analysis and presentation.
2. Similar approaches are observable in other Nordic countries.
3. The Danish Agency for Governmental Management (DAGM) was an executive agency within
the Danish Ministry of Finance until the end of 2011. See note no. 1 above. Other agencies within the then corporate organisation of the ministry were the Agency for Governmental IT, The
Palaces and Properties Agency, The State Employers Authority.
4. As of January 20, 2011, the Agency for Governmental Management was split into two agencies: the »Agency for State Administration« (SAM), which holds a portfolio of all the cross-cutting shared services formerly within the AGM, and the Agency for Governmental Management (AGM), which continued as such with a portfolio marked by operational regulatory functions and developmental functions concerning management concepts, consultancy and guidance, cross-cutting digital administrative solutions etc. Further, see note no. 1 above.
5. The engaged consultant was based in the Danish branch of Accenture Ltd. and, as interim in-house consultant, was an indispensable resource in the introduction and adaptation of the »Public Sector Value« model to the Agency. The so-called PSV model was initially developed by Martin Cole and Greg Parston, both of whom held leading positions within Accenture Ltd. For further information, see »Unlocking Public Value«, 2006.
6. However, this organisational change took another step when the Administrative Service Centre, by January 20, 2011, was separated out into a new individual entity, »Statens Administration« (the Agency of State Administration). Looking forward, this underlines the need for robust methodology when continuing the build-up of time series of outcome measurements.
7. Ansvar for styring – vejledning om styring fra koncern til institution, Finansministeriet, februar
2010.
8. This particular perspective has been stressed by new requirements 2011/2012 on budgeting,
budget control and budget follow up – throughout the entire state administration.
9. The ongoing implementation of a new structure and – partly – new functions of the Agency since the end of 2011 is already giving some evidence on this point. See note no. 1 above.