European Conference on Quality in Official Statistics
Helsinki, Finland, 4-6 May 2010
Quality management in a changing environment: Eurostat's
perspective
Marie Bohatá, Martina Hahn,
Eurostat, European Commission, L-2920 Luxembourg,
[email protected], [email protected]
Keywords:
European statistical system, European Statistics Code of Practice, integrated statistical production
system, quality of data from multiple sources, quality concepts, communicating quality to users
ABSTRACT
The European Statistical System needs to cope with increasing complexity in measuring, assuring and reporting on the quality of data stemming from multiple sources and resulting from integrated production processes. Both an increased mix of data sources - which may in addition differ across countries - and changes in the paradigm of the production and dissemination processes of European statistics might substantially influence the definition of quality.
The paper analyses different concepts of quality relevant under particular circumstances and
discusses possible impacts of evolving standards on quality management in Eurostat and the
European Statistical System (ESS). Communication with users on both the concepts and final
outputs is at the centre of a review of the ESS quality framework. The paper also explores some of
the resulting challenges in the context of the European Statistics Code of Practice and its potential
adjustments.
1. Introduction
In line with the European Statistics Code of Practice, the quality of European statistics is defined in terms of their institutional environment, the statistical processes for their organisation, collection, processing and dissemination, as well as the extent to which they meet users' needs, i.e. the extent to which they are relevant, accurate and reliable, timely, coherent, comparable and readily accessible.
So far statistical institutes' efforts in developing methods and tools to assess, improve and manage the quality of statistical data have mainly focused on pre-defined inputs and outputs. While inputs traditionally comprise homogenous data sources, such as surveys or administrative sources, the European Statistical System (ESS) has in recent years gained considerable experience in dealing with complex input schemes, involving a mix of sources at national and/or European level contributing to European statistics in a given domain. Accordingly, European statistical requirements to a large extent focus on the definition of output characteristics, including in terms of quality.
Whereas input quality criteria targeting production process characteristics require at least some uniformity of data processing, output quality criteria are defined with a specific output in mind. Moreover, in the European Statistical System they result from an arbitration process with main users targeting not only a specific output but even its specific use. In turn, once defined, output quality criteria set the framework conditions for the design of the production process.
The Commission Communication on re-engineering the production of European statistics (European Commission, 2009) points the way towards greater efficiency and integration of the production and compilation of European statistics, inter alia through greater exploitation of existing data collections for statistical purposes, their integration with statistical data in a warehouse-type environment, and data linking and data matching. It also invites the use of further data sources, not yet systematically exploited, for statistical purposes. This may include private sources, such as the internet and opinion polls. Finally, it points towards obtaining data directly from respondents, for example from enterprises' accounting systems.
In a data warehouse approach, data is not produced with a specific use in mind other than its combination and re-combination according to user queries determined ex-post, i.e. after data collection and storage. In this environment, the absence of precise and fixed ex-ante user requirements and the amalgam of data from different sources and of different quality call for revisiting current ESS approaches to quality. This also holds for intermediate cases where data is produced according to pre-determined user needs but then exploited further through data integration or combination in order to satisfy additional emerging needs.
The institutional environment is broadly considered a pre-requisite for data trustworthiness at the level of the institution rather than accounting for differences in individual data process or output quality. To that extent, changes in the production method of European statistics are not expected to affect or require changes to the institutional environment. However, data obtained from non-official sources which are used by the National Statistical Institutes and disseminated together with official data need to be earmarked as different, so that differences, inter alia in the institutional environment in which they have been obtained, are made explicit.
Chapters 2 and 3 aim at providing an overview of how far these changes in the production pattern could affect the definition and measurement of quality in European statistics and what this may entail for official statisticians. Chapter 4 deals with challenges and implications for users of statistics and chapter 5 concludes.
2. Impact of changes in data sources for official statisticians
Primary data collections by statistical authorities for statistical purposes are increasingly
complemented by the use of secondary sources. For the purpose of this paper secondary sources are
defined in a broad sense as data collections originally accumulated for non-statistical purposes by
sources external to statistical authorities, which can be of use for official statistics. The following
sources can be distinguished:
 Administrative sources comprise a data holding containing information collected and maintained for the purpose of implementing one or more administrative regulations (see: http://www.sdmx.org/). Thus, administrative data have generally been obtained on the basis of legislation entailing compulsory response.
 Private sources comprise data collections held by non-government bodies for commercial and non-commercial purposes. While ignored to a large extent by official statistics, mainly for their weaknesses in terms of quality components, ownership rights and accessibility, new sources are emerging with ICT use generating vast data amounts yet to be explored by statisticians.
 Enterprise reporting systems (ERP) can be considered a special case of a private source as they replace enterprises' reporting obligations. To that extent, quality concerns related to the use of other private sources regularly do not apply. Thus, ERP are excluded from the considerations below, except where they are explicitly mentioned.
The European Statistical Framework for data quality reporting comprises a standard (ESQR)
and a more detailed handbook (EHQR) prescribing what to report upon under each quality
dimension and a set of quality indicators in line with the Regulation on European statistics (EC Reg.
223/2009) and the European Statistics Code of Practice. They are generic in that they cover a wide
range of statistical processes (sample survey, census, use of administrative data, use of multiple data
sources, price or other economic index process and statistical compilation) and appreciate the need
for different approaches to be used for different processes. They are complemented by the quality
reporting standards of the Eurostat Statistical Metadata Standard (ESMS) for which implementation
guidelines include recommendations on aggregating (quantitative) quality indicators across
countries. At the same time, statistical legislation details quality reporting requirements for individual domains.
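To make the idea of aggregating a quantitative quality indicator across countries concrete, the following minimal sketch (in Python, using purely hypothetical country figures and weights) computes a population-weighted European average of a unit response rate; the variable names and numbers are illustrative assumptions and are not taken from the ESMS implementation guidelines.

# Minimal sketch with hypothetical figures: population-weighted aggregation
# of a country-level quality indicator (here a unit response rate) into a
# single European figure.
country_data = {
    # country: (unit response rate, target population in thousands)
    "AT": (0.82, 8900),
    "FI": (0.91, 5500),
    "PT": (0.76, 10300),
}

def weighted_indicator(data):
    """Return the population-weighted average of the indicator."""
    total = sum(pop for _, pop in data.values())
    return sum(rate * pop for rate, pop in data.values()) / total

print("European response rate (weighted): %.3f" % weighted_indicator(country_data))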
While experience and guidance used to focus on statistical results obtained through primary sources, literature on quality assessment and quality reporting for administrative data used for statistical purposes is becoming increasingly available (e.g. Daas et al, 2008, Wallgren and Wallgren, 2007). Some examples of quality reporting on statistics based on administrative sources exist in the ESS, in general emphasizing the importance of prior screening of the data sources' suitability and of the quality of metadata, as well as calling for auxiliary sources to assess the main quality dimensions of administrative data prior to their use.
The use of private sources for official statistics entails further challenges for measuring and reporting on the quality of both the sources and the statistics produced. The processes through which they have been obtained are not within the control of the official statistician. Unlike with administrative sources, however, this also holds for their institutional environment, calling for additional documentation requirements and scrutiny of their suitability and conditions for use for statistical purposes:
 Purpose of the data collection: in general, it can be assumed that data collections for private
purposes follow profit considerations making them vulnerable to independence, objectivity
and impartiality concerns.
 Security concerns are in particular relevant when obtaining data directly from enterprises’
reporting systems requiring special safeguards on both sides to avoid unintended data access
by third parties and any misuse of data.
 Availability of data source: given their dependency on funding, the continuity, periodicity and availability of private data sources need special attention. This includes both current and future conditions for their availability.
 Costs: While neutral in terms of respondents' burden, private data sources in general have to be paid for. Where official statistics need to rely on private data sources, the latter may obtain a monopoly position which will be reflected in their pricing. ERP do not entail per-unit costs for the statistical authorities but necessitate substantial investments in reporting infrastructure. The ESS XBRL project points in this direction.
 Property rights: once entering the statistical authority’s data portfolio, private sources turn
from a private to a public good. This raises property right questions which could either lead
to restrictions in re-disseminating the data or to price increases reflecting the fact that data
can be sold only once.
 Comparability issues will arise where data have been collected for a national market only. In addition, screening the comparability of the definitions used will gain importance. This includes not only the question of how far definitions have actually been complied with but also how far they meet European standards.
 Combinability: Usually private sources will be purchased to complement official data. Thus, an important quality dimension is their suitability to be combined with existing sources (see also chapter 3).
 The combination of objective and subjective information raises a number of questions which are not being dealt with further in this paper.
Other dimensions important for the assessment of the usability and quality of private data can be considered similar or equal to those of primary sources or administrative data and are not repeated here. As with administrative data, statistical authorities need to invest in evaluating the framework conditions under which private data have been produced, prior to assessing the quality of the data and prior to their purchase. In this respect, assessing and reporting on the quality of secondary sources shifts the focus from output characteristics to the institutional environment and the production processes, both of which gain importance as they have not been under the control of the statistical authority and both of which influence the main output quality characteristics. Thus, Eurostat and the European Statistical System will need to invest in assembling and agreeing on the most relevant characteristics in order to complement existing quality reporting standards where needed. This holds both for increasing the comparability of approaches between countries and, where possible, across data sources.
Recommendations for dealing with similar issues relating to the use of administrative sources include the conclusion of service level agreements with data providers. When it comes to private sources, the scope for contracting between data owners and statistical authorities is large; it is particularly large when the private source itself comprises a statistical survey (e.g. an opinion poll) and may increase up to the borderline where a private source de facto becomes an official primary source commissioned to private data providers. Recent Eurostat initiatives towards experimenting with a quality-adjusted Eurobarometer survey point in this direction. In general, private sources may benefit from official statisticians' expertise and experience in quality in statistics, thus facilitating the negotiation of framework conditions and processing, or even establishing some kind of partnership in which quality assurance and the official statistics' quality trade mark are traded against data.
When it comes to the assessment of errors, the official statistician's lack of control translates into missing information, with some error types and sources being impossible to observe (e.g. measurement errors) or to derive from the data (unit non-response). As with administrative data, parallel control surveys can contribute to indirectly assessing data quality. However, given the time advantage usually attributed to private sources over official statistics, this assessment may involve considerable time lags, thus sharpening the trade-off between timeliness and expected gains in terms of accuracy. Official statisticians may thus find themselves in a situation where they cannot assess data accuracy in particular with traditional tools and methods. Information on the production processes may not be fully available or may itself be subject to objectivity concerns. In these situations, official statisticians need to invest in understanding the underlying processes in order to arrive at an impartial assessment of sources and an evaluation of their potential for official statistics. This may hold in particular for newly emerging data sources based on ICT (see Skaliotis, 2009). Ultimately, a situation of uncertainty or doubts about data quality leaves statisticians with the choice - provided they have already obtained the data - not to disseminate it, or to disseminate it with appropriate warnings and full transparency about their concerns, the latter calling for a new approach towards communication on data quality with users, including e.g. experimental statistics.
The purchase of private data sources from the market establishes an interesting case for cross-country co-operation within the European Statistical System. If one ESS member buys data on
another country neither subsidiarity considerations nor confidentiality concerns apply the way they
would if data was collected by statistical authorities themselves. What are the implications for
Eurostat's financial framework?
For the sake of completeness it should be mentioned that other challenges stem from combinations of observations and imputations. Errors can be estimated for surveys using sophisticated techniques, but not for imputations. Model-based approaches used to impute data in order to achieve ex-post congruent data sets will need to be compared at regular intervals with observed results. Where imputation duplicates entries in order to better represent a given population, these entries need to be earmarked as such in view of future re-combination of the data. Both cases call for proper documentation of the methods used, including the weights attached to the respective data sets, as well as for European statisticians to agree on the limits of such approaches, taking into account user needs.
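As one way of picturing such a regular comparison of model-based imputations with observed results, the following minimal sketch (in Python, with hypothetical unit identifiers and values) computes the mean absolute deviation between imputed values and values observed later for the same units; the data and the error measure are illustrative assumptions, not a prescribed ESS procedure.

# Minimal sketch with hypothetical values: comparing model-based imputations
# with values observed later for the same units, as a periodic check on the
# imputation model.
imputed = {"unit_01": 120.0, "unit_02": 95.0, "unit_03": 210.0}
observed = {"unit_01": 118.0, "unit_02": 101.0, "unit_03": 190.0}

def mean_absolute_error(imp, obs):
    """Average absolute deviation over units present in both data sets."""
    common = [u for u in imp if u in obs]
    return sum(abs(imp[u] - obs[u]) for u in common) / len(common)

print("Mean absolute imputation error: %.1f" % mean_absolute_error(imputed, observed))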
3. Impact of data warehousing for official statisticians
The combination of two or more of the above-mentioned primary and secondary sources in order to improve quality or to obtain a statistical output is facilitated by a data warehouse. In this respect, data warehousing on a large scale, i.e. across domains and using a multiplicity of data sources, can benefit from ESS quality assurance experience with the combination and integration of data sources. De facto it may be appropriate to talk about statistical data warehouses, each holding a subset of data with common characteristics.
A priori, fitness for use supposedly increases with data being stored in and retrieved from a data warehouse with
 enriched or new output elements of statistical authorities meeting further user requests (relevance)
 improved frames as well as new sources and improved techniques for imputation, although matching errors can reduce net gains (accuracy)
 combination of existing data replacing additional surveys (timeliness)
 options for data retrieval across Europe depending on the set-up of the warehouse (accessibility)
 overlapping data sets and incoherent data becoming transparent, thus facilitating and demanding respective improvements (coherence, comparability)
 no additional burden on respondents
Seemingly, only the clarity of the data is at risk, given the high complexity of data warehousing in terms of both the multiplicity of source data and the interventions by the statistician.
A special case is ERP systems, which can themselves be considered a data warehouse, assembling enterprise-related accounting information to be exploited for statistical purposes. They may thus be considered both a useful starting point and a benchmark for an ESS strategy towards data warehousing, including in terms of carefully assessing costs and benefits.
Before the alleged benefits of data warehousing can be harvested, setting up and running a data warehouse as well as disseminating its output entail a number of challenges going beyond current ESS quality management practices.
In setting up the warehouse, statisticians will need to define its structure (top-down/bottom-up) and level of detail, choices influencing and to a large degree determining the warehouse's future uses and thereby limiting its flexibility. Statisticians will also need to develop a set of pre-requisites for individual data sources to enter the warehouse. This paradigm may bring about a completely new role for European statistical legislation, no longer targeting individual data collections but defining minimum quality standards for the content of a data warehouse, focusing on output quality and the structural quality of the warehouse, data quantities, common data characteristics and metadata requirements. At the same time statistical legislation will need to be generic in the sense that data collections no longer serve a unique purpose or use. This entails a whole series of issues relating e.g. to the desired quality to target and the acceptable costs and burden, which may imply revisiting the role of the legislator compared to the role of the statistician.
In deciding whether or not a data source enters the data warehouse, the profile of a domain
manager changes from the guardian of an individual data source to the guardian of a data
warehouse, assessing the sources’ potential in combination with other sources. This wider
perspective of a data warehouse influences the design of individual data sources, calling e.g. for the
introduction of further identifiers and a sufficiently high number of core variables which may also
include geo-references. At the same time domain managers reflect upon the utility of an individual
data source beyond a given domain, thus fostering an inter-disciplinary approach. Finally, with de
facto usage manifesting itself increasingly ex-post in a data warehouse environment, domain
managers’ capacities to anticipate user needs will influence the warehouse’s quality.
An increased supply of official statistics in a data warehouse environment could also lead to statisticians fulfilling information needs which have previously been satisfied, or could be satisfied, on the market. This potentially market-distorting effect of data warehousing is a reminder of the need for official statistics to be clearly legitimated and to refrain from collecting - and purchasing - data without clearly defined purposes, a requirement running counter to the idea of a data warehouse whose use is determined ex-post. At the same time, official statistics' value added in terms of professional ethics and confidentiality guarantees can establish legitimacy where the combination of data sources by private data owners may impinge on fundamental privacy rights. These considerations touch upon the role of official statistics in society and require careful balancing involving the relevant stakeholders.
With regard to statistical confidentiality, the increased risk of disclosure of individual information that arises from the increased possibilities to combine data will need to be dealt with. Data security needs to be taken care of in a more complex and fundamental sense than with information stored in distributed databases. This holds in particular for individual data. While the data warehouse's value increases with the amount of individual data, and from a European point of view increases further by allowing for analyses of individual data across Europe, the related data protection challenges in terms of both legal and physical safeguards, as well as the associated risks for European statistics in terms of public opinion (big brother scenarios), may call for an incremental approach based on distributed solutions. In this respect, data warehousing poses challenges for the distribution of tasks within the ESS.
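One simple way of picturing the increased disclosure risk that comes with combining sources is to count how many records in a combined data set are unique on a set of quasi-identifiers. The following minimal sketch (in Python, with hypothetical records and quasi-identifiers) illustrates such a check; it is an illustrative assumption, not an ESS disclosure control method.

# Minimal sketch with hypothetical records: counting how many records in a
# combined data set are unique on a set of quasi-identifiers, as a crude
# indication of disclosure risk after data sources have been combined.
from collections import Counter

combined_records = [
    {"region": "N01", "age_group": "30-39", "activity": "A", "turnover": 1.2},
    {"region": "N01", "age_group": "30-39", "activity": "A", "turnover": 0.9},
    {"region": "N02", "age_group": "60-69", "activity": "C", "turnover": 5.4},
]
quasi_identifiers = ("region", "age_group", "activity")

keys = [tuple(rec[q] for q in quasi_identifiers) for rec in combined_records]
counts = Counter(keys)
unique = sum(1 for key in keys if counts[key] == 1)
print("%d of %d combined records are unique on the quasi-identifiers" % (unique, len(keys)))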
To deal with the high level of complexity in a data warehouse environment, high-quality metadata become paramount. This holds for all stages of warehousing: the entry decision, data processing and output documentation. At the same time there is an increasing need for paradata documenting the statistician's interventions. From the Eurostat point of view, reporting of paradata in a warehouse environment will supplement and to some extent replace traditional data quality reporting, as the way data sources are combined and integrated will be an important determinant of their quality.
Finally, when it comes to guiding users on the use and limitations of the resulting data, the sheer complexity of the underlying processes, stemming from a multiplicity of sources and their combination and integration in complex statistical processes, cannot be captured by a traditional quality report. This holds even more at European level, where complexity increases further. Reporting on the quality of the statistical output of a given domain already poses a number of challenges to domain managers where data stem from different sources:
 the need to distinguish between various assessment levels (data/key output level, data source level, data provider level, European level) in terms of individual quality dimensions
 differences in terms of availability, suitability and measurability of quality indicators for different data sources
 the need to summarise contradictory quality messages for a specific reporting entity (e.g. statistical domain or indicator)
In a data warehouse environment additional challenges relate to
 quality characteristics no longer determined by their source alone but by the kind of output derived from various sources
 data relevance changing with data use
 the need to summarise quality features for a specific reporting entity across domains
 an even greater mix of quality indicators
 interrelationships and trade-offs between quality characteristics calling for multi-domain expertise in assessing their net effects
 lack of ownership by domain managers of the end product and thus lack of knowledge about its quality characteristics
 the impossibility of re-contacting respondents as far as secondary data sources are concerned
 lack of information, or of information attached at the appropriate level, making it difficult to track quality back
 highly complex sets of metadata and paradata shifting domain managers' attention away from work with the data and bearing the potential for a certain fatigue in coping with this environment
Solutions to these issues cannot easily be found. However, the utility of the data warehouse and the opportunities for its use depend on statisticians' ability to communicate this complexity to users in a clear and unambiguous way. The following strategies emerge:
 The importance of communication about data and advice on its use, as well as on possible further data compositions, grows with the data warehouse. Thus statistical authorities need to invest in this communication function.
 In deciding about specific output preferences, users may be given options reflecting differences in quality.
 Difficulties in measuring and reporting output quality will shift the emphasis towards process quality and towards providing the necessary assurances to users that quality has been optimised under the given restraints.
 Reporting on output quality will necessitate a risk-based approach, paying greatest attention to those data sources having the highest impact on overall quality (a minimal illustration follows this list).
 Data obtained within an institutional environment which gives rise to doubts about data credibility and objectivity need to be earmarked as such.
 Quality reporting will need to better reflect limitations for the use of data.
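As a minimal illustration of such a risk-based ordering, the sketch below (in Python, with hypothetical source names, output shares and error figures) ranks data sources by a simple impact score, the product of their share in the final output and an assumed relative error, so that reporting effort can concentrate on the sources with the highest impact on overall quality; neither the score nor the figures are an agreed ESS indicator.

# Minimal sketch with hypothetical figures: ranking data sources by a simple
# impact score (share of the final output multiplied by an assumed relative
# error) so that quality reporting can focus on the highest-impact sources.
sources = {
    # source: (share of final output, assumed relative error)
    "administrative register": (0.55, 0.02),
    "sample survey": (0.30, 0.05),
    "private web data": (0.15, 0.12),
}

impact = {name: share * error for name, (share, error) in sources.items()}
for name, score in sorted(impact.items(), key=lambda item: item[1], reverse=True):
    print("%s: impact score %.4f" % (name, score))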
4. Changing user role and implications for communication on quality
4.1 Changing user role
Official statistics serve the ever-growing information needs of a broad variety of users. Simultaneously, these needs are becoming more complex and cross-cutting. As explained above, this changing environment for official statistics on the one hand necessitates changes to statistical production systems and on the other hand calls for new and more profound communication between statisticians and users.
Users are not homogeneous nor are their needs. As this paper deals with quality management in
a changing environment, it is important to focus on a specific category of statistics which is being
fostered by recent political developments in the EU, and thus is gaining more and more visibility.
We refer to statistics and indicators required to implement specific European policy commitments, e.g. the Maastricht criteria, or to achieve targets that are themselves statistics (headline targets defined within the European strategy for smart, sustainable and inclusive growth), rather than economic, social and environmental characteristics of EU society used for evidence-based policy making. In other words, we have in mind a limited number of highly sensitive statistics/indicators used for politically defined purposes - such as triggering decisions about regulatory activities, including enforcement and sanctions, or the automatic allocation of resources. In some cases they are even defined by politicians themselves. Such specific statistics can be called - with a high degree of caution - "political" statistics. Their specific features raise some questions and pose new challenges for official statisticians: it is not obvious whether they should be treated in the same way as any other European statistics, should somehow be flagged because of their political loading, or should even be treated differently because of their specificity. The authors of this paper favour the last option, which in their view is guided by such principles as transparency, justification, social legitimacy and sincerity as the underlying pillars of integrity.
The European Statistics Code of Practice (CoP) declares values and principles which guide the development, production and dissemination of "traditional" statistics and are common to NSIs and Eurostat. These statistics represent a public good, and impartial treatment of users is a fundamental concept. The scope of statistics produced is defined in a stakeholder dialogue, often led by user councils in which a whole range of users is represented and priorities are set.
At the European level, and in line with Eurostat's mission, we serve some users with priority when it comes to their information needs. In the case of the "political" statistics defined above, their compilation is not subject to a standard cost-efficiency analysis. However, it is especially the usage of "political" statistics which makes them different, and in this context it is legitimate to ask whether the same impartiality in treating all potential users should be required in the dissemination phase.1 Since "political" statistics exhibit differences compared to "traditional" ones, we may ask whether they still belong to the category of public goods.
If we as official statisticians wish to stay relevant and credible, and to respond adequately to emerging needs, we should be open to a debate on what we do and how we work. Moreover, we should not confuse our statistics with reality, as statistics are nothing but an image or conceptualisation of reality. In the case of "political" statistics, not only does a broad dialogue with users have to be held, as we ideally do for traditional statistics, but an explicit agreement on concepts should even be achieved. It stands to reason that this agreement is a prerequisite for a wide acceptance of statistics based on those concepts. Obviously, professional independence in the compilation of statistics founded on agreed concepts is crucial to guarantee that they are objective and based on pure professional judgment, free from all economic, financial and other relationships.
It is the view of the authors of this paper that the European Statistics Code of Practice should explicitly cover the existence of "political" statistics and set appropriate rules for their treatment, especially for a consensus-building mechanism and a dissemination procedure. This would provide an even stronger assurance to users - including the general public - that official statisticians do care about the credibility and quality of their service, which is taking new, sophisticated forms with the development of our societies.
1 For the time being Eurostat exempts EDP, GNI and remuneration coefficients from the impartiality protocol.
4.2 Reporting to users on quality
Quality reporting for traditional statistics is highly developed in the ESS. We should admit, however, that it is addressed to informed users rather than to the public at large. Reporting is often
required by legislation and relates to individual statistical domains. Current quality reporting is
predominantly driven by producer perspectives and focuses on quality dimensions defined first in
the ESS Quality Declaration, later in the CoP and most recently in the EC Regulation 223/2009 on
European Statistics. In principle, all quality dimensions are treated equally. However, users may
attach different importance to individual quality dimensions depending on their concrete needs (e.g.
in some situations timeliness might have priority over accuracy and in others vice versa). The "fit for purpose" concept is a response to the fact that different uses of statistics might require different quality, emphasising different features and implying trade-offs.
New production methods for multipurpose statistics may require a review of the current concept of quality and its underlying dimensions, as quality is becoming an even more complex issue in this context. It seems, however, that official statisticians move only very slowly from a traditional understanding of quality in absolute terms to a relative concept led by the (multi)purpose of the usage. Under these circumstances, we may assume that the role of users will increase. Instead of relying on ready-made products of a given quality, users can, through data warehousing, exercise more influence on tailor-made statistical products fit for varying purposes.
The quality of official statistics is taken for granted and is one of the main attributes distinguishing them from statistical information provided by other bodies. If statistical authorities - in order to respond quickly to emerging user needs - engage in the compilation of experimental statistics with, by nature, a lower reliability, accuracy etc., they should clearly communicate this fact to their users. Consequently, a "stamp" of an official statistical authority would no longer be seen as a synonym for a certain (standard) level of quality, and especially less informed users, such as the
general public, would have to understand this shift. For this purpose quality labelling could provide
a useful communication tool. While labelling could be an attractive approach simplifying
communication on quality, especially for single purpose statistics, multipurpose statistics by
definition should be fit for several purposes, very likely with different importance of individual
quality dimensions. Thus, multipurpose statistics might represent a challenge for the philosophy of
labelling.
5. Conclusions
An increased mix of data sources for the production of European statistics calls for additional
documentation of the underlying institutional environment and production processes. In producing
and disseminating statistics based on secondary sources, European statisticians will need to invest in agreeing on the most relevant items and in complementing existing quality reporting standards where needed. Parallel and ex-post quality control surveys will increasingly need to be conducted where data stem from sources outside the control of statisticians. At the same time, official statisticians will increasingly need to negotiate with data owners, shifting their role from production to consultancy, with their experience in quality assurance and official statistics' quality mark influencing their negotiating position.
Merging different data sources in a warehouse-type environment may further change the role of Eurostat and of European statisticians, including within society, with European statistical legislation becoming increasingly generic and statisticians covering information needs previously satisfied on the market. At the same time, data warehousing entails challenges for data quality assessment and reporting, inter alia the need to distinguish between different degrees of quality according to different needs and to better communicate limitations of use.
Quality management in the development, production and dissemination of multipurpose
statistics and data warehousing has to take into account varying pertinence of individual quality
dimensions and implied trade-offs. The underlying philosophy of labelling has to be developed
further. In this respect, involvement of users will be of utmost importance. At the European level,
the European Statistical Advisory Committee could naturally take the leading role.
Quality management of "political" statistics has to address their specificity. In the first place
their political adequacy should be assured. A broad dialogue with users on concepts and an explicit
agreement on them are necessary preconditions. Professional independence is crucial to guarantee
that results of statistical work are impartial, objective and based on pure professional judgment free
from all economic, financial and other relationships. However, it should be made clear that
independence starts only after the concepts have been clarified and agreed. Transparency is
important in order to avoid concepts being contested when statistical results are not in line with political wishes and expectations.
Until now "political" statistics have been limited to such areas of Eurostat's responsibility as
GNI, government debt and deficit within the Stability and Growth Pact and correction coefficients
for remunerations of Commission officials. This will most likely change, and NSIs might also face similar challenges as they become involved in the close monitoring of their national targets set under the aforementioned European strategy for smart, sustainable and inclusive growth. National targets will be monitored through statistics produced and disseminated by the national statistical systems, which will assume full ownership of these data. In this respect, their direct use for regulatory processes (which should be distinguished from policy making) will entail further challenges and thus will require deep discussions with stakeholders. The European Statistical Governance Advisory Board, which was set up with the aim of enhancing the credibility of the ESS, could provide guidance.
REFERENCES
Baigorri, A., Laux, R., Radermacher, W. (2009), Building confidence in the use of administrative
data for statistical purposes, The 57th Session of the International Statistical Institute, Durban (South
Africa), 16-22 August 2009
Bohatá, M., Hahn, M. (2009), Eurostat's path towards implementation of the European statistics
Code of Practice and reflection on its future developments. Proceedings of the International
Statistical Conference "Statistics - Investment in the future 2". Prague 2009.
Daas, P.J.H., Arends-Tóth, J., Schouten, B., Kuijvenhoven, L. (2008), Quality Framework for the Evaluation of Administrative Data. Proceedings of the Q2008 European Conference on Quality in Official Statistics, Rome 2008
Hand, D.J. (2004), Measurement Theory and Practice: the world through quantification, Wiley, London
European Commission (2009), Communication from the Commission to the European Parliament
and to the Council on the production method of EU statistics: a vision for the next decade, COM
(2009) 404, 8 August 2009
Regulation (EC) 223/2009 of the European Parliament and of the Council of 11 March 2009 on
European Statistics.
Skaliotis, M. (2009), Official statistics in the area of ubiquitous connectivity and pervasive
technologies. Proceedings of the International Statistical Conference "Statistics - Investment in the
future 2". Prague 2009.
Wallgren, A., Wallgren, B. (2007), Register-based Statistics: Administrative Data for Statistical
Purposes. Wiley Series in Survey Methodology, John Wiley & Sons, Ltd. England.