South Africa – Evaluation Week, Uganda, 13 to 17 June – Nox Chitepo

Department of Planning, Monitoring and Evaluation
South Africa’s National Evaluation Policy Framework
Nox Chitepo
National Evaluation Practices: Experience from South Africa
14 June 2016
Kampala, Uganda
Why evaluate?
• Improving policy or programme performance (evaluation for continuous improvement): this aims to provide feedback to programme managers.
• Evaluation for improving accountability: where is public spending going? Is this spending making a difference?
• Improving decision-making: should the intervention be continued? Should how it is implemented be changed? Should an increased budget be allocated?
• Evaluation for generating knowledge (for learning): increasing knowledge about what works and what does not with regard to a public policy, programme, function or organisation.
Where are we at?
• Total evaluations on the NES: 59
• Approved reports: 25
• Served at Cabinet: 13
• Evaluations underway: 16
• Preparation stage: 12
• Stuck: 1
• Dropped: 5
Use of evaluations (programme evaluated – progress)
• ECD – New policy gazetted responding to findings
• Grade R – DBE addressing quality of provision, not just quantity
• CRDP – Substantial revisions to operations
• Recapitalisation (RADP) – Substantial revisions to operations
• Nutrition interventions for children under 5 – Nutrition plan integrated with Food Security; MTSF target to reduce stunting from 21% to 10%
• Restitution – Creating independent Commission on Land Claims; substantial revisions to operations; impact evaluation
• Support Programme for Industrial Innovation – Changes to operation, including addition of a commercialisation stage; re-launched
• Urban Settlements Development Grant – Even before the evaluation was completed, changes were made to guidelines
• Export Marketing Incentive (EMIA) – Changes to operation (removing duplication in operations)
• Policy on Community Colleges – This was a design evaluation, and significant changes were made as a result before the policy was released
Approach – ensuring use and ownership
• Key issues to ensure use:
– Departments must own the evaluation concept and the process, so they must request the evaluation (it must not be imposed on them)
– There must be a learning focus rather than a punitive one, otherwise departments will just game the system – so punish people not because they make mistakes, but if they do not learn from their mistakes
– Broad government ownership – selection by a cross-government Evaluation Technical Working Group, based on importance (either by scale, or because strategic or innovative)
– Evaluations must be believed – seen as credible
– There must be follow-up (improvement plans)
Priority interventions to evaluate
• Large (e.g. over R500 million for national, R50 million for provincial) or covering a large proportion of the population, and not having had a major evaluation for 5 years (this figure can diminish with time)
• Linked to the 14 outcomes in the MTSF/NDP/provincial strategic priorities
• Of strategic importance, and for which it is important that they succeed
• Innovative, from which learnings are needed
• Of significant public interest – e.g. key front-line services
Approach – credibility and transparency
To ensure credibility:
Ensure independence:
• Independent external service providers undertake the evaluation, reporting to the Steering Committee
Ensure quality:
• DPME plays an intensive role supporting the evaluations, with quality assurance at the end of the evaluation
• Capacity-building initiatives
Ensure transparency:
• All evaluation reports go to Cabinet/EXCO/departmental management
• Then to Parliament
• Then onto the web, on the Evaluation Repository
Emerging findings
• Coordination across departments is a major problem – need to find good-practice mechanisms (many)
• Poor administrative data / unavailability of data (many)
• M&E inadequate, sometimes targets not set in advance (many)
• Inadequate use of IT (e.g. use of paper-based systems)
• Initiatives are sometimes not targeted enough and resources get spread too thinly (EMIA, CRDP, CASP)
• Sometimes frameworks are good but not enforced
• Poor management of implementation and operational challenges (many)
Emerging findings
• Services often generic and not targeted enough for different groups – one size fits all
• Sometimes support for beneficiaries is inadequate (e.g. BPS, MAFISA, CASP)
• No explicit theory of change, sometimes no consensus on programme design (many)
• Government is better at spending money and building infrastructure than at achieving behaviour change and outcomes
• No post-support project tracking (making it difficult to measure impact)
• In some instances, government’s role is poorly defined or not clear
Do better? Enabling conditions
• Key role of a powerful and capable central ‘champion’ with sustained political will
• Resources: budget and a talented team to drive the system and solve problems early and rigorously
• Build a coalition across government
• Promote integration of evaluations into the intervention cycle and the performance management cycle
• Utilization seen as the measure of ‘success’; promote ownership; improvement plans implemented
Do better? Enabling conditions
• Strengthen the monitoring system; data availability and management
• Ensure that interventions are strongly designed, with a proper theory of change
• Incentives – co-funding; profiling of evaluations through evaluation networks; publishing and co-authoring; awards such as SAMEA’s, etc.
How can we strengthen the system further?
• Improvement plans and follow-up on use
• Draw out budget implications for National Treasury (NT)
• Communication of findings (policy briefs, social media, Evaluations Update, new website, etc.)
• Building a body of evidence across sectors – Human Settlements: 7 evaluations + synthesis; Rural Development: 5 evaluations + synthesis
• Quick internal evaluations for problem-solving
• Reflect on the evaluation system – slow, too many evaluations, administratively a burden
Asanteni sana; Merci beaucoup; Thank you!
Noqobo (Nox) Chitepo
Director: Evaluation and Research, DPME
[email protected]
www.dpme.gov.za