Participatory Evaluation: Promises and Pitfalls
Brad Cousins, Ph.D., University of Ottawa
CaDEA Workshop Series, Session 3, Yaoundé, October 2010

Evaluation: What is it?
Systematic inquiry for the purpose of judging the merit, worth and/or significance of [programs], OR to support decision making about [them].
Or, said differently: information gathered to determine whether programs or projects are accomplishing what is intended and how to improve them.

Judgement: What is it?
Judgement implies comparison between observations (data, information) and something else:
1. other programs or projects
2. the same program/project at an earlier point in time, and/or
3. an external standard, benchmark, or measuring stick.

Participatory evaluation: What is it?
Evaluation in which trained evaluators work in partnership with program/project stakeholders to produce evaluative knowledge.
It is an approach, not a model, not a design.

Why do it?
– Solve practical problems (pragmatic, instrumental)
– Build skills, working toward self-determination and action (political, transformative)
– Develop understanding and meaning (philosophic)

Two Streams of Participatory Evaluation
– Practical Participatory Evaluation (P-PE): utilization-oriented, problem solving
– Transformative Participatory Evaluation (T-PE): emancipatory, empowerment-oriented

When to do it?
– Formative, improvement-oriented context
– Reasonable consensus on issues
– Commitment is there: organizational, program practitioner, community
– Sufficient resource base of time, money and personnel

Who contributes what?
Evaluator/evaluation team
– Standards of professional practice
– Evaluation logic, knowledge and skill
– (Content/context knowledge)
Non-evaluator stakeholders
– Knowledge of program context
– Knowledge of program function
– (Evaluation logic)

What does it look like? Three Dimensions of Collaborative Inquiry
– Who controls? Control of technical decision making (evaluator vs. stakeholder)
– Who is selected? Range of stakeholders selected for participation (diverse vs. limited)
– How deep? Depth of stakeholder participation (involved in all aspects of inquiry vs. consulted as a source)

Promises
– Enhanced use of evaluation findings (especially learning)
– Process use: individual, group, organization
– Cultural relevance: integration of alternative ways of knowing

Pitfalls
– Threat of self-serving bias
– Power imbalance, conflict in values
– Feasibility: coordination demands, workload demands
– Accuracy: trade-offs between rigour and relevance
– Sustainability: transfer of leadership

Selected readings
Cousins, J. B. (2003). Utilization effects of participatory evaluation. In T. Kellaghan, D. L. Stufflebeam & L. A. Wingate (Eds.), International handbook of educational evaluation (pp. 245-265). Boston: Kluwer.
Cousins, J. B., & Chouinard, J. (2010). A review and integration of empirical research on participatory evaluation. Paper presented at the Virtual Conference on Methodology in Programme Evaluation, University of the Witwatersrand, Johannesburg. http://wpeg.wits.ac.za/
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. In E. Whitmore (Ed.), Understanding and practicing participatory evaluation. New Directions for Evaluation, No. 80 (pp. 3-23). San Francisco: Jossey-Bass.
Daigneault, P.-M., & Jacob, S. (2009). Toward accurate measurement of participation: Rethinking the conceptualization and operationalization of participatory evaluation. American Journal of Evaluation, 30(3), 330-348.