3ie just released a new working paper on “Designing impact evaluations: different perspectives”

3ie just released a new working paper on “Designing impact evaluations: different perspectives” with contributions from Robert Chambers, Dean Karlan, Martin Ravallion, and Patricia Rogers, now available online at: http://www.3ieimpact.org/admin/pdfs_papers/50.pdf

ABSTRACTS:
Making The Poor Count: Using Participatory Methods For Impact Evaluation

Robert Chambers, Institute of Development Studies, University of Sussex

The starting point for an evaluation is to ask why it is being conducted, who will benefit, and what impact the evaluation will itself have, and how. Participatory approaches and methods fit in a paradigm that is pluralist, evolutionary and iterative. They include stakeholder analysis, individual story-telling, participatory social mapping, causal-linkage and trend and change diagramming, scoring, and brainstorming on program strengths and weaknesses. Well designed and facilitated, participatory methods are rigorous, and besides offering qualitative insights they can count the uncountable and generate statistics for relevant dimensions that would otherwise be overlooked or regarded as purely qualitative. They open studies to the voices of those most affected by a project in ways not possible using more conventional methods, and can make the realities and experiences of poor people count more.

Thoughts On Randomized Trials For Evaluation Of Development: Presentation To The Cairo Evaluation Clinic

Dean Karlan, Yale University and Innovations for Poverty Action / Jameel Poverty Action Lab Affiliate

We were asked to discuss specific methodological approaches to evaluating three hypothetical interventions. This article instead discusses three misperceptions about randomized trials. First, nobody argues that randomized trials are appropriate in all settings and for all questions. Everyone agrees that asking the right question is the highest priority. Second, the decision about what to measure and how to measure it, that is, through qualitative or participatory methods versus quantitative survey or administrative data, is independent of the decision about whether to conduct a randomized trial. Third, randomized trials can be used to evaluate complex and dynamic processes, not just simple and static interventions. Evaluators should aim to answer the most important questions for future decisions, and to do so as reliably as possible. Reliability is improved with randomized trials, when feasible, and with attention to underlying theory and tests of why interventions work or fail, so that lessons can be transferred as well as possible to other settings.

Evaluating Three Stylized Interventions

Martin Ravallion, World Bank

Along with the other panellists in a session of this conference, I was asked to discuss evaluation designs for three stylized interventions: conditional cash transfers, a transport sector program and an anti-corruption commission. This paper records my responses, and elaborates a little on some points, including references to the literature. I begin with some general suggestions on the issues to think about at the outset of any evaluation. I then try to illustrate these points with reference to the three stylized interventions.

Matching Impact Evaluation Design To The Nature Of The Intervention And The Purpose Of The Evaluation

Patricia Rogers, Collaboration for Interdisciplinary Research, Consulting and Learning in Evaluation, Royal Melbourne Institute of Technology

Appropriate impact evaluation design requires situational responsiveness – matching the design to the needs, constraints and opportunities of the particular case. The design needs to reflect the nature of the intervention and the purposes of the impact evaluation. In particular, impact evaluation needs to address the simple, complicated and complex aspects of the intervention. Simple aspects can be tightly specified and standardized; complicated aspects work as part of a causal package; complex aspects are appropriately dynamic and adaptive. Different designs are recommended for each case, including randomized controlled trials (RCTs), regression discontinuity, unstructured community interviews, Participatory Performance Story Reporting, and developmental evaluation.


Christelle Chapoy
Advocacy and Communications
International Initiative for Impact Evaluation (3ie)
Mob: +91 98 10444993
www.3ieimpact.org
