Evaluation Reports: Moving From Dusty Shelves into Action


Melissa Chiappetta
Director of the Center for International Evaluation
I vividly remember sitting in a run-down hotel in a hot, dusty, remote village of South Sudan in 2013, toiling away on an evaluation report into the wee hours of the morning. I was sitting on my bed with the mosquito net tightly tucked around it to prevent the giant bugs from crawling on me. I only had the light from my laptop to work by. The electricity had gone out to conserve power, which also meant the fan I had been running sat motionless. Despite the circumstances, I was excited because I was working on a report with recommendations that I thought would really help to improve lives.

Sadly, it didn’t. 

This fact has been my greatest point of personal dissatisfaction as an international development evaluation specialist. I do not doubt that—just like the report I was working on while in South Sudan—many of the other reports I have worked on have also sat on shelves and gathered dust. 

This is also the source of one of my biggest criticisms of the international development sector. Even though donors and evaluators work hard to generate evidence of what works in development programs and what does not, that evidence is not always used to improve programming. This means that taxpayer dollars spent on international aid are not having the level of impact that they could. This has serious consequences when “impact” means reducing poverty and hunger, improving health and education, and ensuring basic human rights.  

It’s About Timing and Collecting a Sufficient Body of Evidence

Why hasn’t the evidence been used? 

There are several reasons, but I believe the greatest of these are 1) timing and 2) a lack of a sufficient body of evidence to provide policymakers with confidence in a given solution. Until recently, the majority of evaluations I worked on occurred at the end of projects, most of which were about five years long. They provided evidence of what worked well in that project and what did not. They often generated valuable evidence that could be used to inform future programming for a similar follow-on project. But, because most follow-on projects begin right after the end of the last project to avoid a gap in services, the follow-on projects were usually designed well before the evaluation recommendations were released. This is the timing issue in a nutshell. 

The other issue, though, goes beyond individual project evaluations. The issue is that there often isn’t enough evidence of what works in a specific sector in various contexts or how multiple sectors work together in one context. In other words, we simply have not yet done enough research or produced enough evaluations to provide the level of evidence donors and policymakers need to know about what will work where and under what circumstances. 

Especially lacking are evaluations that look across countries or within a country across sectors—at whole systems. Instead, we are often looking at disparate parts of a system in individual countries.  This is largely because, in the past, donor evaluation policies called for evaluations of individual activities or projects rather than systems of activities. Moreover, most donors are set up to work bilaterally, which means a lot of evaluations are procured at the individual country level rather than looking across multiple countries.
 

Donors Embrace Adaptive Management and Systems-Level Thinking

Fortunately, international donors have recently recognized many of these issues. They are now calling for better evidence and more innovative methods for ensuring evidence is both timely and not tied only to individual projects and activities. The United States Agency for International Development (USAID) and the U.K.'s Department for International Development (DFID) have both updated their policies in the past couple of years, with USAID asking implementers to collaborate, learn, and adapt and DFID calling for monitoring and evaluation (M&E) for adaptive management.

We are starting to see those policies reflected in calls for proposals for both new implementation projects and M&E projects. Donors are now issuing more independent M&E contracts that run in parallel with single-project implementation contracts, which is helping to address the timing issue. They are also issuing more program-level evaluations that examine entire systems. For instance, we are starting to see evaluations of USAID's Country Development Cooperation Strategies, which encompass all of USAID's goals for a specific country.

Evaluators Use Better Tools for Learning

This shift in donor focus incentivizes evaluators to more rigorously track inputs, outputs, and outcomes throughout the project cycle using tools such as developmental evaluations (DEs) and rapid cycle evaluations (RCEs), which make the evaluator a more integrated part of the project implementation team. Rather than waiting until the end of a five-year project to determine whether an activity has been successful, DEs and RCEs allow evaluators to pilot new activities, or activities proven in other contexts, and rapidly determine their effectiveness, usually within a year. Both types of evaluations draw on proven evaluation designs and methods to assess short-term and intermediate results. They can be used to compare the cost-effectiveness of two different activities before scale-up or to ensure an activity is having its intended impact.

Implementing project managers can then use the results of these evaluations to adapt and improve their projects. In this way, DEs and RCEs do a better job than typical endline or summative evaluations at measuring learning outcomes for complex programs and systems. This is because they allow for flexibility in programming to adapt to changing contextual circumstances and emerging evidence.

Why We’re Not There…Yet

Despite this shift in the field, I still believe we have room to grow. I am still not seeing a lot of procurements for evaluations that look at what works in a specific sector across countries and contexts. For instance, a common problem donors are grappling with across countries is how we can better engage parents in supporting their young children's learning. Donors have tried a variety of activities, but there is little evidence on which ones work best in which cultural and developmental contexts. I would jump at a chance to develop and contribute this type of evidence to the knowledge base.

I got into this field with grand visions that I would provide international development donors with evidence on what activities work so that they could scale those activities to make the lives of some of the poorest people on the planet a little better. But that's only possible when our evaluation reports 1) ask the right questions and 2) are used. The trend toward M&E for adaptive management and learning gives me renewed hope that those evaluation reports won't sit on the shelf.