When Is Randomization Right for Evaluation?

I advocate using randomized experiments because they provide a high level of confidence in the results. But they aren't always possible or appropriate. So what criteria should researchers use to decide when to use them?

How to Help More Americans Get Jobs and Earn More: The WIA Gold Standard Evaluation

Recently published evaluation results found that not all approaches to helping unemployed Americans find work are equally effective. Intensive services led to higher earnings, but more time is needed to determine their medium-term impact.

The Ethics of Experimental Evaluations, Things You Can Learn From Randomized Experiments, and More

Abt evaluation experts Stephen Bell and Laura Peck examine concerns about social experiments and provide ways to avoid common pitfalls.

Performance Measurement? Proceed with Caution

A theme of the "New Management" is to better measure government performance and, in particular, the performance of individual workers. However, one must carefully weigh the costs and benefits of performance measurement.

Internal versus External Validity in Rigorous Policy Impact Evaluations: Do We Have to Choose?

Can researchers give policymakers the right information about what is and is not working for the nation as a whole, especially when research is limited to select pockets of the country? This is not as impossible as it sounds.

Black Boxes, the Counterfactual, and Bringing Order to RCTs

Abt evaluation experts are engaged in discussions to advance the leading edge of evaluation methods. Recently, Laura Peck and Allan Porowski shared insights on the American Evaluation Association blog AEA365.

Getting Meaningful Technical Assistance from Webinars: We Can Evaluate That

The field of technical assistance is changing rapidly. Many organizations that provide national and local technical assistance have moved toward the use of "virtual" TA. How can TA providers evaluate webinar-based TA?

Learning Together: Building Stronger Practitioner-Researcher Partnerships

Promoters of evidence-based policies and practices are seeking to engage practitioners more fully in developing and carrying out technically challenging evaluations — notably randomized controlled trials (RCTs).

How Can We Measure and Evaluate the Racial Wealth Gap?

The current political cycle has prompted renewed interest in the nation's distribution of wealth, highlighting a disparity that is often called the nation's "wealth gap." The gap has been described as a chasm between the riches of the few and the struggles of the poor and middle class. Yet a deeper social disparity has challenged public policy since the Civil Rights movement: even today, the fortunes of people of color remain radically different from those of white people.

Data without Design: Don’t Do It!

In a recent blog post, Jacob Klerman and I argued that administrative data alone cannot answer questions about the impact of a program or intervention unless paired with a good research design. Here is an all-too-typical example of why relying on administrative data, even when they include the primary outcome of interest, is insufficient when a participant's entry into a program cannot be explained. Since my purpose is general and not about the particular study, I've anonymized its description.
