March 8, 2017

Using Within-Site Experimental Evidence to Reduce Cross-Site Attributional Bias in Connecting Program Components to Program Impacts

Authors

Stephen H. Bell, Eleanor L. Harvill, Shawn R. Moulton and Laura Peck, Abt Global

This paper considers a new method, called the Cross-Site Attributional Model Improved by Calibration to Within-Site Individual Randomization Findings (CAMIC), which seeks to reduce bias in the analyses researchers use to understand what about a program’s structure and implementation leads its impacts to vary. The paper describes the method for potential use in the Health Profession Opportunity Grants (HPOG) program evaluation.

Randomized experiments—in which study participants are randomly assigned to treatment and control groups within sites—give researchers a powerful method for understanding a program’s effectiveness. Once they know the direction (favorable or unfavorable) and magnitude (small or large) of a program’s impact, the next question is why the program produced its effect. Multi-site evaluations offer a chance to “get inside the black box” and explore that question. First, researchers estimate the overall impact of the program without selection bias or other sources of bias, and then use cross-site analyses to connect program structure (what is offered) and implementation (how it is offered) to the magnitude of the impacts. However, these cross-site estimates are non-experimental and may be biased: sites are not randomly assigned their structural and implementation features, so differences in impacts across sites may reflect unmeasured site characteristics rather than the features themselves.
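The two-step logic described above can be sketched in a small simulation. This is an illustrative sketch only, not the paper's analysis: the site features (a case-management component and a staff-to-client ratio), the sample sizes, and the effect sizes are all invented for the example. Within each site, a treatment–control mean difference gives an experimental impact estimate; a cross-site regression then relates those estimated impacts to site features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_per_site = 20, 200

# Hypothetical site-level features: whether the site offers a case-management
# component (structure: what is offered) and its staff-to-client ratio
# (implementation: how it is offered). Purely illustrative.
offers_case_mgmt = rng.integers(0, 2, n_sites).astype(float)
staff_ratio = rng.uniform(0.5, 1.5, n_sites)

# True site impacts depend on the features (unknown to the analyst).
true_impact = 1.0 + 0.8 * offers_case_mgmt + 0.5 * staff_ratio

# Step 1: within each site, randomization yields an unbiased experimental
# impact estimate (treatment-control difference in mean outcomes).
site_impacts = []
for s in range(n_sites):
    treat = rng.integers(0, 2, n_per_site)  # within-site random assignment
    outcome = treat * true_impact[s] + rng.normal(0, 1, n_per_site)
    site_impacts.append(outcome[treat == 1].mean() - outcome[treat == 0].mean())

# Step 2: cross-site regression of estimated impacts on site features (OLS).
X = np.column_stack([np.ones(n_sites), offers_case_mgmt, staff_ratio])
coefs, *_ = np.linalg.lstsq(X, np.array(site_impacts), rcond=None)
print(coefs)  # [intercept, case-management coefficient, staff-ratio coefficient]
```

In this simulation the features are assigned independently of everything else, so the cross-site regression recovers the feature effects; the bias problem arises in real evaluations precisely because sites choose their own features.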

The CAMIC method takes advantage of the randomization of a program component in only some sites to improve estimates of the effects of other program components and implementation features that are not, or cannot be, randomized.
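One way such a calibration could work in spirit is sketched below; this is an assumption-laden illustration, not the paper's actual estimator. Suppose unobserved site quality confounds the cross-site regression, biasing the coefficients on two hypothetical components A and B. If component A was randomized within some sites, its experimental effect estimate provides a benchmark; the gap between the cross-site and experimental estimates of A measures the attributional bias, which can then be used to adjust the coefficient on B—under the strong assumption that both components suffer similar bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # many sites used only to keep the illustration stable

# Unobserved site quality that confounds cross-site comparisons.
u = rng.normal(0, 1, n)

# Hypothetical binary components A and B; "better" sites (high u) are more
# likely to offer each one, so naive cross-site estimates are biased.
has_a = (u + rng.normal(0, 1, n) > 0).astype(float)
has_b = (u + rng.normal(0, 1, n) > 0).astype(float)

# Experimentally estimated site impacts (true effects: A = 0.8, B = 0.5),
# contaminated by the unobserved quality u.
impacts = 1.0 + 0.8 * has_a + 0.5 * has_b + u + rng.normal(0, 0.2, n)

# Naive cross-site regression: both coefficients absorb the confounding.
X = np.column_stack([np.ones(n), has_a, has_b])
_, beta_a_naive, beta_b_naive = np.linalg.lstsq(X, impacts, rcond=None)[0]

# Experimental benchmark for component A, as if obtained from the subset of
# sites that randomized A within-site (simulated directly here for brevity).
beta_a_exp = 0.8 + rng.normal(0, 0.05)

# Calibration in spirit: use the gap between the cross-site and experimental
# estimates of A to correct the coefficient on B.
bias_hat = beta_a_naive - beta_a_exp
beta_b_calibrated = beta_b_naive - bias_hat
print(beta_b_naive, beta_b_calibrated)  # calibrated estimate closer to 0.5
```

The "similar bias across components" assumption is the sketch's own simplification; how CAMIC formalizes and relaxes such assumptions is the subject of the paper.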