Assessing Program Outcomes Can Be Tricky

I often work with juvenile justice programs and their staff, advising them on research and evaluation issues. I was recently reminded that people still need to hear that pre-/post-outcome comparisons can be a misleading way to judge a program's effectiveness.
In a recent meeting I attended, a program director was defending the effectiveness of his agency's intervention approach. He presented what he believed were solid measures of impact, first citing the rate of offending among his program's clients prior to intake (in terms of average arrests per year).
Then, he told us that this number was cut in half during the first year after a youth completed the program. According to him, this proved that the program was effective.
For emphasis, he added, “With such good before-and-after data, we don't need any more evidence to know that we’re effective.”
Eeek, I thought to myself. 
He clearly didn't realize that his assertion of effectiveness was risky and possibly flawed.
Many people believe that agencies can assess their effectiveness entirely with pre/post comparisons of youth outcomes, such as recidivism or drug use before and after treatment.
Apparently, they do not know about the statistical bias present in that sort of comparison.

This sort of pre/post measure can be distorted by a statistical bias known as a "selection-regression artifact."

The concept was explained very well in 1980 by Michael Maltz and his colleagues, but since I don't know many people who enjoy reading articles about research methods written more than 30 years ago, I've provided a summary in the video above. 
However, if you are the type of odd person who enjoys reading about evaluation methods, here is the full citation:
Maltz, Michael D., Andrew C. Gordon, David McDowall, and Richard McCleary (1980). “An Artifact in Pretest-Posttest Designs: How It Can Mistakenly Make Delinquency Programs Look Effective.” Evaluation Review 4(2): 225-240.
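If you would rather see the artifact than read about it, here is a small simulation. It is a minimal sketch, not the Maltz analysis itself, and every number in it (the offense rates, the intake threshold) is chosen purely for illustration. It enrolls youth who happened to be arrested frequently in the year before intake, applies a "program" with zero real effect, and still produces a sizable apparent reduction in arrests:

import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (not from the article): each youth has a stable
# underlying offending rate, and observed arrests fluctuate around it.
n_youth = 100_000
true_rate = rng.gamma(shape=2.0, scale=1.0, size=n_youth)  # mean: 2 arrests/yr

# Observed arrests in the year before intake and the year after the program.
# Crucially, the program here has NO effect: both years are draws from the
# same underlying rate.
arrests_before = rng.poisson(true_rate)
arrests_after = rng.poisson(true_rate)

# Selection: suppose youth enter the program during a spike in offending,
# here modeled as 4+ arrests in the prior year (a stand-in for any
# "enroll the worst cases" intake rule).
enrolled = arrests_before >= 4

pre = arrests_before[enrolled].mean()
post = arrests_after[enrolled].mean()

print(f"Enrolled youth, arrests/yr before intake:  {pre:.2f}")
print(f"Enrolled youth, arrests/yr after program:  {post:.2f}")
print(f"Apparent reduction: {100 * (pre - post) / pre:.0f}%")

# Expect something like 5.3 arrests/yr before vs. 3.7 after, an apparent
# reduction of roughly 30 percent, produced entirely by selection plus
# regression to the mean. The program did nothing.

The drop appears because youth tend to be referred when their offending is at a temporary peak; in the following year their arrests regress toward their own long-run average whether or not the program did anything.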

Jeffrey A. Butts is Executive Director of the Research and Evaluation Center at John Jay College of Criminal Justice, City University of New York. Since 1991, he has managed more than $8 million of research and evaluation projects including investigations of teen courts and juvenile drug courts, the methods used to anticipate changing juvenile corrections populations, strategies for measuring disproportionate minority contact in juvenile justice, the coordination of substance abuse services for court-involved youth, and the incorporation of positive youth development concepts in youth justice.
Before joining John Jay College, Jeff was a Research Fellow with Chapin Hall at the University of Chicago and before that director of the Program on Youth Justice at the Urban Institute in Washington, DC. He began his career as a drug and alcohol counselor for the juvenile court in Eugene, Oregon.
Go here for more information on his current projects, research, and recent presentations.

Updated: February 8, 2018