The Need for High-Quality Correctional Education Studies

There is a need for more high-quality studies on the effects of correctional education.

Most studies are of poor quality and design and tend to be too general; many rely on historical records and provide insufficient statistical analysis. Improving the quality of these studies is essential if policymakers are to make sound decisions.

Well-Developed Correctional Education Studies Are Lacking

A team of four researchers reviewed 17 studies of the effects of basic education and life skills training in prisons (Cecil, Drapkin, MacKenzie, & Hickman, 2000). They could not draw definite conclusions because well-designed studies were rare, statistical tests were lacking, and the data were conflicting.

Poorly Defined Endpoints

Another problem is inconsistency in how endpoints (target outcomes) are defined. Often they seem to be selected based on what is easy to collect rather than what is meaningful.

Consider the definition of recidivism, an endpoint used to judge the success or failure of an educational program.

The Bureau of Justice Statistics defines recidivism as a criminal act that results in re-arrest, re-conviction, or return to prison within three years of release (Harlow, 2003). But a review of recidivism endpoints across studies turns up more than 20 definitions, including the following (a sketch after the list shows how even two of these can yield different rates from the same records):

  • Return to prison, excluding technical parole violations.
  • Return to a specific jail within five years.
  • Arrest, revocation, or absconding during a 12-month period.
  • Arrest.
  • Return to prison for a new conviction or parole revocation within two years.
  • Return to prison for a new conviction or parole revocation within 54 months.
  • Return to prison within 84 months.
  • Re-committed, re-convicted, or self-reported offending.
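
To see how much the choice of definition matters, here is a minimal sketch in Python that applies two of the definitions above to the same set of release records. The records, field names, and resulting rates are synthetic, invented purely for illustration.

```python
# Hypothetical illustration: the same release records scored under two of the
# recidivism definitions listed above. All data here are synthetic.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReleaseRecord:
    months_to_return: Optional[float]  # None = never returned to prison
    reason: Optional[str]              # "new_conviction", "parole_revocation", ...

records = [
    ReleaseRecord(8, "new_conviction"),
    ReleaseRecord(20, "parole_revocation"),
    ReleaseRecord(30, "new_conviction"),
    ReleaseRecord(60, "new_conviction"),
    ReleaseRecord(None, None),
]

def within_24_months(r: ReleaseRecord) -> bool:
    # "Return to prison for a new conviction or parole revocation within two years"
    return r.months_to_return is not None and r.months_to_return <= 24

def within_84_months(r: ReleaseRecord) -> bool:
    # "Return to prison within 84 months"
    return r.months_to_return is not None and r.months_to_return <= 84

def rate(definition) -> float:
    return sum(definition(r) for r in records) / len(records)

print(rate(within_24_months))  # 0.4 -- 2 of 5 offenders counted as recidivists
print(rate(within_84_months))  # 0.8 -- 4 of 5 offenders counted as recidivists
```

The same five offenders produce a 40% rate under one definition and an 80% rate under another, which is why results from studies using different endpoints cannot be compared directly.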

It’s Not Just the Devil in the Details

Policymakers and managers seeking to design the most effective and cost-effective education and training programs need details on what works and what does not.  Questions they might ask are:

  • What should our first priority be: GED or vocational training?
  • Which are more effective: academic or vocational programs?
  • Which types of vocational training should we offer?
  • What is the most effective method of instruction?
  • Does it matter who teaches?
  • How much effect does the duration of education have?

Most studies conducted so far don’t truly answer these questions. A surprising number simply compared the outcomes of offenders who received any prison education with those of offenders who received none. Policymakers can’t do much with that information other than suggest more education.

The Statistical Void

Suppose you go to a Chinese restaurant with three friends, open your fortune cookies, and discover only three have a message.  It would be wrong to conclude 25% of fortune cookies contain no message.

Yet this is essentially what happens when researchers don’t calculate how many subjects a study needs (the sample size) based on the expected differences between the groups. Instead, researchers often include whatever numbers they can get from available records.
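
As a rough illustration of what such a calculation involves, here is a minimal sketch in Python (assuming the scipy library is available). The assumed recidivism rates of 50% and 40%, the significance level, and the power are hypothetical values chosen only for the example.

```python
# A sketch of an a priori sample-size calculation for comparing recidivism
# rates between two groups (e.g., program participants vs. non-participants).
from math import sqrt, ceil
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate subjects needed per group for a two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = norm.ppf(power)           # value corresponding to the desired power
    p_bar = (p1 + p2) / 2              # average of the two expected rates
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# To detect a drop in recidivism from 50% to 40% with 80% power:
print(n_per_group(0.50, 0.40))  # roughly 388 offenders per group
```

A study that simply enrolls however many offenders appear in the available records may fall well short of that number, in which case a real difference can easily go undetected.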

Formal statistical tests should give completed studies, at a minimum, a p-value and 95% confidence intervals. A p-value indicates how likely it is that a difference as large as the one seen between groups would occur by chance if there were no real difference.

For example, p < 0.05 means there is less than a 5% probability that a difference this large would occur by chance if there were no real difference between the groups; by convention, that is considered a believable, statistically significant result. A p-value of 0.42, in contrast, means there is a 42% probability the difference occurred by chance, so it cannot be treated as real.
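
For concreteness, here is a minimal sketch (again in Python, with scipy assumed) of how a p-value might be computed when comparing the recidivism rates of two groups with a standard two-proportion z-test. The counts are hypothetical.

```python
# Hypothetical two-proportion z-test: did participants return to prison at a
# different rate than non-participants? All counts are invented for illustration.
from math import sqrt
from scipy.stats import norm

def two_proportion_p_value(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for the difference between two observed proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error of the difference
    z = (p1 - p2) / se
    return 2 * norm.sf(abs(z))                            # two-sided p-value

# 120 of 400 participants vs. 160 of 400 non-participants returned to prison:
print(round(two_proportion_p_value(120, 400, 160, 400), 3))  # about 0.003, well below 0.05
```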

A study’s sample is only a portion of the total population, so the recidivism rate measured in a study is an estimate of the true rate in that population.

The 95% confidence interval (C.I.) expresses how precisely the study result estimates the true rate in the total population.

A rate of 25.4% (95% C.I. 22.6-28.2) means the study’s rate was 25.4%, and there is 95% confidence the true value in the population is between 22.6% and 28.2%.  The narrower the 95% confidence interval, the more confidence in the study estimate.  A result of 25.4% (95% C.I. 5.8-48.0) is not as good.  Without confidence intervals, we couldn’t tell the difference between those two results.
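
An interval like the one above can be reproduced, approximately, with the usual normal (Wald) approximation. Here is a minimal sketch in Python; the counts (236 returns among 930 released offenders, and 18 among 71) are hypothetical values chosen so the output resembles the 25.4% example.

```python
# 95% confidence interval for a recidivism rate using the Wald approximation.
from math import sqrt

def wald_ci(successes: int, n: int, z: float = 1.96):
    """Point estimate and approximate 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * sqrt(p * (1 - p) / n)  # margin of error
    return p, p - half_width, p + half_width

p, low, high = wald_ci(236, 930)
print(f"{p:.1%} (95% C.I. {low:.1%}-{high:.1%})")  # about 25.4% (95% C.I. 22.6%-28.2%)

# The same rate from a much smaller study gives a far wider, less useful interval:
p, low, high = wald_ci(18, 71)
print(f"{p:.1%} (95% C.I. {low:.1%}-{high:.1%})")  # about 25.4% (95% C.I. 15.2%-35.5%)
```

The narrower interval from the larger sample is far more useful to a policymaker than the wide one, even though the point estimates are identical.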

Such statistics are essential to properly understanding a study’s results, yet many correctional education studies don’t include them.

Looking Forward, or Looking Back?

The best studies are prospective. Researchers recruit subjects, assign some to the intervention (for example, a vocational training program) and others to a comparison group, and follow all of them for a set period to see whether they reach an endpoint (such as being sent back to prison). Prospective studies are labor-intensive and can take years, but they provide the best data and the most reliable results.

Most prison education studies are retrospective. Researchers take a group of offenders, determine which ones took education programs while in prison, and then check which ones were later arrested or re-incarcerated.

The challenge with retrospective studies is missing data. Prison and parole records may be incomplete, lost, transferred to storage, or subject to confidentiality rules. Researchers may miss critical information (such as whether an offender was re-incarcerated) if the event happened in another jurisdiction.

Publication Problems

Many methodological problems in studies go unnoticed because they receive too little external review before publication. A report published directly by a state Department of Corrections may never be reviewed by social scientists or statisticians outside the research team, and many studies sent to journals are published with little, if any, external review.

There is also publication bias. Journal editors prefer to publish studies with positive results. A well-designed study showing that an intervention has no effect can be crucial, but getting it published can be almost impossible, so we never hear about it.

Raising the Bar on Correctional Education Research

Not all studies are equal, and not all results are valid. The challenge is to determine which studies are most credible, to question their conclusions, and to point out deficiencies. Only then can we hope to raise the standard of research in this important area.

References

Cecil, D. K., Drapkin, D. A., MacKenzie, D. L., & Hickman, L. J. (2000, June).  The effectiveness of adult basic education and life skills programs in reducing recidivism: A review and assessment of the research. Journal of Correctional Education, 51(2), 207-226.

Davis, L. M., Bozick, R., Steele, J., Saunders, J., & Miles, J. N. (2013). Evaluating the effectiveness of correctional education: A meta-analysis of programs that provide education to incarcerated adults. RAND Corporation.

Gaes, G. G. (2008, February 18). The impact of prison education programs on post-release outcomes. Paper presented at the Re-Entry Roundtable on Education, John Jay College of Criminal Justice. New York.

Harlow, C. W. (2003). Education and correctional populations. Bureau of Justice Statistics Special Report, U.S. Department of Justice. NCJ 195670.

Lawrence, S., Mears, D. P., Dubin, G., & Travis, J. (2002, May). The practice and promise of prison programming. Urban Institute, Justice Policy Center. Washington, DC.

Wilson, D. B., Gallagher, C. A., Coggeshall, M. B., & MacKenzie, D. L. (1999). A quantitative review and description of corrections-based education, vocation, and work programs. Corrections Management Quarterly, 3(4), 8-18.