
Philadelphia Reentry Coalition Training Series Recap: Researcher-Practitioner Partnerships

As part of our monthly training series, the Philadelphia Reentry Coalition hosted a training earlier this month led by researchers from Drexel, Temple, and the PA Department of Corrections. The training focused on best practices for building and sustaining effective relationships between researchers and practitioners. Below is a summary of key takeaways from the training.

The Philadelphia Reentry Coalition is a group of 115 Philadelphia agencies and organizations committed to reducing recidivism. Subscribe to our newsletter to receive alerts regarding our future training sessions and much more!

Strengthening Researcher-Practitioner Partnerships (August 14, 2019)

Dr. Ajima Olaghere, Dept. of Criminal Justice, Temple
Dr. Bret Bucklen, Director of Planning, Research, and Statistics, PA Dept. of Corrections
Dr. Caterina Roman, Dept. of Criminal Justice, Temple
Dr. Jordan Hyatt, Dept. of Criminology and Justice Studies, Drexel

Relationship Building

1.) Practitioners and researchers need each other. The relationships are symbiotic, with both parties benefitting from the opportunity to learn more about a certain practice.

2.) There is value in both long-term and short-term partnerships between researchers and practitioners. Too often, the focus falls on developing lifelong partnerships, when many organizations and researchers could benefit immensely, and in some cases equally, from short-term collaborations. That said, long-term relationships are very productive as well.

3.) Cold calls are a great way to start. Neither researcher nor practitioner should shy away from reaching out to ask about the possibility of starting a partnership. If you are not a fan of cold calling, ask around to see whether a colleague knows the person or organization you would like to reach and can introduce you via email.

Program Evaluation

1.) Make sure you have a written logic model or theory of change. It doesn’t have to be fancy, just something that describes your overall goals, objectives, and activities, and the relationships you expect among them. What does your program do, and how and why do you expect those activities to produce which outcomes?

2.) Document everything you can. The more information your organization keeps track of, even outside of a collaboration with researchers, the more productive an eventual evaluation or analysis will be. Use your logic model as a guide so you know what to capture. Even if you are unsure whether a certain piece of information will be useful, keep track of it until you know!

3.) Track program changes. When your program undergoes a shift in practice, pay extra attention to the data you are collecting, and document organizational/program changes. This will allow you to better contextualize your data.

4.) Process vs. impact evaluations. There are different types of programmatic evaluations that can be done in succession or simultaneously. Understanding the purpose of each type will allow you to conduct a more focused and effective evaluation of any given program.

a. Process: Describes the services and activities that were implemented in a program and the policies and procedures that have been put in place.

b. Impact/Outcome: Used to measure a program’s results, or outcomes, in a way that determines whether the program produced the changes in child, family, and system-level outcomes that it intended to achieve.

Resources and Materials

Why Am I Always Being Researched? A guidebook for community organizations, researchers, and funders to help us get from insufficient understanding to more authentic truth. It explores how to level the playing field and reckon with unintended bias in research through an equity-based approach, one that restores communities as authors and owners.

Is This a Good Quality Outcome Evaluation Report? A Guide for Practitioners A guide that introduces and explains the key concepts in outcome evaluation research so that practitioners can distinguish between good- and poor-quality evaluation reports. The intent is to help practitioners 1) understand key evaluation terms and designs, and 2) recognize a well-written evaluation report.

Recidivism Metrics The recidivism metrics established under the Second Chance Reauthorization Act of 2018, passed as part of the First Step Act.

Recidivism Reconsidered: Preserving the Community Justice Mission of Community Corrections An analysis of recidivism as an imperfect and variable statistic. Argues that when used as the sole measure of effectiveness, recidivism misleads policymakers and the public, encourages inappropriate comparisons of dissimilar populations, and focuses policy on negative rather than positive outcomes. Concludes that the solution is not to end the use of recidivism as a justice system measure but to illustrate its limits and to encourage the development and use of more suitable measures, namely positive outcomes related to the complex process of criminal desistance.


