By David Lee on March 25, 2011

The art and science of measuring prevention

This week I am attending the California Coalition Against Sexual Assault’s series of trainings on Measuring Prevention, where California’s rape crisis centers and domestic violence programs are exploring how to evaluate their prevention programs and examining ways to collect and analyze data. Given the small budgets available to implement and evaluate these programs, we are exploring creative ways to do this.

The article Commentary on Foubert, Godin, & Tatum (2010): The Evolution of Sexual Violence Prevention and the Urgency for Effectiveness, recently ePublished in the Journal of Interpersonal Violence, prompted me to think hard about how we measure sexual violence prevention efforts.

Since sexual violence was recognized as a serious problem, a growing consensus has emerged that both intervention and prevention are necessary. But creating change that will actually reduce the rates of sexual violence is quite complex. There are no easy answers, no magic solution, and no one key behavior that will absolutely lead to prevention. Even with a complex issue such as HIV prevention, we know that regular, consistent use of condoms will reduce the incidence of HIV infection. But there is no simple equivalent behavior for rape prevention. (Part of me wants to say there is – just don’t force someone to have sex. While this sentiment is understandable and true, unfortunately this message alone is not really a good strategy for actually ending rape.)

Evaluation is a tool that will assist us in determining how we can prevent rape. I think of the process of developing, implementing and evaluating prevention efforts as a combination of art and science. Science provides essential tools for our work (such as epidemiology to understand the problem, and rigorous evaluation of prevention program effectiveness), but the art is the creativity and innovation that goes beyond what we have done in the past to create new approaches. While science is grounded in testable knowledge, I do not want to lose the art of finding ways to connect with communities and fostering change to shift cultural norms. Some of the greatest movements in our history relied more heavily on the art than the science to foster change. The civil rights movement did not rely on randomized clinical trials to take action to change policies. I want to learn from both the art and science of prevention.

Commentary on Foubert, Godin, & Tatum (2010): The Evolution of Sexual Violence Prevention and the Urgency for Effectiveness looks at the science of evaluation of sexual violence prevention. The authors describe “experimental and rigorous quasi-experimental designs with well-matched control groups that have been replicated by investigators other than the developers as the designs that provide the strongest evidence.” What I found especially useful in this article is the notion that evaluation is part of a process.  The authors suggest that “[r]efining a program through less rigorous evaluation methods is often cost effective and may result in a more cohesive and well-developed program that can then be subjected to more rigorous evaluation.” That process is what I described above as the art.

In this article, the authors critique the published evaluations of John Foubert and his colleagues about The Men’s Program. I too have written previously about some of my concerns about this program. I agree that the one-hour presentation of The Men’s Program is inconsistent with generally accepted prevention principles. I find it hard to imagine that actual behavior change comes from such a short presentation alone.

However, I do like the methodology of interviewing participants seven months after the presentation, as Foubert, Godin, and Tatum did in their evaluation. That is a good early evaluation effort to inform a program’s development and determine whether the practice has promise. The authors of the commentary suggest that over time the rigor of evaluation should increase.

What does this mean for the prevention practitioner? At this time there is very little research on sexual violence prevention programs that demonstrates effectiveness at reducing rape rates. As we read the research that is released, I agree we need to look at the rigor of the evaluation method. Unfortunately, the cost and time commitment for quality effectiveness research is high. In our prevention efforts we must use systematic evaluation efforts such as process evaluation and measures of potential indicators of change to inform a program’s development and implementation – that is the art of creating and conducting a prevention program.

However, that is not enough. Over time we will need to conduct more rigorous research to determine the impact of our work – to find a way to use science as a tool to understand what our prevention efforts can do. In the meantime, we must continue to innovate – that is, engage in the art of creating new approaches to change.

How are you using both art and science to improve your prevention efforts?

Here is the full abstract and references to the article:

Commentary on Foubert, Godin, & Tatum (2010): The Evolution of Sexual Violence Prevention and the Urgency for Effectiveness

Andra L. Teten Tharp, Sarah DeGue, Karen Lang, Linda Anne Valle, Greta Massetti, Melissa Holt and Jennifer Matjasko, Journal of Interpersonal Violence, 2011, ePublished February 28, 2011

The abstract is available on the journal’s web site.

Foubert, Godin, and Tatum describe qualitative effects among college men of The Men’s Program, a one-session sexual violence prevention program. This article and the program it describes are representative of many sexual violence prevention programs that are in practice and provide an opportunity for a brief discussion of the development and evaluation of sexual violence prevention approaches. In this commentary, we will focus on two considerations for an evolving field: the adherence to the principles of prevention and the use of rigorous evaluation methods to demonstrate effectiveness. We argue that the problem of sexual violence has created urgency for effective prevention programs and that scientific and prevention standards provide the best foundation to meet this need.

Photo by Addison Berry. Use permitted by Creative Commons.

David Lee


David S. Lee, MPH, is the Director of Prevention Services at the California Coalition Against Sexual Assault where he provides training and technical assistance on prevention. David manages the national project PreventConnect, an online community of violence against women prevention practitioners, funders, researchers and activists. For over 27 years David has worked in efforts to end domestic violence and sexual assault.

2 comments

Brad March 30, 2011 at 8:07 am

The Evolution of Sexual Violence Prevention and the Urgency for Effectiveness is an EXCELLENT article and precisely describes many of the problems found in measuring prevention programs (and in reporting the results). I really loved the manner in which they addressed the issue (and challenges) of outcome evaluation, and the need to use the principles of prevention in determining the promise of a program. This passage is particularly on point in my opinion:

“Psychometrically sound instruments are necessary to overcome the effects of social desirability and demand characteristics that undermine the valid measurement of violence perpetration. As program participants are often aware that rape and sexual assault are socially undesirable behaviors and because a participant may only regard the most severe acts (e.g., forcible rape) as sexually violent rather than considering all possible sexually violent acts (e.g., sexual harassment and coercion; Basile & Saltzman, 2002), subjective measures of attitude and behavior change administered in a focus group or one-on-one (i.e., Foubert et al., 2010; Foubert & Perry, 2007) are poor proxies for actual behavior change. While qualitative measurement of outcomes can be useful in the very early stages of program evaluation, reliable and valid measures of behavior (e.g., Sexual Experiences Survey; Koss & Oros, 1982) administered repeatedly over time are necessary in evaluation research to establish a program’s effectiveness in changing behaviors…The translation of these questionable results into practice is problematic, as practitioners may select programs that claim to produce behavior change, when in fact the methods used do not provide sufficient evidence to support these conclusions”

They discuss the utility of focus group / qualitative data as a way to see if a program is on the right track (which I was glad to see), and go on to make a crucial point about problems that can arise when a program makes claims beyond what the outcome results reasonably indicate – but does so in a manner that most practitioners (at least those without a background in the scientific method, measurement, or evaluation) wouldn’t be able to catch.

An important consideration that underpins all of this is the manner in which program evaluation is intermingled with financial and political concerns. Pressures from self and institutions (e.g., universities, publishers, etc.) to promote a given SV prevention program as “outstanding” or “the best” are just as common in the field of SV prevention as they are anywhere else. Program developers/authors want recognition, clout, and consulting fees, and institutions want status and revenue. These considerations are bound to affect the manner in which a SV prevention program is developed and evaluated, and how those outcomes are reported and promoted. In nearly 15 years of doing SV prevention work I’ve seen these types of should-be-extraneous concerns repeatedly creep into this field under the guise of “science,” and do so across a variety of curricula, programs, etc. It frustrates me because if I see a program being oversold (e.g., the results/impact being “spun” as more impressive than is warranted), it tells me that the priorities of that program, its developers, and its supporting institution are out of whack. It tells me that people are not acting in good faith. Sometimes this is made all too clear. I’ve seen certain established prevention programs threaten legal action against anyone questioning their programmatic claims (folks who’ve been on Prevent Connect since the very early days might remember the situation I’m talking about…though there have been several instances of this across multiple programs of which I am aware), a SV prevention program author ambushing a “competing” author at professional conferences with one-sided handouts, and borderline intimidation tactics being used at a meeting by a consultant who apparently felt that their program should be above scrutiny.

I’d love to see our field preempt this type of counter-productive (and petty) behavior, and I think evaluating programs against benchmarks that map to the principles of prevention (though perhaps a modified, more SV-specific set of principles) is the way to go. Also, not accepting a program’s claims until it has been evaluated by outside, independent (as in, not at all associated with the program author(s)) researchers might also be a good thing to always keep in mind. I’m not sure if any SV prevention program has gone through an independent and rigorous outcome evaluation process to date…I could be wrong though…I know the CDC recently began putting a few well-known SV prevention programs through this type of evaluation, which is great.

Bravo to Andra and her co-authors for this article, and for making a strong case for using the principles of prevention!


John D. Foubert, Ph.D. August 22, 2011 at 9:34 am

I hope that readers will give thoughtful consideration to my response to this article, now published online by the Journal of Interpersonal Violence. The abstract for the article is as follows:
Rape prevention programmers and researchers have long struggled to select the most appropriate theoretical models to frame their work. Questions abound regarding appropriate standards of evidence for success of program interventions. The present article provides an alternative point of view to the one put forward by seven staff members from the U.S. Centers for Disease Control and Prevention (Tharp et al., 2011). Questions are posed for readers to consider regarding the appropriateness of the medical model for rape prevention programs, whether randomized control trials are the one and only gold standard, whether programs presented to groups should be evaluated at the group or individual level, whether subscribing to principles of prevention selected by the CDC for other disciplines translate well to rape prevention, what constitutes sufficient dosage, and what constitutes a rigorous research program studying an evolving rape prevention intervention.


