Lower crime AND incarceration?

A bunch of very effective and well-evidenced programs from the US.

I had the good fortune to attend a presentation by Mark Kleiman, given to the members of the NSW Government Better Evidence Network in late 2013. I was incredibly impressed. This was the best presentation on evidence-based criminal justice programs I’ve ever seen.

As the Ministry of Justice in the UK consults on their new approach to probation, I think it’s timely to check out the parole approaches promoted by Kleiman.

Basically, Kleiman argues that parole systems offering severe, delayed punishment are ineffective and the approaches that work are “swift-certain-not-severe”. Read his article A New Role for Parole – he says it so well I find it hard to add anything!

I’ll throw in a quote and let you research the details of these programmes yourself.

The best-publicized program built on this set of principles is the HOPE program in Honolulu, which requires random drug tests of probationers and, for those who fail, an immediate short stint (typically two days) in jail, with no exceptions. The SWIFT program in Texas, the WISP program in Seattle, the Swift and Sure program in Michigan, and Sobriety 24/7 in South Dakota all work the same way, and all have the same results: drastic reduction in illicit-drug use (or, in the case of 24/7, alcohol abuse), reoffending, revocation, and time behind bars.

There’s nothing surprising about the fact that this approach works—it’s simply the application of well-known behavioral principles to a fairly straightforward problem. What is surprising is how well it works. In Hawaii, HOPE clients are mostly longtime criminally active drug users with a mean of seventeen prior arrests. A drug treatment program would be delighted if it could get 20 percent of such a population into recovery—and most would quickly drop out and go back to drug use. But in a carefully done randomized controlled trial with 500 subjects, eight out of ten assigned to the HOPE program finished the first year of the program in compliance and drug free for at least three months, with no rearrest. Most of them either never had a missed or dirty test (which would have led to a forty-eight-hour jail stay) or had only one such incident. That suggests that more than mere deterrence is at work; HOPE clients seem to be gaining the ability to control their own behavior.

How can people with more information be both more confident and more wrong?

I become truly frustrated when faced with someone who insists they are an expert in something they know absolutely nothing about. As a believer in evidence, my first instinct is to provide them with more information, but perhaps this isn’t always a good idea.

The overconfidence effect is a decision-making bias where a person’s confidence in a judgement is greater than its accuracy. A great article by Hall, Ariss and Todorov (2007), The illusion of knowledge: When more information reduces accuracy and increases confidence, asked participants to predict and bet on basketball games and found that giving them more information increased their confidence without improving the accuracy of their predictions. Their findings have implications for settings such as political campaigns, where providing voters with an abundance of accurate information to counter false claims may not have the desired effect. Nyhan and Reifler’s paper When Corrections Fail: The Persistence of Political Misperceptions describes this ‘backfire effect’, in which corrections to mock news articles sometimes reinforced belief in the original, incorrect claim.

So sometimes holding back on the evidence lesson is a good idea with an ignorant audience. And perhaps if I’m so sure I’m right, I might have it so wrong I should be keeping my mouth shut anyway!

Bolshy Divas – gathering grassroots voices and stories to effect policy change

Nothing communicates a policy change or program evaluation like the story of someone whose life was changed. However, when it comes to making decisions based on evidence, one anecdote isn’t enough. But what if you have lots and lots of anecdotes? Then you have a very strong picture of what life is like for the beneficiaries of policies and services.

The Bolshy Divas are an anonymous grassroots collective formed in Western Australia and have become a national voice on disability reform. They campaign for genuine consultation, jargon-free documents and transparent processes.

My favourite part of Karen Soldatic and Terence Love’s article New forms of disability activism: who on earth are the Bolshy Divas? from Ramp Up on the ABC reads:

Faced with resistance from the WA Government to disability reform, in 2010, they amassed forces and gathered 100 stories of unmet need virtually overnight. These stories were presented to Premiers and the Prime Minister at the COAG meeting when the WA Government wavered on their commitment to a National Disability Insurance Scheme. The WA Premier, Colin Barnett, backflipped and slipped into the ranks of NDIS supporters, leaving WA Government staff and advisors perplexed…

I love the humour and realism of the Bolshy Divas’ campaigns, and their respect for the human qualities of the politicians they target.

Governments in Australia find it difficult to consult effectively with the diverse range of beneficiaries of their services – working with groups like the Bolshy Divas to access community voices would strengthen the evidence available to make and illustrate policy.

DFID paper on impact evaluations in international development – so useful!

This Department for International Development (DFID) UK working paper is fantastic – so useful to have a summary of impact evaluation set out so clearly.

DFID Working Paper 38: Broadening the range of designs and methods for impact evaluations. The paper is available as a PDF on the DFID website.

This report brings together the findings and conclusions of a study on Impact Evaluation (IE) commissioned by DFID. It comprises an executive summary and 7 chapters:

  • Introducing the study
  • Defining impact evaluation
  • Choosing designs and methods
  • Evaluation questions and evaluation designs
  • Programme attributes and designs
  • Quality assurance
  • Conclusions and next steps

Measuring the effect of interventions that strengthen families? Start here!

Children of Parents with a Mental Illness (COPMI) is an Adelaide-based organisation with a website rich in resources, both for families living with mental illness and for those who support them. I was particularly impressed with the research section of the site – it’s easy to navigate, up-to-date and provides a wealth of information for evaluators of family-based interventions. They list several measures of parental self-efficacy and competence, summarising their reliability and validity, and provide an easy-to-read overview of evaluation. Their research information on young people includes lists of measures of stress and coping, self-esteem, connectedness, knowledge of mental health, strengths and difficulties, and resilience.
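If you’re wondering what those reliability summaries actually boil down to, here’s a minimal sketch (my own illustration, not from the COPMI site) of Cronbach’s alpha, the internal-consistency statistic most scale summaries quote. The scale and responses below are entirely hypothetical.

    # Sketch of Cronbach's alpha: the usual internal-consistency reliability
    # statistic reported for scales like these (illustrative data only).
    import numpy as np

    def cronbach_alpha(items):
        """items: respondents x scale-items array of scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Simulated responses to a hypothetical 5-item parenting-confidence scale
    rng = np.random.default_rng(1)
    latent = rng.normal(size=(200, 1))
    responses = latent + rng.normal(scale=0.8, size=(200, 5))
    print(f"alpha = {cronbach_alpha(responses):.2f}")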

Health correlation fail – misinterpreting association as cause!

Gary Taubes had a great rant in March on misinterpreting correlation in health research. He describes the compliance effect, where the “Girl Scouts” in longitudinal health studies do everything they’re told is good for their health. They end up very healthy, but that doesn’t mean that every single thing they do is good for their health. It’s a great point to keep in the back of our minds when we attempt to attribute an effect, particularly in longitudinal studies.
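For the statistically minded, here’s a tiny simulation (my own sketch, not from Taubes’ piece) of how a hidden healthy-user trait can make a do-nothing habit look protective in observational data. All the variables below are made up for illustration.

    # Minimal simulation of the compliance effect: a latent health-consciousness
    # trait drives both a harmless habit and good outcomes, so the habit looks
    # protective even though it has no causal effect.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    conscientious = rng.normal(size=n)                         # latent "Girl Scout" tendency
    takes_supplement = conscientious + rng.normal(size=n) > 0.5   # habit with no causal effect
    health = 2.0 * conscientious + rng.normal(size=n)          # outcome driven by conscientiousness only

    print("mean health, users:    ", round(health[takes_supplement].mean(), 2))
    print("mean health, non-users:", round(health[~takes_supplement].mean(), 2))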

How do you know when outcome change can be attributed to your intervention?

It was exciting to read the new working paper Addressing attribution of cause and effect in small n impact evaluations: towards an integrated framework by Howard White and Daniel Phillips. While it would be ideal if every intervention we design had enough participants (n), and a large enough impact, to give a statistically significant result at a high level of confidence, there are many reasons why this doesn’t happen. For payment-by-results contracts, in particular social impact bonds, attributing an impact to an intervention is a prerequisite for the transfer of public funds. Funders the world over are also attempting to identify the impact they are making across their portfolios, to increase the effectiveness of their investments. White and Phillips produce a fantastic summary of methods and examples that seek to attribute change to a cause. While their framework of small n methods is helpful, it’s their up-to-date literature review that I find most valuable.
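To see why small n is such a headache in the first place, here’s a rough power calculation (my own illustration, not from White and Phillips), assuming a simple two-arm comparison and a modest standardised effect size:

    # Rough power calculation: chance of detecting a modest effect
    # (Cohen's d = 0.3) at alpha = 0.05 for different sample sizes per arm.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for n_per_arm in (10, 30, 100, 350):
        power = analysis.power(effect_size=0.3, nobs1=n_per_arm, alpha=0.05, ratio=1.0)
        print(f"n per arm = {n_per_arm:4d}  ->  power = {power:.2f}")

With only ten or thirty participants per arm, the chance of detecting a real but modest effect is small – which is exactly why alternative approaches to attribution are needed.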

The paper is published by 3ie: International Initiative for Impact Evaluation. 3ie have developed a database of policy briefs, impact evaluations and systematic reviews. They’re governed and staffed by a global team, and while focussed on international development, their evaluation work is certainly relevant for interventions that alleviate disadvantage at a local or national level.

Will we see more randomised controlled trials in social program evaluation?

Randomised controlled trials are the preferred measurement method for the NSW social benefit bond (social impact bond) trials. The Coalition for Evidence-Based Policy published an overview and demonstration of rigorous-but-low-cost program evaluations in March 2012. The publication highlights the use of randomized controlled trials (RCTs) with administrative data systems, providing a number of examples from existing studies. RCTs are widely considered best practice with respect to program evaluation.
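At its simplest, an RCT run on administrative data boils down to random assignment followed by a comparison of group means. Here’s a minimal sketch with simulated data and a hypothetical outcome measure (not drawn from the Coalition’s examples):

    # Minimal sketch of an RCT analysed from administrative records:
    # randomise, then compare mean outcomes between the two groups.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 2_000
    treatment = rng.integers(0, 2, size=n)                      # random assignment
    outcome = rng.poisson(lam=np.where(treatment == 1, 9, 10))  # e.g. days of service use

    diff = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()
    t_stat, p_value = stats.ttest_ind(outcome[treatment == 1], outcome[treatment == 0])
    print(f"difference in means: {diff:.2f}, p = {p_value:.3f}")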

Why are crime rates in similar towns so different?

A couple of great studies in NSW look at communities with similar demographics but differing crime rates and ask what it is that makes the difference. One, ‘Why do some Aboriginal communities have lower crime rates than others? A pilot study’, was published by R. McCausland and A. Vivian in the Australian and New Zealand Journal of Criminology, vol. 43(2), pp. 301-332. The other, ‘A tale of two towns: Social structure, integration and crime in rural NSW’, was published by P. Jobes, J. Donnermeyer and E. Barclay in Sociologia Ruralis, vol. 45, pp. 224-244.

Big Data… what’s it all about?

I’m still getting my head around the concept of big data, but it’s creating some buzz. From its Wikipedia page:

“Big data” is a term applied to data sets whose size is beyond the ability of commonly used software tools to capture, manage, and process the data within a tolerable elapsed time. Big data sizes are a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data in a single data set.

I quite like Melissa Hardt’s brief and easy-to-read GovLoop blog post summarising big data. In it she refers to the Obama administration’s US$200m “Big Data” Initiative announcement – it ends with a good summary of spending across various departments.

I heard it said that “big data” is a McKinsey term – whether it is or not, they’ve produced a report that scans the data, systems, talent and potential in the US.

The point of interest here for me is data visualisation – real-time, moving pictures of systems. Imagine flexible public services with real-time responses to their service environment…