Randomised controlled trials (RCTs) in public policy

[Figure: the basic design of a randomised controlled trial (RCT), illustrated with a test of a new ‘back to work’ programme (Haynes et al., 2012, p. 4).]

In 2012, Laura Haynes, Owain Service, Ben Goldacre & David Torgerson wrote the fantastic paper Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials. They begin the paper by making the case for RCTs with the following four points.

1. We don’t necessarily know ‘what works’ – “confident predictions about policy made by experts often turn out to be incorrect. RCTs have demonstrated that interventions which were designed to be effective were in fact not”

2. RCTs don’t have to cost a lot of money – “The costs of an RCT depend on how it is designed: with planning, they can be cheaper than other forms of evaluation.”

3. There are ethical advantages to using RCTs – “Sometimes people object to RCTs in public policy on the grounds that it is unethical to withhold a new intervention from people who could benefit from it.” “If anything, a phased introduction in the context of an RCT is more ethical, because it generates new high quality information that may help to demonstrate that an intervention is cost effective.”

4. RCTs do not have to be complicated or difficult to run – “It is much more efficient to put a smaller amount of effort [than a post-intervention impact evaluation] into the design of an RCT before a policy is implemented.”
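To make the design concrete, here’s a minimal simulation sketch in Python (my own illustration, not from the paper; the sample size and re-employment rates are invented). Participants are randomly assigned to a treatment group that receives the new programme or a control group that doesn’t, and the difference in outcome rates between the two groups estimates the programme’s effect:

```python
# A minimal simulation of the RCT design described above.
# The sample size and re-employment rates are invented for illustration.
import random

random.seed(42)

n = 1000  # hypothetical number of job seekers in the trial
participants = list(range(n))
random.shuffle(participants)  # random assignment is the heart of the design

treatment = participants[: n // 2]  # offered the new 'back to work' programme
control = participants[n // 2 :]    # receive existing support only

def found_work(in_programme: bool) -> bool:
    """Simulated outcome: a 35% baseline re-employment rate,
    lifted to 42% by the programme (illustrative numbers only)."""
    return random.random() < (0.42 if in_programme else 0.35)

treated_rate = sum(found_work(True) for _ in treatment) / len(treatment)
control_rate = sum(found_work(False) for _ in control) / len(control)

print(f"Treatment group re-employment rate: {treated_rate:.1%}")
print(f"Control group re-employment rate:   {control_rate:.1%}")
print(f"Estimated programme effect:         {treated_rate - control_rate:+.1%}")
```

Because assignment is random, the two groups differ only by chance at the outset, so any difference in re-employment rates beyond what chance would produce can be attributed to the programme.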

Laura and her team are making a huge difference to the way the UK Government perceives and implements RCTs.

The World Bank has also published some fantastic guidance in their Impact Evaluation Overview. This includes information about their Development Impact Evaluation (DIME) initiative, which has the following objectives:

  • “To increase the number of Bank projects with impact evaluation components;
  • To increase staff capacity to design and carry out such evaluations;
  • To build a process of systematic learning based on effective development interventions with lessons learned from completed evaluations.”

I’ve popped both these resources on the Social Impact Bond Knowledge Box page Comparisons and the counterfactual, but thought they were so valuable it was worth expanding on them here.

DFID paper on impact evaluations in international development – so useful!

This Department for International Development (DFID) UK working paper is fantastic – so useful to have a summary of impact evaluation set out so clearly.

DFID Working Paper 38. Broadening the range of designs and methods for impact evaluations. Download PDF here or link to the paper on the DFID website here

This report brings together the findings and conclusions of a study on Impact Evaluation (IE) commissioned by DFID. It comprises an executive summary and 7 chapters:

  • Introducing the study
  • Defining impact evaluation
  • Choosing designs and methods
  • Evaluation questions and evaluation designs
  • Programme attributes and designs
  • Quality assurance
  • Conclusions and next steps

Social impact considered in refusal of alcohol licence

Interesting to see this article in the Manning River Times (16 August 2012) on how the Office of Liquor and Gaming considered evidence from local police in its decision to refuse a discount liquor licence to supermarket giant Aldi in the small town of Taree, New South Wales. The evidence included police statements and crime statistics for the area. While social impact reports are not required for many government decisions, it’s great to see the local police putting a case together for this one.

Measuring the effect of interventions that strengthen families? Start here!

Children of Parents with a Mental Illness (COPMI) is an Adelaide-based organisation with a website rich in resources, both for families living with mental illness and those who support them. I was particularly impressed with the research section of the site – it’s easy to navigate, up-to-date and provides a wealth of information for evaluators of family-based interventions. The site lists several measures of parental self-efficacy and competence, summarising their reliability and validity, and provides an easy-to-read overview of evaluation. Their research information on young people includes lists of measures of stress and coping, self-esteem, connectedness, knowledge of mental health, strengths and difficulties, and resilience.

How do you know when outcome change can be attributed to your intervention?

It was exciting to read the new working paper Addressing attribution of cause and effect in small n impact evaluations: towards an integrated framework by Howard White and Daniel Phillips. While it would be ideal if every intervention we design had enough participants (n) and a large enough impact to give us a statistically significant result at a high level of confidence, there are many reasons this doesn’t happen. For payment-by-results contracts, in particular social impact bonds, attributing an impact to an intervention is a prerequisite for the transfer of public funds. Funders the world over are also attempting to identify the impact they are making across their portfolios, to increase the effectiveness of their investments. White and Phillips produce a fantastic summary of methods and examples that seek to attribute change to a cause. While their framework of small n methods is useful, it’s their up-to-date literature review that I find most valuable.
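To see why small n bites, here’s a back-of-the-envelope power calculation (a sketch in Python with invented outcome rates, not anything from the paper). It approximates the number of participants needed per group before a difference in outcome rates becomes statistically detectable at conventional thresholds:

```python
# A back-of-the-envelope power calculation, illustrating why small n
# evaluations struggle to reach statistical significance.
# The outcome rates below are invented for the example.
from statistics import NormalDist

def n_per_group(p_control: float, p_treatment: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group to detect a difference between
    two proportions (two-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = p_treatment - p_control
    return int((z_alpha + z_beta) ** 2 * variance / effect ** 2) + 1

# To detect a lift from 35% to 42% with 80% power at the 5% level:
print(n_per_group(0.35, 0.42))  # roughly 750 participants per group
```

An intervention serving a few dozen people simply can’t clear that bar, which is exactly why White and Phillips’ small n methods matter.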

The paper is published by 3ie: the International Initiative for Impact Evaluation. 3ie have developed a database of policy briefs, impact evaluations and systematic reviews. They’re governed and staffed by a global team, and while focussed on international development, their evaluation work is certainly relevant for interventions that alleviate disadvantage at a local or national level.

NPC’s shared measurement series – developing common metrics for common outcomes

New Philanthropy Capital (NPC) has so far produced three reports in its Measuring together series. Each report looks at measuring both final outcomes for program participants and outcomes achieved along the way. This is one of those endeavours where we all benefit from the reports and their methodologies, but the benefit of having charities in the same service area work together to develop them may be even greater.

[post edited April 2013]

I’m going to do an update here and just link to NPC’s work Mapping Outcomes for Social Investment, part of developing an impact measurement framework for Big Society Capital – there’s the outcomes matrix, the outcomes maps and ‘The Good Investor’ by Adrian Hornsby and Gabi Blumberg of Investing for Good. The Good Investor is available in one of the most user-friendly formats if you click on that link, and I can’t wait until the matrix and maps go online and PDF-free (hint, hint)!

Will we see more randomised controlled trials in social program evaluation?

Randomised controlled trials are the preferred measurement method for the NSW social benefit bond (social impact bond) trials. In March 2012 the Coalition for Evidence-Based Policy published an overview and demonstration of rigorous-but-low-cost program evaluations. The publication highlights the use of randomised controlled trials (RCTs) with administrative data systems, providing a number of examples from existing studies. RCTs are widely considered best practice in program evaluation.
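The low-cost part comes from not collecting new data: once participants have been randomised, outcomes can be pulled from an existing administrative system (a benefits or court database, say) rather than from expensive surveys, and the analysis itself is a simple comparison of group outcome rates. A hypothetical sketch in Python (the file name and column layout are my assumptions, not from the publication):

```python
# Hypothetical sketch: analysing a completed RCT using outcome flags
# matched in from an administrative data system. The file name and
# column layout are assumptions for the example.
import csv
from statistics import NormalDist

def two_proportion_z_test(hits_a: int, n_a: int, hits_b: int, n_b: int):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a - p_b, p_value

# One row per participant: the randomly assigned group, plus a 0/1
# outcome flag pulled from existing administrative records.
with open("trial_admin_records.csv") as f:  # hypothetical file
    rows = list(csv.DictReader(f))          # columns: id, group, outcome

treated = [int(r["outcome"]) for r in rows if r["group"] == "treatment"]
control = [int(r["outcome"]) for r in rows if r["group"] == "control"]

effect, p = two_proportion_z_test(sum(treated), len(treated),
                                  sum(control), len(control))
print(f"Estimated effect: {effect:+.1%} (p = {p:.3f})")
```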

RBS ranks social enterprises on turnover growth

RBS produces an annual SE100 index that ranks social enterprises purely on their growth in turnover over the previous year. Social enterprises sign up by completing a comprehensive online survey covering organisational features, financial information and social impact. The resulting dataset would be amazing if the data are fairly clean and most organisations complete the survey (it’s quite long and there doesn’t seem to be a way to save, download or print, which might be a barrier). RBS has partnered with Bristol Business School, so I hope we’ll see some interesting papers come out of this survey!
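For concreteness, a ranking like this reduces to simple arithmetic: year-on-year percentage growth in turnover, sorted in descending order. A minimal sketch (the enterprises and figures are invented, since the real index uses survey data):

```python
# A minimal sketch of a turnover-growth ranking like the SE100 index.
# The enterprises and figures below are invented for illustration.
turnover = {  # name: (previous year, latest year), in GBP
    "Enterprise A": (120_000, 180_000),
    "Enterprise B": (500_000, 550_000),
    "Enterprise C": (80_000, 76_000),
}

growth = {name: (latest - previous) / previous
          for name, (previous, latest) in turnover.items()}

for rank, (name, g) in enumerate(
        sorted(growth.items(), key=lambda item: item[1], reverse=True),
        start=1):
    print(f"{rank}. {name}: {g:+.1%} turnover growth")
```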