What do we know about the Utah SIB results (without a counterfactual)?

The Utah SIB recently paid a return to Goldman Sachs, and press releases from both Goldman Sachs and United Way of Salt Lake deemed the program a success. But this was met with some criticism, most notably in Nathaniel Popper’s New York Times article, Success Metrics Questioned in School Program Funded by Goldman. Now I would argue that success for each stakeholder means achieving whatever they set out to achieve, so claiming success simply means that things happened as you wanted. But we might also assume that a government’s objectives are what it is prepared to pay for via the SIB payment metric.

So how does the payment metric for the Utah SIB work?

For the first-year results, Goldman Sachs was paid 95% of the savings to the state. Savings to the state are calculated as the number of children identified as ‘likely to use special education in grade school’[i] (110 in year one) minus the number of children who actually used special education (1 in kindergarten), multiplied by the cost of a special education add-on for one year ($2,607).
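As a sketch, the payment metric described above can be expressed in a few lines of Python. The headcounts, per-child cost and 95% share are the figures quoted in this article; treat this as illustrative, not as the contract's exact formula, which may round or count slightly differently:

```python
# Utah SIB year-one payment metric, as described above (illustrative sketch).
IDENTIFIED = 110        # children flagged as 'likely to use special education'
USED_SPECIAL_ED = 1     # children who actually used special education in kindergarten
ADDON_COST = 2607       # one-year special education add-on, in dollars
INVESTOR_SHARE = 0.95   # share of calculated savings paid to Goldman Sachs

savings = (IDENTIFIED - USED_SPECIAL_ED) * ADDON_COST
payment = INVESTOR_SHARE * savings

print(f"Calculated savings: ${savings:,}")
print(f"Year-one payment:   ${payment:,.2f}")
```

Note that the $281,550 savings figure quoted later in this article is slightly lower than this back-of-envelope result, so the contract presumably uses marginally different inputs.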

Is that a success?

Well, the program is doing very well at delivering on its payment metric. Of the 110 children identified as likely to use special education, only one of them is using special education in kindergarten. If this is the definition of success, then the program is definitely a success!

[Figure: year-one results, from the United Way (2015) SIB fact sheet]

So what’s the problem?

Many people who aren’t involved in the SIB would define success a little differently to the payment metric: by the reduction in the number of children who require special education support. What we don’t know is how many of the 110 children would have needed special education without the program. I teach my probability classes that ‘likely’ means 50%–80%, but the savings-to-government calculation assumes that 100% of the children would have needed special education without the program. In order to know how much the program improved things for the children involved, we need a comparison group or ‘counterfactual’: an estimate of how many of the children would have ended up using special education anyway. With a counterfactual you can claim you caused the results; without one, you can only say you contributed to them.
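To see why this matters, here is a rough Python sketch of how the implied savings shrink if only 80% or 50% of the identified children (my reading of ‘likely’) would actually have gone on to need special education. The counterfactual rates are assumptions for illustration, not estimates from any evaluation:

```python
# How the 'savings' figure changes under different counterfactual assumptions
# (illustrative only; the 50%-80% range is this article's reading of 'likely').
IDENTIFIED = 110
USED_SPECIAL_ED = 1
ADDON_COST = 2607  # dollars per child per year

for would_have_needed in (1.00, 0.80, 0.50):
    expected_users = IDENTIFIED * would_have_needed
    avoided = expected_users - USED_SPECIAL_ED
    print(f"counterfactual {would_have_needed:.0%}: "
          f"~{avoided:.0f} placements avoided, "
          f"~${avoided * ADDON_COST:,.0f} in savings")
```

Even at the top of the ‘likely’ range, the attributable savings are noticeably smaller than the 100% assumption implies.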

What’s a counterfactual?

A counterfactual or comparison group can be constructed in several ways. “A good comparison group is as similar as possible to the group of service users who are receiving an intervention, thus allowing you to be confident that the difference in outcomes between the groups is only caused by the intervention.”[ii] Some of the more commonly used counterfactuals in SIBs are shown below.

[Figure: commonly used counterfactual approaches in SIBs]

If you would like to know more, I recommend this Guide to Using Comparison Group Approaches from NPC and Clinks in the UK. And for guidance on randomised controlled trials in a public policy setting, you can’t go past the UK Cabinet Office’s Test, Learn, Adapt.

The Utah SIB involved no comparison group – certainly the payment metric didn’t.

So without a counterfactual, what can we say about this SIB?

  • “Of the 110 four-year-olds had been previously identified as likely to use special education in grade school…only one went on to use special education services in kindergarten.”[iii]
  • “These results triggered the first payment to investors for any SIB in the United States.”[iv]
  • “As a result of entering kindergarten better prepared, fewer children are expected to use special education and remedial services in kindergarten through 12th grade, which results in cost savings for school districts, the state of Utah and other government entities.”[v] [note this says ‘fewer children are expected to use’, not ‘fewer children use’]
  • “109 of 110 At-Risk Utah Students Avoid Special Education Services Following High-quality Preschool”[vi] [this would be untrue if the word ‘following’ was changed to ‘due to’ or ‘because of’]
  • “Utah’s [curriculum and testing] methodology was vetted both locally and nationally by early education and special education experts and leaders”[vii]
  • “They lacked certain basic data on what would have been expected to have happened to the students without the Goldman-funded preschool”[viii]
  • “My kids have really grown. I don’t think [my kids] would be where they are if it wasn’t for the preschool. That basic step is what prepares you to succeed in school, and later, in life.”[ix]

What can’t we say?

  • “School districts and government entities saved $281,550 in a single year, based on a state resource special education add-on of $2,607 per child.”[x][we have no idea what they would have spent on this group otherwise]
  • “High-quality preschool changes the odds”[xi][we simply don’t know what the odds would have been without the preschool program, so we can’t say that they’ve changed]
  • “Fewer children used special education services and remedial services by attending the SIB-financed Preschool Program, saving money for school districts and government entities”[xii]

What other SIBs don’t have a counterfactual?

  • UK: ten DWP Innovation Fund programs (seven of which were SIBs) [the Impetus-PEF ThinkForward SIB press release shows similar difficulty to the Utah SIB in understanding the difference made to young people. While 90% of young people engaged in further education, employment or training seems a wonderful result, there is no estimate of what might have happened otherwise.]
  • UK: seven Fair Chance Fund SIBs
  • UK: four Youth Engagement Fund SIBs
  • UK: Manchester Children in Care
  • UK: It’s All About Me – Adoption SIB
  • Canada: Saskatchewan single mothers’ SIB
  • Australia: Newpin SIB (for the first three years while a control group is established)

Note that most government spending on social services is not compared to a counterfactual. Some people argue that the perceived requirement for a SIB counterfactual creates an unnecessary additional barrier to SIB development, but others argue that it’s the best thing about SIBs – for the first time we are having mainstream discussions about standards of measurement and evidence in social services.

If you know of any government-funded social programs other than SIBs that do have a counterfactual, please post a link to them in the comment box below.

Why doesn’t every SIB have a counterfactual?

  • In order to estimate the effect of an intervention with any confidence, you need a large sample size. This is called ‘statistical power’ – I’ve tried to explain it in SIB Knowledge Box: Statistical Power. If a program is working intensively with just a few people, as is the case in Saskatchewan (22 children in SIB), then a reliable comparison to a counterfactual is not possible.
  • It is more work to set up a counterfactual – a similar comparison group must be established, which can take varying degrees of effort. It also takes skill that is in short supply: biostatisticians are among the best resources for this kind of work, and most government statistics units do not have experience in it.
  • Without a counterfactual, results can be counted as they are achieved, rather than waiting for a statistical comparison for the group, so investors can get paid earlier and more frequently and managers can ‘track’ performance.
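The statistical power point above can be made concrete with a standard two-proportion sample-size calculation. This is a sketch using the normal approximation; the 50% vs 35% outcome rates are invented purely for illustration:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate sample size per group to detect a difference between
    outcome rates p1 and p2 (two-sided alpha=0.05, power=0.80,
    normal approximation; z values hardcoded for those settings)."""
    effect = abs(p1 - p2)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil(((z_alpha + z_beta) ** 2) * variance / effect ** 2)

# e.g. to reliably detect a change from a 50% to a 35% outcome rate:
print(n_per_group(0.50, 0.35))  # 167 per group - far beyond a 22-child cohort
```

With only 22 children, as in Saskatchewan, even a large true effect would be statistically indistinguishable from chance.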

As always, if there’s anything in this article that needs correcting or information that should be included, please either comment below or use the contact page to send me an email.


[i] United Way (2015) SIB fact sheet

[ii] NPC & Clinks (2014) Using comparison group approaches to understand impact

[iii] Edmondson, Crim, & Grossman (2015) Pay-For-Success is Working in Utah, Stanford Social Innovation Review

[iv] Edmondson, Crim, & Grossman (2015) Pay-For-Success is Working in Utah, Stanford Social Innovation Review

[v] United Way of Salt Lake 2015, Social Impact Bond for Early Childhood Education Shows Success

[vi] United Way of Salt Lake 2015, Social Impact Bond for Early Childhood Education Shows Success

[vii] Bill Crim, 2015, When Solid Data Leads to Action – Kids’ Lives Improve

[viii] Nathaniel Popper, 2015, Success Metrics Questioned in School Program Funded by Goldman

[ix] United Way (2015) SIB fact sheet

[x] Edmondson, Crim, & Grossman (2015) Pay-For-Success is Working in Utah, Stanford Social Innovation Review

[xi] United Way (2015) SIB fact sheet

[xii] United Way (2015) SIB fact sheet

Rikers Island social impact bond (SIB) – Success or failure?

There’s been a lot of discussion over the past few weeks as to whether Rikers Island was a success or failure and what that means for the SIB ‘market’. You can read the Huffington Post’s coverage, and learning and analyses from investors and the Urban Institute, as to the benefits and challenges of this SIB. But I think the success-or-failure discussion fails to recognise the differences in objectives and approaches between SIBs. So I’d like to elaborate on one of these differences: the attitude towards continuous adaptation of the service delivery model. Some SIBs are established to test whether a well-defined program will work with a particular population. Others are established to develop a service delivery model – to meet the needs of a particular population as they are discovered.

1.     Testing an evidence-based service-delivery model

This is where a service delivery model is rigorously tested to establish whether it delivers outcomes to this particular population under these particular conditions, funded in this particular way. These models are often referred to as ‘evidence-based programs’ that have been rigorously evaluated. The US is further ahead than other countries in the evaluation of social programs, so while these ‘proven’ programs are still in the minority, there are more of them in the US than elsewhere. These SIBs are part of a movement to support and scale programs that have proven effective. They are also part of a drive to more rigorously evaluate social programs, which has resulted in some evaluators attempting to keep all variables constant throughout service delivery.

An evidence-based service delivery model might:

  • be used to test whether a service delivery model that worked with one population will work with another;
  • be implemented faithfully and adhered to;
  • change very little over time, in fact effort may be made to keep all variables constant e.g. prescribing the service delivery model in the contract;
  • have a measurement focus that answers the question ‘was this service model effective with this population’?

“SIBs are a tool to scale proven social interventions. SIBs could fill a critical void: other than market-based approaches, a structured and replicable model for scaling proven solutions has not existed previously. SIBs can give structure to the critical handoff between philanthropy (the risk capital of social innovation) and government (the scale-up capital of social innovation) to bring evidence-based interventions to more people.” (McKinsey (2012) From potential to action: Bringing social impact bonds to the US, p.7).

2.    Developing a service delivery model

This is where you do whatever it takes to deliver outcomes, so the service is constantly evolving. It may include an evidence-based prescriptive service model, or a combination of several well-evidenced components, but it is expected to be continuously tested and adjusted. It may be coupled with a flexible budget (e.g. Peterborough and Essex) to pay for variations and additional services that were not initially foreseen. This approach is more prevalent in the UK.

A continuously adjusted service delivery model might:

  • be used to deliver services to populations that have previously not received services, to see whether outcomes could be improved;
  • involve every element of service delivery being continuously analysed and refined in order to achieve better outcomes;
  • continuously evolve – the program keeps adapting to need as needs are uncovered;
  • have a measurement focus that answers the question ‘were outcomes changed for this population’?

Andrew Levitt of Bridges Ventures, the biggest investor in SIBs in the UK, puts it this way: “There is no such thing as a proven intervention. Every intervention can be better and can fail if it’s not implemented properly – it’s so harmful to start with the assumption that it can’t get better.” (Tomkinson (2015) Delivering the Promise of Social Outcomes: The Role of the Performance Analyst p.18)

Different horses for different courses

New York City

The New York City SIB was designed to test whether the Adolescent Behavioral Learning Experience (ABLE) program would reduce the reoffending of young offenders exiting Rikers Island. Fidelity to the designated service delivery model was prioritised, in order to obtain robust evidence of whether this particular program was effective. WNYC News reported that “Susan Gottesfeld of the Osborne Association, the group that worked with the teens, said teens needed more services – like mental health care, drug treatment and housing assistance – once they left the jail and were living back in their neighbourhoods.”

In a July 28 New York Times article by Eduardo Porter, Elizabeth Gaynes, Chief Executive of the Osborne Association is quoted as saying “All they were testing is whether M.R.T. by itself would make a difference, not whether something you could do in a jail would make a difference,” Ms. Gaynes said. “Even if we could have raised money to do other stuff, we were not allowed to because we were testing M.R.T. alone.”

This is in stark contrast with the approach taken in the Peterborough SIB. Their performance management approach was a continuous process of identifying these additional needs and procuring services to meet them. The Peterborough SIB involved many adjustments to its service over the course of delivery. For example, mental health support was added, providers changed, a decision was made to meet all prisoners at the gate… as needs were identified, the model was adjusted to respond. (For more detail, see Learning as They Go p.22, Nicholls, A., and Tomkinson, E. (2013). Case Study: The Peterborough Pilot Social Impact Bond. Oxford: Saïd Business School, University of Oxford.)

Neither approach is necessarily right or wrong, but we should avoid painting one SIB as a success or failure according to the objectives and approach of another. What I’d like to see is a question asked of each SIB: ‘What is it you’re trying to learn or test?’ The answer won’t be the same for every SIB, but making it clear from the start allows for analysis at the end that reflects that learning and moves us forward. As each program finishes, let’s not waste time on ‘Success or failure?’; let’s get stuck into ‘So what? Now what?’

Huge thanks to Alisa Helbitz and Steve Goldberg for their brilliant and constructive feedback on this blog.

Developing a counterfactual for a social impact bond (SIB)

The following was taken from a presentation by Sally Cowling, Director of Research, Innovation and Advocacy for UnitingCare Children, Young People and Families. The presentation was to the Social Impact Measurement Network of Australia (SIMNA) New South Wales chapter on March 11 2015. Sally was discussing the measurement aspects of the Newpin Social Benefit Bond, which is referred to as a social impact bond in this article for an international audience.

The social impact bond (called a Social Benefit Bond in New South Wales) was something very new for us. The Newpin (New Parent and Infant Network) program had been running for a decade supported by our own investment funding, and our staff were deeply committed to it. When our late CEO, Jane Woodruff, appointed me to our SIB team she said my role was to ’make sure this fancy financial thing doesn’t bugger Newpin up’.

One of the important steps in developing a social impact bond is to develop a counterfactual. This estimates what would have happened to the families and children involved in Newpin without the program – the ‘business as usual’ scenario. This was the hardest part of the SIB. The Newpin program works with families to help them become strong enough for their children to be restored to them from care. But the administrative data didn’t enable us to compare groups of potential Newpin families based on risk profiles, to determine a probability of restoration for children in care. We needed to do this to estimate the difference the program could make for families, and to assess the extent to which Newpin would realise government savings.

Experimenting with randomised control trials

NSW Family and Community Services (FACS) were keen to randomly allocate families to Newpin as an efficient means to compare family restoration and preservation outcomes for those who were in our program and those who weren’t. A randomised control trial is generally considered the ‘gold standard’ in the measurement of effect, so that’s where we started.

One of my key lessons from my Newpin practice colleagues was the importance of their relationships and conversations with government child protection (FACS) staff when determining which families were ready for Newpin and had a genuine probability (much lower than 100%) of restoration. When random allocations were first flagged I thought ‘this will bugger stuff up’.

To the credit of FACS they were willing to run an experiment involving local Newpin Coordinators and their colleagues in child protection services. We created some basic Newpin eligibility criteria and FACS ran a list from their administrative data and randomly selected 40 families (all of whom were de-identified) for both sets of practitioners to consider. A key part of the experiment was for the FACS officer with access to the richer data in case files to add notes. Through these notes and conversations it was quickly clear that a lot of mothers and fathers on the list weren’t ready for Newpin because:

  • One was living in South America
  • A couple had moved interstate
  • One was in prison
  • One had subsequent children who had been placed into care
  • One was co-resident with a violent and abusive partner – a circumstance that needed to be addressed before they could commence Newpin

From memory, somewhere between 15 and 20 percent of our automated would-be-referrals would have been a good fit for the program. It was enlightening to be one of the non-practitioners in the room listening to specialists exchange informed, thoughtful views about who Newpin could have a serious chance at working for. This experiment was a ‘light bulb moment’ for all of us. For both the government and our SIB teams, randomisation was off the table. Not only was the data not fit for that purpose, we all recognised the importance of maintaining professional relationships.

In hindsight, I think the ‘experiment’ was also important to building the trust of our Newpin staff in our negotiating team. They saw an economist and an accountant listening to their views and engaging in a process of testing. They saw that we weren’t prepared to trade off the fidelity and integrity of the Newpin program to ‘get’ a SIB, and that we were thinking ethically through all aspects of the program. We were a team, and all members knew where they did and didn’t have expertise.

Ultimately Newpin is about relationships. Not just the relationships between our staff and the families they work with, but the relationship between our staff and government child protection workers.

But we still had the ‘counterfactual problem’! The joint development phase of the SIB – in which we had access to unpublished and de-identified government data under strict confidentiality provisions – convinced me that we didn’t have the administrative data needed to come up with what I had come to call the ‘frigging counterfactual’ (in my head the adjective was a bit sharper!). FACS suggested I come up with a way to ‘solve’ the problem and they would do their best to get me good proxy data. As the deadline was closing in, I remember a teary, pathetic midnight moment willing that US-style admin data had found a home in Australia.

Using historical data from case files

Eventually you have to stop moping and start working. I decided to go back to the last three years of case files for the Newpin program. Foster care research is clear that the best predictor of whether a child in the care system would be restored to their family was duration in care. We profiled all the children we had worked with, their duration in care prior to entry to Newpin and intervention length. FACS provided restoration and reversal rates in a matrix structure and matching allowed us to estimate that if we worked with the same group of families (that is, the same duration of care profiles) under the SIB that we had in the previous 3 years, then the counterfactual (the percentage of children who would be restored without a Newpin intervention) would be 25%.

As we negotiated the Newpin Social Benefit Bond contract with the NSW Government, we did need to acknowledge that a SIB issue had never been put to the Australian investment market, and we needed to provide some protection for investors. We negotiated a fixed counterfactual of 25% for the first three years of the SIB. That means the Newpin social impact bond is valued and paid on the restoration rate we can achieve over 25%. Thus far, our guesses have been remarkably accurate. To the government’s immense credit, they are building a live control group that will act as the counterfactual after the first three years. This is very resource-intensive, but the government was determined to make the pilot process as robust as possible.
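As a rough sketch of how a fixed counterfactual feeds into outcome payments: only restorations above the 25% baseline count. The cohort size and price per restoration below are invented for illustration; the actual Newpin contract terms are more involved:

```python
# Hypothetical payment under a fixed 25% counterfactual (illustrative numbers).
COUNTERFACTUAL_RATE = 0.25   # restorations expected without Newpin (fixed, years 1-3)
PRICE_PER_RESULT = 50_000    # invented price per attributable restoration, dollars

def outcome_payment(cohort_size, restorations):
    """Pay only for restorations above the counterfactual baseline."""
    baseline = COUNTERFACTUAL_RATE * cohort_size
    attributable = max(0.0, restorations - baseline)
    return attributable * PRICE_PER_RESULT

# If 60 of 100 children are restored, 25 are assumed to have been restored anyway,
# so payment covers only the 35 restorations attributed to the program:
print(outcome_payment(100, 60))
```

A restoration rate at or below the 25% baseline would trigger no payment at all, which is what puts investors’ capital at risk.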

In terms of practice culture, I can’t emphasise enough the importance of thinking ethically. We had to keep asking ourselves, ‘Does this financial structure create perverse incentives for our practice?’ The matched control group and tightly defined eligibility criteria remove incentives for ‘cherry picking’ (choosing easier cases). The restoration decisions that are central to the effectiveness of the program are made independently by the NSW Children’s Court, and we need to be confident that children can remain safely at home. If a restoration breaks down within 12 months, our performance payment for that result is returned to the government. For all of us involved in the Newpin Social Benefit Bond project, behaving thoughtfully, behaving ethically and protecting the integrity of the Newpin program has been our raison d’être. That, under the bond, the program is achieving better results for a much higher-risk group of families and spawning practice innovation is a source of joy which is true to our social justice ethos.

Delivering the Promise of Social Outcomes: The Role of the Performance Analyst

I’ve wanted to write about performance management systems for a long time. I knew there were people drawing insights from data to improve social programs and I wanted to know more about them. I wanted to know all about their work and highlight the importance of these quiet, back-office champions. But they weren’t easy to find, or find time with.

Dan Miodovnik, Social Finance

I worked at Social Finance in London for three months in late 2013, a fair chunk of that time spent skulking around behind Dan Miodovnik’s desk. I’d peer over his shoulders at his computer screen as he worked flat out, trying to catch a glimpse of these magic performance management ‘systems’ he’d developed. At the end of my time at Social Finance, I understood how valuable the performance management role was to their social impact bonds (SIBs), but I still had no idea of what it actually entailed.

Antonio Miguel, The Social Investment Lab

Then early 2014 Antonio Miguel and I took a 2-hour bullet train ride through Japan while on a SIB speaking tour. On this train journey I asked Antonio to open his computer and show me the performance management systems he’d worked on with Social Finance. Two hours later, I understood the essential components of a performance management system, but I didn’t fully grasp the detail of how these components worked together.

So I proposed to Dan that we join Antonio on the beaches of Cascais in Portugal in August 2014. My cunning research plan was to catch them at their most relaxed and pick their brains over beach time and beers. Around this time I saw a blog written by Jenny North, from Impetus-PEF that mentioned performance management. A call with her confirmed that they were as enthused about performance management as I was. So I drafted a clean, six-step ‘how to’ guide for constructing a performance management system. I hoped that a quick edit from Dan and Antonio, a couple of quotes and I’d be done.

Interviewing Dan and Antonio blew me away. Only when I heard them talk freely about their work did I realise the magic wasn’t in their computer systems, it was in their attitudes. It was their attitude to forming relationships with everyone who needed to use their data. It was their attitude to their role – as the couriers, rather than the policemen, of data.

They told me that there were plenty of ‘how to’ guides for setting up systems like theirs, but that the difficult thing was getting people to read and implement them.

Isaac Castillo, DC Promise Neighbourhood Initiative

They suggested I throw out my draft and interview more people. People who were delivering services and their investors. I didn’t just need to understand the system itself, I needed to understand what it meant for the people who delivered and funded services. I gathered many of these people at San Francisco’s Social Capital Markets (SOCAP) conference and several more from recommendations. One of these recommendations was Isaac Castillo, who works with the DC Promise Neighbourhood Initiative’s collective impact project. He is now managing not only his team of performance analysts, but the service delivery team too. It’s revolutionary, but it makes complete sense.

Interviewing these people has been a most humbling experience. It has revealed to me the extent of their dedication, innovation and intelligence. It has also revealed to me how little I knew, and in turn, how little we, as a sector, know about these people and their work. I am honoured to share their stories with you – please read them at deliveringthepromise.org.


This research is published by The Social Investment Lab (Portugal), Impetus-PEF (UK) and Think Impact (Australia).


Malaysian Innovation: Building a Social Impact Bond (SIB) Pipeline

Agensi Inovasi Malaysia, part of the Malaysian Government, has embarked on a journey towards Social Impact Bonds that reflects the Malaysian social and policy context. There are three innovative features of their program, ‘Social Service Delivery’, worth highlighting:

  1. Explaining SIBs as a public-private partnership for social good
  2. Creating a market of new interventions to contract via a SIB
  3. Exploring Islamic finance as a source of SIB funding

Let’s explore each of these innovations in turn.

Explaining SIBs as a public-private partnership for social good

Social Impact Bonds were first implemented by an organisation called Social Finance in the UK in 2010. The idea has since generated interest all over the world. The concept can be overwhelming for stakeholders, who seek to understand how far away this model might be from their current reality. In Malaysia, Social Impact Bonds have been framed as the logical next step after the recent introduction of other long-term partnerships and privately financed initiatives (PFIs) towards new infrastructure such as buildings and roads. The 2010 New Economic Model for Malaysia from the National Economic Advisory Council called for ‘academia, business, the civil service, and civil society’ to ‘work together in partnership for the greater good of the nation as a whole’ (Part 1, p. 68). Social Impact Bonds are one vehicle by which these recommendations will be delivered. They are an arrangement where a non-government organisation delivers an intervention that is first financed by private investors who stand to be repaid with interest from government funding if a social outcome is achieved. There are incentives for each stakeholder to be involved (see the Agensi Inovasi Malaysia diagram below).

Diagram of objectives of program

(Agensi Inovasi Malaysia)

Creating a market of new interventions to contract via a SIB

Most jurisdictions that have developed a SIB have first scanned their market for investors, intermediaries, and proven or promising social delivery organisations, and then thought about how to run a procurement process that brings the best of these players together, along with an intervention to achieve a priority outcome for government. Although procurement approaches have varied, all have rested on the ability of the market to deliver suitable interventions, managed by organisations with sufficient capabilities to produce the desired social outcomes. Agensi Inovasi Malaysia has enhanced its opportunity to engage with capable service providers by holding a competition for new ideas in priority areas, and then incubating and collecting evidence on these new initiatives, with the end goal of a Social Service Delivery contract. This is not only a way to provide services that are suitable for the first Social Impact Bonds in Malaysia; it also creates a pipeline of evidenced programs for the future.

Social impact bonds emerged in the UK in 2010, with 23 currently in operation. Development plateaued, however, during 2013 and 2014 (see chart below).

[Chart: SIBs launched in the UK by year]

In the latter half of 2013, attention turned to the development of a pipeline of SIBs to bring to market. Big Lottery Fund and the UK Cabinet Office are working together on “a joint mission to support the development of more SIBs” through their social outcomes funds totalling £60 million. Social Finance, in partnership with the Local Government Association, has been commissioned to support applicants to their funds and there is also a program of grants for organisations requiring specialist technical support to apply (Big Lottery Fund).

Agensi Inovasi Malaysia will potentially avoid the problems of the UK, by seeding and supporting a pipeline of interventions up front. This pipeline has been created through the ‘Berbudi Berganda: Social Impact Innovation Challenge’ which called for social organisations to submit their ideas for interventions to tackle the priority issues of:

  • youth unemployment
  • homelessness
  • elderly care.

The top 12 organisations won funds and support to implement their ideas, the impact of which will be the subject of action research over their first four months. This research will form the basis of a framework and delivery model addressing the priority issues. The pilot program timeline is below.

Apr 2014 Feasibility study
Sep 2014 Focus group discussion
Oct – Nov 2014 Social Innovation Challenge
Jan – Apr 2015 Incubation
Jan – Apr 2015 Intervention
Jan – Apr 2015 Action research and impact study
2015 Social Finance Policy Framework
2015-16 Model for ‘Social Service Delivery’

The benefits of the competition and incubation approach include:

  • focusing NGO innovation in government priority issue areas
  • government being able to work with NGOs over a longer period of time, thus gaining a better understanding of the ability of the organisation to deliver effective programs and outcomes
  • creating an evidence base that will inform the design of ‘Social Service Delivery’
  • supporting organisations to build and test interventions suitable for a Social Impact Bond.

The Agensi Inovasi Malaysia approach might require more up-front government funding than other jurisdictions have been or will be able to provide. But for a government that has limited experience outsourcing social services, it is a collaborative and supportive way to create a market of interventions that might otherwise not exist.

Exploring Islamic Finance as a source of SIB funding

The potential for Islamic finance to become a source of funding for Social Impact Bonds is significant and has not yet been explored. The Islamic religion obliges its followers to give zakat, a portion of their wealth, to ease inequality and suffering. The total given each year is estimated at 15 times global humanitarian aid contributions, and in Malaysia the zakat collected by Government is over US $400 million (Irin News).

Islamic finance includes Musharakah (Joint Venture Partnership), Waqaf (charitable donations), Debt Structure, and Sukuk (Islamic Bonds). A Musharakah could be used as the structure that holds the contracts with other parties. Sukuk could be used for investment, although whether they are flexible enough to accommodate repayments that depend on outcomes will need to be determined. Waqaf could be used to fund a specific fixed cost such as legal fees, extra staff for development of a SIB, software, premises, audit, insurance, performance management or evaluation. The way these could fit into a Social Impact Bond structure is shown below.

Malaysia 2

Conclusion

Agensi Inovasi Malaysia has created a unique pathway towards Social Impact Bonds. Their approach mitigates the risks of implementing the model in a country without a history of outsourcing social services. They have framed this new contracting model in the broader policy context of public-private partnerships, which aids wider understanding of both the model and the objectives of government. By seeding and supporting new programs that address priority issues, the Government will be able to understand and evidence the impact of these new programs, before contracting them for ‘Social Service Delivery’. Finally, the exploration of the role Islamic finance can play in a Social Impact Bond has the potential to be applied in other jurisdictions and extends the ability of Islamic finance to achieve social outcomes.

This blog was written as a result of a project Emma is working on with Agensi Inovasi Malaysia. It describes aspects of their programs that she found interesting and relevant. These are Emma’s personal views and should not be taken as representative of Agensi Inovasi Malaysia or any other organisation. 

Procurement precedents for social impact bonds (SIBs)

There are many ways to procure for a SIB. The following examples of procurement processes have been chosen to demonstrate variation. The advantages and disadvantages of each are context specific – if you are developing a new procurement process you might want to think about whether each variation promotes or hinders your objectives.

I use the word ‘procurement’ to refer to any of the means by which governments might ask external organisations to deliver a service under contract.

Please refer to the source information if you are producing further publications – I have tried to faithfully summarise each procurement process, but my interpretations have not been checked with the parties involved. Happy to accept corrections or suggestions.

Ontario, Canada

Deloitte won the initial RFP and is currently in the final stages of that contract, with an Ontario Government decision expected in the next few months. An interesting feature of this process is the parallel ‘internal’ and ‘external’ streams, where public servants are proposing their outcome ideas at the same time as people in the market are also proposing. External ‘registrations’ of interest were called for in the following priority areas:

  • Housing – Improving access to affordable, suitable and adequate housing for individuals and families in need.
  • Youth at Risk – Supporting children and youth with one or more of the following: overcoming mental health challenges, escaping poverty, avoiding conflict with the law, youth leaving care, Aboriginal, racialized youth, and other specific challenges facing children and youth at risk, for example employment.
  • Employment – Improving opportunities for persons facing barriers to employment, including persons with disabilities.

procurement Ontario

New South Wales, Australia

In New South Wales we suffered from locking ourselves out of developing the idea with organisations over the six months it took to run the RFP and negotiate the contracts for the next stage. We did not agree a maximum budget or referral mechanism until the joint development stage – we asked organisations to come up with these, as well as a full economic and financial model, in their RFP responses. None of us who were involved in designing the procurement process feels we got it quite right, yet given the opportunity, we would all redesign it in different ways! (See the NSW Treasury page on ‘Social Benefit Bonds’)

procurement NSW

Procurement timeline:

November 2010 NSW Government commissions a feasibility study from Centre for Social Impact
February 2011 SIB Feasibility Study report submitted and published
March 2011 State government elections and change of government (left to right)
September 2011 (due Nov) SBB Trial Request for Proposal released
March 2012 3 consortia announced; joint development phase begins
March 2013 Newpin Social Benefit Bond contracts agreed
June 2013 Benevolent Society + 2 banks Social Benefit Bond contracts agreed

New York City, USA

An interesting feature of the New York City SIB development process was that service delivery partners were procured first and started delivering services while being involved in developing a SIB for future financing of the service.

procurement New York City

New Zealand

The New Zealand process appears to be the only one where the government procured for the intermediaries and service providers separately. It is not yet clear what the benefits of this might have been or how they will be matched up.

procurement New Zealand

Massachusetts

Several US states have followed a similar procurement process to Massachusetts, which first involved a Request for Information from organisations external to government. This approach allows the market to shape government thinking and recognises that there may be social issues and intervention types that government hasn’t previously considered. Some jurisdictions have accomplished this with less formal consultations e.g. Queensland Government’s cross-sector payment-by-outcomes design forum and Nova Scotia Government’s cross-sector SIB Working Group.

procurement Massachusetts

Massachusetts Selection Criteria:

  1. Government leadership to address and spearhead a public/private innovation.
  2. Social needs that are unmet, high-priority and large-scale.
  3. Target populations that are well-defined and can be measured with scientific rigor.
  4. Proven outcomes from administrative data that is credible and readily available in a cost effective means.
  5. Interventions that are highly likely to achieve targeted impact goals.
  6. Proven service providers that are prepared to scale with quality.
  7. Safeguards to protect the well-being of populations served.
  8. Cost effective programs that can demonstrate fiscal savings for Government.

Department for Work and Pensions, UK

The Department for Work and Pensions developed a ‘rate card’ of payments per individual outcome for their procurement. They asked organisations to choose a subset of outcomes to deliver, nominate a price per outcome and describe the intervention that would achieve them. A social impact bond structure was not mandated – seven of the ten chosen programs involved external investors. The following process occurred twice in 2012:

procurement DWP

DWP Rate Card: DWP pays for one or more outcomes per participant that can be linked to improved employability. The definitive list of outcomes and the maximum prices DWP was willing to pay in Round 2 is:

DWP rate card
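As an illustrative sketch of how a rate-card mechanism works (the outcome names and prices below are hypothetical, not the actual DWP rate card), payment can be computed per participant from the distinct outcomes they achieve:

```python
# Hypothetical rate card: maximum price payable per outcome.
# Illustrative values only – not the actual DWP rate card.
RATE_CARD = {
    "improved school attendance": 1300,
    "entry into employment": 3500,
    "qualification gained": 2200,
}

def payment_for_participant(achieved_outcomes):
    """Sum the rate-card price of each distinct outcome a participant achieves.

    Outcomes not on the rate card earn nothing.
    """
    return sum(RATE_CARD.get(outcome, 0) for outcome in set(achieved_outcomes))

def total_payment(participants):
    """Total outcome payment across a cohort of participants."""
    return sum(payment_for_participant(p) for p in participants)

cohort = [
    ["improved school attendance", "qualification gained"],
    ["entry into employment"],
    [],  # no outcomes achieved: no payment for this participant
]
print(total_payment(cohort))  # 1300 + 2200 + 3500 = 7000
```

The point of this structure is that payment depends only on outcomes achieved, not on the cost of delivering the intervention – providers bear the risk of pricing their nominated outcomes below the rate-card maximums.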

Saskatchewan, Canada

This process may be followed if an unsolicited proposal is received. An interesting feature of the Saskatchewan SIB is that the investor has also signed the contract with government.

procurement Saskatoon

Essex, UK

The process of developing the Essex social impact bond is described in Social Finance’s Technical Guide to Developing Social Impact Bonds. Social Finance worked closely with Essex County Council to research and develop a SIB, with the final step being procuring for a service provider.

procurement Essex

Conclusion

Governments need to think about what information needs to be included in a procurement document. For example, if the aim is for organisations external to government to come up with completely new service areas, then a procurement process that does not state the social issue to be addressed or the contracting department might be suitable. But information and constraints that are known should be included in the tender document. It’s simply irresponsible to have a criminal justice organisation spend time working on a response offering intensive services for 30 female offenders if there was never any possibility the SIB was going to be in justice, or with female offenders, or with a small group of people.

The Peterborough Social Impact Bond (SIB) conspiracy

If you think Social Impact Bonds are the biggest thing to hit public policy EVER, then you were probably horrified at the cancellation of the final cohort of the flagship Peterborough SIB. How is it possible? What does it mean?

Since the news was broken in April this year (2014), I’ve had questions from as far afield as Japan and Israel trying to discover the UK Government’s TRUE agenda. More recently, at the SOCAP Conference in San Francisco in August, it was raised again. Eileen Neely from Living Cities, which has provided $1.5 million in loan financing for the Massachusetts Social Impact Bond, was discussing “shut down risk: what happens if one of the parties decide they don’t want to play.”

She said, “In the Peterborough deal in the UK, the government decided that they weren’t going to play any more… so there’s some who say ‘Oh it’s because it wasn’t going well’ and others are saying ‘It was cos it was going too well’ so whichever it is, they decided that they weren’t going to do it, that they weren’t going to go into the next cohort, so what does that mean to the investors?” Eileen made it quite clear “I haven’t talked to any of the participants there, I’m just outside, reading the articles and the blogs …”

I thought it was about time we summarised the evidence for those who continue to ask these questions.

Social Impact Bonds and Pay for Success – are they synonyms?

On a recent trip to the US, I noticed that the discussions around ‘Pay for Success’ were a little different to those I’d been having on ‘Social Impact Bonds (SIBs)’ with other countries. Particularly in the measurement community, there was an idea that Pay for Success took measurement of social programs to a new level: that ‘Pay for Success’ meant paying for an effect size (by comparison to a control group), rather than ‘Pay for Performance’ which paid for the number of times something occurred.

Using SROI for a Social Impact Bond

Social Return on Investment (SROI) and Social Impact Bonds (SIBs) are two ideas that are increasingly mentioned in the same breath. SROI is a measurement and accounting framework and SIBs are a way to contract and finance a service. Both require three common ingredients:

  • the quantification of one or more social outcomes for beneficiaries,
  • a valuation of these outcomes, and
  • an estimation of the cost of delivering these outcomes.

While not a necessary ingredient, SROI can contribute to the design, operation and evaluation of SIBs.
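The three shared ingredients can be sketched in a basic SROI-style calculation (the outcomes, per-beneficiary values and delivery cost below are invented for illustration):

```python
# Ingredient 1: quantified social outcomes for beneficiaries.
# Ingredient 2: a valuation of each outcome (hypothetical figures).
outcomes = {
    # outcome: (beneficiaries achieving it, value per beneficiary in $)
    "stable housing": (40, 5_000),
    "entry into employment": (25, 8_000),
}

total_value = sum(n * value for n, value in outcomes.values())

# Ingredient 3: an estimate of the cost of delivering these outcomes.
delivery_cost = 150_000

# A basic SROI ratio: social value created per dollar invested.
sroi_ratio = total_value / delivery_cost
print(total_value)  # 400000
print(round(sroi_ratio, 2))  # 2.67
```

In a SIB context, the same three numbers feed different questions: the valuation helps a government set outcome payments, while the cost estimate helps investors and providers judge whether those payments cover delivery.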

*NB the word ‘outcome’ is used here to represent a change in someone’s life – some readers (particularly from the US) may use the word ‘impact’ to mean the same

SIBs and SROI 1