Why benefit numbers decreasing is a stupid metric for success

Jess Berentson-Shaw
Tax and Welfare

I recently attended a conference entitled ‘Evidence to Action’, an annual event put on by a small government outfit called SUPERU, aimed primarily at those in the public service and the not-for-profit sector. The name is irrelevant; SUPERU is a beacon of evidence-based hope in a sea of anecdote and poorly developed, poorly evaluated policy. They do a great job of emphasising the vital importance of evidence (and the right type of evidence) in informing where taxpayer dollars are best spent.

How do We Know Social Improvement Programmes (or Social Investment) Work? 

So there was the Associate Minister, opening this conference on how to better use evidence in policy and programme delivery. The Minister spoke about social investment, and about the need to use data and evidence to drive that investment. This sounded good. We like evidence. Then it all went a bit pear-shaped as the Minister talked about the successes the government had achieved through the use of evidence-based policy. One of the successes that made it onto her list was the reduction in the number of people receiving welfare. I paused from my Twitter scrolling, looked up, and wondered whether anyone else in the room had noted the supreme irony in that statement. I think they might have, Minister.

Let me explain the irony.

Measuring Outcomes that Matter to Real People Not Politicians Gives a Very Different View on What Works

When you are talking about how to deliver better services to people in New Zealand through the use of evidence and data, there is a little word that matters a lot. That word is ‘outcomes’. The way to know whether a policy is working for the people you serve is to measure the outcomes that matter to those people – the outcomes that make their lives noticeably better. What is NOT an outcome is how many people you have tipped off the benefit. That is a punitive statistic, a number; it tells us one thing – that the government has saved money on benefit payments.

It is not an outcome because of what it DOES NOT tell us. It does not tell us, for example, what has happened to these people. Have they got jobs? Have their economic circumstances improved? Have costs increased in other areas because of the negative impact on child well-being when you tip people off welfare? An outcome has meaning for the people you are serving, and outcomes also take into account the impact a policy has on the entire system.

The other great irony here is that if we do use meaningful outcomes to measure the effectiveness of punitive ‘welfare to work’ policies, we see they make things worse, especially for children. High-quality international evidence tells us that being tipped off welfare via punitive social policies (like those we currently have) does not make people’s lives better – especially the lives of families with children; it does not improve their overall economic well-being, and their kids have poorer educational and adult outcomes. Nice work! What the Minister did in her opening address (and we are not so much picking on this Minister as on politicians in general) was skate over the meaningful outcomes for New Zealand’s children: not a mention of our infectious disease rate, nary a whisper about children’s educational attainment, and certainly no discussion of the number of families who cannot cover the basics (or extras) for their kids. And you know why? Because these things have not changed, not even a little. The Government’s punitive welfare reform has not worked in a meaningful way on the outcomes that matter to the people of New Zealand.

By Not Using Evidence in the Right Way We are Throwing Good Money after Bad

If the Minister had stayed for the keynote speaker, she would have heard a quote from Peter Rossi:

“The expected value of any net impact assessment of any large scale social program is zero.”

Known as the Iron Law of Evaluation, this underscores a depressing truth: many of the social policies and programmes we implement do not change the lives of the people they are delivered to. This lack of effectiveness can be attributed to the fact that we are not driving our social policies from the evidence, and we are not identifying and measuring the right outcomes (the ones that matter to people). Instead we are driving them from an ideological or political position, and we are measuring outputs (not real, people-focused outcomes) that meet that agenda. There has for many years been real unease about the way the public service has been reoriented: reoriented not to provide independent evidence-based advice, but to serve at the pleasure of individual politicians – something those who have previously been in government have made some pointed comments about.

I am sure I was not the only one at that conference with a real desire to stand up and point out the large rainbow-hued elephant in the room as the Associate Minister was speaking. But many policy makers do not have the luxury of doing that (or the ability to write a blog about it later). At the Morgan Foundation, however, we can say when things are wrong. We can have all the conferences in the world about how to better use evidence, how to implement evidence-driven programmes, how to measure whether those programmes are working in a reliable and valid way, and how to use outcomes that have real meaning to real people. But until politicians believe in the dispassionate and independent use of evidence to design, deliver and evaluate policy from first principles, we are going to be seeing a lot of elephants wandering around conference rooms across the country.


Why benefit numbers decreasing is a stupid metric for success was last modified: April 7th, 2016 by Jess Berentson-Shaw
About the Author

Jess Berentson-Shaw

Dr Jess Berentson-Shaw is a science researcher working for the Morgan Foundation. Jess holds a PhD in Health Psychology from Victoria University. She has over 10 years’ experience in applying science and evidence to public policy. She worked on improving the use of science in public health practice in NZ before becoming a Research Fellow at University College London, where she researched how doctors and clinicians translate scientific evidence into their clinical practice. While in the UK she also developed a national data collection system, which was used to determine what factors contribute to poor outcomes for women and babies during pregnancy and birth. On her return to New Zealand she directed a research group that specialised in the independent evaluation and application of research and science to health policy and practice. Jess loves science and what it can do to make the world a fairer place.