At HMCTS, we’ve historically measured ourselves on the time it takes to “dispose of” a case – in other words, how long people have to wait from the time a case is lodged with us to getting a decision or outcome. We’ve also carefully counted the number of “sitting days” that our budget is supposed to cover for each jurisdiction, or case type – this is the number of judge days that taxpayers are paying for, and it’s obviously very closely related to the time taken. If you have more days a judge can sit, then you can dispose of more cases in the time available.
But how important is this measure (time taken) to our users? And what about how well we’re performing against our organisation’s objectives? HMCTS is a joint venture between the judiciary and the government. Our mission is to provide access to justice through the effective administration of courts and tribunals. Ensuring our administration is just, accessible and proportionate means that judges can take high-quality decisions on their cases. So what should we be measuring to assess our performance against these objectives?
Timeliness is one aspect of this, but by no means the only one. Access to justice is a complex thing to measure, and picking out the parts that courts and tribunals can affect is tricky too. But Dr Natalie Byrom from the Legal Education Foundation has published a number of reports on this (latest one here), and is getting closer to an answer. She was kind enough to come and spend a few months with me and my team last year, helping us to work out what we could do. And I’ve brought a number of lessons from the private sector here as well.
We’ve agreed that we should measure the EFFORT it takes for a person to use the courts and tribunals service – part of working out how hard it is to access justice. This is a lesson we’ve reapplied from Amazon: the online retailer is continuously looking for ways to minimise the effort it takes to buy something from its site. Some of the things we can measure to understand how hard it is to access justice are: how long it takes to fill out an application form, how many people need support to complete forms, how many times they need to contact us, how often someone has to attend a court or tribunal building, how long someone has to wait before their call is answered, and so on.
Another aspect that we want to measure is how reliable people’s EXPERIENCE with HMCTS is. It’s terribly important for people to have trust in the rule of law, and when we make mistakes, as every organisation sometimes does, it can have devastating effects on people’s lives. So we need to understand in detail the kind of mistakes we make, how serious those mistakes are, and how good we are at fixing the things we got wrong. This is a lesson we’ve brought in from the airline industry, and from the NHS. We’ve identified “never events”, such as detaining a person unlawfully, wrongly disqualifying someone from driving, or disclosing a vulnerable person’s address. And we’ve become a lot better at recording the complaints that people make about us, and sharing that information widely so that more colleagues are aware and able to innovate to improve. We know that adjourning hearings (moving the date of a hearing back) can be really disruptive for people, and so we want to measure how often we do that because of an avoidable administrative problem, rather than for a good reason.
And of course, we want to know what people think of their experience with us, what we call their PERCEPTION of HMCTS. We’ve started adding the option to answer some survey questions at the end of a telephone call, and we’re experimenting with feedback mechanisms in our buildings. You may have noticed that many companies do this after a call centre interaction and use that information to improve their processes and systems. So far, we have learned that people find our staff helpful and empathetic, but can find our processes bewildering and overly complex. This is informing our content strategy, and helping us to make things easier to understand for colleagues and for our users.
So as well as measuring sitting days and timeliness, we’re increasingly looking at measures of EFFORT, EXPERIENCE and PERCEPTION. We also plan to start capturing people’s characteristics (their gender, ethnicity, whether they have a disability, and so on), so we can see if there are any groups who are being treated differently, and take steps to address signs of unequal treatment. All of this is driving our improvement efforts, and helping us to make the administration of justice just, proportionate and accessible.