QED

Absurdity at the top

What’s the quality of the quality measuring process?

Nobody is against improving quality. And often to do that you need some way of measuring what you have, of giving it a score in terms of its existing quality. But that’s where things get tricky.

Take this government’s new attempt to measure the quality of research being done in the universities here in Australia. The idea is that they are going to set up a system that will give us a rough-and-ready score for the existing quality of the research being produced. Sounds great, right? Alas, it’s not at all clear that it is. In fact, it looks to be an opaque and uninformative system, with perverse incentives, that no overseas person could take overly seriously.

At least that’s the case in my discipline of law. Consider how this new system is going to measure the quality of what we Australian legal academics are producing. First it sets out criteria that will count in this measuring process. So each academic will nominate some of his or her best publications over the past 6 years. And then the person’s overall publication record will be considered. And his or her department’s record for graduating doctoral students will be thrown into the hopper, as well as the grant income people have won and the esteem in which they are held.

That all sounds pretty reasonable, doesn’t it? But take a closer look. Start with how a particular publication is going to be assessed. The body administering this thing, the Australian Research Council (ARC), has created a four-tiered list of journals. What they consider to be the best ones in the world get an A* score, then down from that an A score, and then a B score. Everything not in one of these tiers falls into the C group and is worth very little.

The idea is that getting published in an A* journal earns you more points, more credit, than getting published in one further down, and so on.

So how the list is drawn up really, really matters. And to be blunt, the process has been awful and the end result in some ways a bit of a joke. The first tentative list was done by the ARC itself. They basically just copied a US list, which was overwhelmingly focused on US law journals. This was such a howler of a list that the job was then handed over to the Council of Australian Law Deans (CALD).  

At this point the creation of the tiered categories, as you would expect, became political. So Australian law journals got plunked into the A* category with the Harvard Law Review and the Oxford Journal of Legal Studies. Take it from me, no one outside of Australia would think any Australian law journal was in that league.

Of course one can understand a bit of nationalism and pretence, though I emphasise the word ‘bit’. That said, which journals from here went into which categories was also highly contestable. So after the CALD list was created, the ARC made changes.

On what basis, you might ask? Well, they tell us they wrote to 24 experts and asked what they thought. Now at this point things get amazing. The ARC will not say who these 24 people were. Rumour has it that only 11 of them replied, but the ARC won’t say what those who did reply said either. Nor will they say how these people were selected, or what percentage were Australians.

So we have a list of journals that will have real, hard monetary effects on universities in Australia, a list that I think is highly flawed (and allow me here to declare a personal interest as the editor of an Australian law journal), and one where the ARC refuses to release any information at all about the people who made it.

This is flat-out outrageous. The ARC’s attempt to hide behind some pathetic abstraction like ‘confidentiality’ is beyond parody. These so-called experts aren’t writing a reference about another person. They’re ranking law journals. If they won’t sign their names to their opinions and let them become public, they shouldn’t get the job, or their views should be wholly discounted. It looks very much as though the ARC simply hasn’t got the bottle to release this information.

And in New Zealand, in a similar exercise, everyone got to know who the so-called experts were, if for no other reason than that law is an inherently political and politicised discipline, much more so than physics or medicine. So over there the process was open and transparent, in contrast to the far more secretive, opaque and downright indefensible way it has been handled here.

The point is that the list of prestigious law journals has been made by a group of people but we don’t know who they were; we don’t know what they said; we don’t know how they were selected or on what basis; and we don’t know how many were overseas people.  This is the process that is being used to determine quality.

Have you stopped laughing yet?

And if that’s not bad enough, it gets worse. Take the fact that one’s past ability to win research funds or grant income is being used as one of the criteria of quality.  In the hard sciences this might – and only might – be halfway defensible. In law and the humanities it is completely bogus. Research grants are an input, not an output. What matters is the quality of what you produce, not the fact you could convince some ARC appointed people to give you money.

If you used that money to write something excellent, it will be counted anyway. Factoring in the fact that you got some people to give you money is, at best, double counting. At worst, and more plausibly, it is counting something that shouldn’t be counted at all. (Imagine a car manufacturer that got government money to build a plant pointing to that grant, rather than to how many cars it sold, as a sign of excellence. GM must be really excellent!)

Counting inputs as outputs is bizarre. And the emphasis on it here in Australia in law would leave overseas legal academics in the US and Canada and the UK dumbfounded.

That’s all that space allows in the way of critiquing what will be a pretty awful method of measuring quality, at least in my discipline. Young Australian academics will be forced to play a game with bizarre incentives, one that would not help them if they were working overseas. Sure, it will measure something, it’s just that it won’t be what most of us would call ‘quality’.

The whole thing reminds me a little of the UK’s experience with unintended consequences. There, the government decided to measure the number of people in emergency waiting rooms as a way to improve quality. Some administrators responded by keeping patients in ambulances outside, to keep the numbers down.

Something or someone isn’t top quality just because some set of criteria has been met. It all depends on what those criteria are, how transparent they are, and what sort of unwanted side-effects they throw up. The Minister needs to fix this now. Failing that, the Opposition needs to say that if it wins, it will do what this government won’t. 

James Allan is Garrick Professor of Law at the University of Queensland
