James Allan

The publishing game


The Government, the ARC and journal rankings


Last week in The Australian’s Higher Education section there was general cheering at the May 30th announcement by Minister Kim Carr and the Australian Research Council (ARC) that the journal rankings list, the cornerstone criterion in the government’s research evaluation process, would be dumped.

And at first glance what was not to like about this announcement? I can’t speak for other disciplines but in law the ARC’s list of ranked journals was laughable. Many top US law journals were downplayed, some bizarrely; political correctness was pervasive; the scores of Australian law journals were inflated well beyond the proper desire to give a boost up to the home side; and even the scores of just the Australian law journals themselves seemed to have been driven in significant part by lobbying and politics. In short, the ranked listings were a joke.

So I was amongst the first to join in and welcome the removal of this core component of the government’s research evaluation process.

But after the initial delight wore off, I asked myself what would replace this journal rankings list and form the new core element in measuring the comparative quality of research excellence in this country.

Here is where one normally turns to the ARC announcement itself for guidance. So, somewhat naively, that is what I did. At this point let me quote from that ARC announcement word for word:

Refined journal and conference quality indicator – the prescriptive A*/A/B/C ranks used in ERA 2010 will no longer be used in ERA 2012; instead RECs will be presented with a profile of journals and conferences for each unit of evaluation (UoE) ordered by descending frequency of publication. This approach will allow RECs to identify the depth and spread of publishing behaviours and make informed expert judgement regarding the quality and relevance of the journals and conferences to each UoE.

But even allowing for the usual management jargon and poor grammar, it’s not at all clear what that actually means. I keep reading it and still I’m not sure. I think it means that each law school will now just give a list of the journals in which its academics have published, with the number of times the department has done so.

And then what? How do you judge such a “profile of journals ... ordered by frequency of publication” unless you bring to the task a view on the quality of each of those journals? Seriously, how do you do it?

One possibility is that the RECs (that’s more jargon for ‘Research Evaluation Committees’, meaning a few hand-picked academics for each discipline who get to score everyone else but whose own appointments are made in Pope-like secrecy) will sit down and read each and every submitted article and decide for themselves on the paper’s quality.

Have you stopped laughing yet at that suggestion? Come off it. We know going in that the hand-picked quality assessors on these RECs will not read every single one of the submitted papers. We know going in that they will sometimes/often/usually/almost always (pick your poison based on how pessimistic you are) use the name and reputation of journals as a proxy for whether they score a publication as top notch, mediocre, or awful.

So what the new system seems to be doing is driving the ranking of journals underground. It will now exist in the minds of the REC members, and so we’ll have a set-up that is opaque and whose scores and rankings are hard to dispute because nothing has been made public and contestable.

I’d like someone to explain to me how that sort of set-up differs from some magical black box-type alternative medicine set-up where a bunch of inputs are willy-nilly dumped in one end and, hey presto, a ranking or supposed cure comes out the other.

Let’s face it. Unless you believe each submitted published article is going to be individually read and assessed by these REC members (and if you do I have some seaside property near Alice Springs I’d like to sell you), then the journals will continue to serve as a proxy for quality; it’s just that none of us will know what the REC thinks about each journal.

The process will become even more opaque than it has been. The views of the REC members about each journal will not be known. We won’t know if the old (and God-awful) rankings list that has officially been jettisoned will continue to exert an influence on the REC members, now in a new, thoroughly unaccountable way.

Look, the old 2010 ERA quality assessment process was already one of the most opaque and awful processes imaginable, making the system used in New Zealand look as though it had been designed by geniuses (and believe me it’s not easy to make the Kiwi system look that good).

Under the 2010 ERA process no one knew how the final score was arrived at. No one knew the weightings put on the scored excellence of published research, on grant getting success, and on Ph.D. completions. We didn’t even know for sure whether the same secret formula for combining these three had been applied to all departments, or if it had been varied as the need arose.

Of course it’s scandalous, in my view, that grant getting success counts for anything at all. Getting someone to give you money to do research and write a paper is an input. It is not an output. It is flat-out bizarre to count success in grant getting as anything at all, though of course we know that the universities are desperate for this money and the ARC loves to be the one giving the grants and then saying that getting them means you’re a better researcher.

Really? Imagine two law departments that have produced exactly the same number of articles in exactly the same journals by exactly the same number of academics. One of these two produced its research publications after employing a team of bureaucrats to apply for, and win, millions and millions in grants from the ARC and so, ultimately, the taxpayer.

The other law department applied for no grants, cost the taxpayer nothing like what the first one did, and yet produced precisely the same outputs as the first one.

In our research quality assessment framework here in Australia, the one that makes my colleagues in Canada, the US and UK laugh out loud, the first of these two law departments will be scored as much better than the second. In fact this Kafkaesque state of affairs is mimicked within most universities when it comes to promotions, with most copying this bizarre set-up and insisting on grant getting success, as though the world of the natural sciences were the same world that lawyers and those in the humanities live in.

But leaving aside the perversity of counting grant getting success, which I emphasise is not being changed, what seems to have happened with this latest announcement is clear enough: the laughable and indefensible ranked journal list is being jettisoned. It is to be replaced by something. What that something is precisely has been disguised and finessed by near-on incomprehensible gobbledegook, but it would seem that each REC member will, quietly and outside the public glare, bring his or her own sense of a journal’s quality to bear in ranking research.

So getting someone who likes you onto one of these RECs now becomes even more imperative, especially as the rest of the old, bad system has been left as is.

I’d love to be able to end on an optimistic note and suggest that when a Coalition government one day comes back into office it is overwhelmingly likely to overhaul this ERA monstrosity, perhaps even disband it. But we all saw that the former Howard government did nothing to stop the creeping, pervasive managerialism and bureaucracy of Australian universities and of their overseers.

So pessimism seems the order of the day.
