
Do They Know What They’re Doing?

Michael Giffin

Dec 01 2012

7 mins

Inspired by Philippa Martyr’s perceptive reflection on the grants industry (Quadrant, October 2012), I’d like to share my experience of that industry’s other end: peer-reviewed publication. Although Martyr focuses on the humanities, my experience tells me her reflection also applies to some parts of the sciences. Like Martyr, I’m trained in the humanities and I don’t claim to be a scientist. Twenty years ago I didn’t know the difference between case and control, or big N and little n, but one absorbs a lot as an author and editor working with professionals in population and public health. After I co-authored a few published articles, a couple of overseas journals asked me to become a reviewer, admittedly within my narrow range of competencies, from which I’m careful not to stray. Co-authors become reviewers for a variety of reasons; some see it as part of their professional development; others want to get in touch with their inner axe murderer. I’m not sure where I fit within this spectrum.

It’s important to be positive and constructive when reviewing; however, given the poor quality of many submissions, this can be difficult. During my first few years as a reviewer, I agonised over every submission and commented extensively on each of its major parts: background, methods, results and discussion. I would work on a review for up to three days, send it off, and spend a few months worrying about whether my comments were appropriate. It was always a relief to receive the journal’s editorial decision and learn that my comments dovetailed with those of the other reviewers. They usually noticed things I hadn’t; I usually said things they couldn’t, being an outsider unconstrained by the politics of the research sector. So far none of my reviews has resulted in a journal accepting a submission outright. A small minority have been invited to revise and resubmit. The large majority have been rejected. Is my experience typical? I don’t know.

Over the years I’ve learned to place boundaries around reviewing. I no longer spend more than one long day on a review. I read the submission through once and make notes. When writing the review, I may make a few general comments on the background, results and discussion sections; however, most and sometimes all of my comments are reserved for the methods section, which is usually the most problematic. Why? Because the methods section is meant to be a recipe, so other researchers can reproduce the study if they wish, but it’s becoming increasingly apparent that many authors don’t want to tell their readers much about what they did and how they did it. Although the recipe principle is widely accepted in peer-reviewed publication, we are now seeing authors attempting to leap from their background to their results. This makes it hard for the reviewer to discern whether their studies were poorly designed or whether they were well designed but poorly described.

Another problem, particularly with studies that have conducted a secondary analysis of primary data obtained from elsewhere, is the widespread tendency to avoid telling the reader where the primary data came from, how they were collected, what they represent, and whether there was any pre-existing method attached to them (which there invariably was). To put it bluntly, those who conduct secondary analyses tend to give the impression of having done much more to the primary data than they actually did. This tendency is neither appropriate nor ethical.

On reading a problematic methods section for the first time, several questions come to my mind. Is this a tactic or an oversight? Should the authors have known better? Should I make allowances for them? Are they trying to hide something from me? Did they think I wouldn’t notice? As each submission is blinded, who are the authors? Are they students looking for comments on their work, to make up for inadequate supervision? Are they academics who don’t know how to supervise adequately? Are they researchers who lack rigour? Are they freelance writers brought in at the end of the research process, who aren’t responsible for the research, who may not understand its complexities, who are only there to package the research for the widest possible audience? Are they professors at the top of the research tree keeping an eye on their next grant application? Given that most submissions have several authors, each of whom is associated with the research sector in some way, the answer is likely to be some or all of the above.

As far as I can see, part of the problem is that, in peer-reviewed publication, contemporary theory and contemporary practice are conspiring against common sense. Increasingly, the background section has an unfocused, all-purpose character; it tends to be overly theoretical, overly written, and reads like the first chapter of a thesis someone is writing or supervising; I get the feeling it’s routinely pasted into the beginning of several different papers and no one is bothering to adapt it to any particular paper. Increasingly, the methods section tells us that data have been run through software, which has applied a sophisticated model to them, but we are not told what we need to know about where these data came from, how they were collected, or what they represent. Increasingly, the results section presents a complicated analysis of these data which is designed to intimidate; the reader is apparently meant to be so impressed by the sophisticated modelling that they won’t ask whether these data really warrant it. While this phenomenon is to be expected in quantitative research, qualitative researchers are now using their own analytical software, and doing their own modelling, which is beginning to produce absurdities.

Another part of the problem is the evolution of the research sector and its governance. Not long ago things were simpler; a much lower proportion of the population pursued tertiary study; government bureaucracies disbursed taxpayers’ money to universities to conduct research; now we have pillars and centres of excellence added to the mix; many chief executives, professors and associate professors are wearing conjoint hats. All of this might be necessary, as far as commissioning and conducting research is concerned, and of course we should assume that the research sector is adapting and regulating itself in a positive way, but cracks are appearing.

It’s a good thing that, in those articles that actually do get published, the methods sections are adequate; however, given that the large majority of journal submissions are rejected, and many if not most of these are rejected because of under-developed methods sections, we obviously need more information about this phenomenon. We need to know what proportion of this research is poorly designed, since that’s a measure of the research sector’s overall health; we need to know what proportion of this research is well designed but poorly described, since that’s a measure of how well the research sector is influencing policy and practice. As these two issues, poorly designed research versus poorly described research, are different aspects of the same problem, we need well designed and well described studies that shed light on the problem, before we can work out how extensive it is and ways of addressing it.

Philippa Martyr’s article identifies two fundamental problems in the humanities: first, as the grants industry is a huge imposition on the public purse, we need to find ways to privatise it or encourage it to become self-funded; second, the university sector has become an extension of the child-minding industry, now that a significantly higher proportion of the population is pursuing tertiary education and expects to be absorbed into the research sector. It’s still widely assumed that these two problems, grant funding and child minding, don’t apply to the sciences, which are meant to be more rigorous than the humanities, but this isn’t necessarily so. It only stands to reason that not every science graduate will be a good researcher.

If my experience as a reviewer is representative of reviewers in general, it’s disturbing that none of my reviews has resulted in a journal accepting a submission outright, that only a small minority have been invited to revise and resubmit, and that the large majority have been rejected. It may be some comfort that the editorial decision is based on other reviews and not on mine alone, which means there’s broad agreement on the poor quality of most research papers being offered for peer-reviewed publication. It would still be gratifying, however, if someday I were asked to review a submission that was good enough to be accepted outright.

Michael Giffin wrote on sin in the November issue.
