The Academic Beatup of Sexual Statistics

A National Student Safety Survey was released on March 23 with media fanfare. Its full title is Report on the Prevalence of Sexual Harassment and Sexual Assault among University Students in 2021. The report is the work of the Social Research Centre for Universities Australia and it drew its data from thirty-eight member universities. Headline and banner stories were prominent in the media. Most bulletins reduced the news to lists of percentages of students harassed or assaulted, and percentages by location. The stats were reported as facts.

The survey’s national snapshot says that 43,819 students participated. Its key statistics: one in six students (16.1 per cent) has been sexually harassed since starting university, and one in twelve (8.1 per cent) in the last twelve months; one in twenty (4.5 per cent) has been sexually assaulted since starting university, and one in ninety (1.1 per cent) in the last twelve months. When the figures are analysed, it seems that far higher percentages of “non-binary, pansexual and students with a disability” are the victims.

As was the case with the previous report in 2017, Change the Course, I cannot but note the issue of student non-response to the survey. Non-response is the giraffe in the room. As far as I can see, it is not discussed.

On page 82 of the survey we read that a pilot test at ANU and Charles Sturt University was run before the survey was rolled out. The pilot was limited to 2000 selected students. Yet we are told that “a total of 332 students participated in the pilot survey for a completion rate of 16.6 per cent”. Each respondent spent on average 11.2 minutes on the task. The final form of the survey ran for four weeks in September and early October 2021 with students from thirty-eight universities taking part.

The report tells us that some 10,000 students were selected from each university “to take part in the survey”, a total of 378,992 students, nearly 60,000 more than the 319,252 targeted in the 2017 report. We find the same silence and non-response from today’s students that we found in 2017. In the latest survey, a grand total of 43,819 students, or 11.6 per cent, responded. That 11.6 per cent is a percentage of the 378,992 targeted for inclusion and must in turn be divided by about three to give a percentage of the national student body. That is, the 2022 survey reflects the experiences of about 4 per cent of the tertiary students in Australia.
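The arithmetic is simple enough to check directly. A minimal sketch (the national enrolment figure of roughly 1.1 million is my round assumption, not a number from the report):

```python
# Response-rate arithmetic for the 2021 National Student Safety Survey.
selected = 378_992         # students invited across thirty-eight universities
responded = 43_819         # completed responses reported in the survey
national_body = 1_100_000  # assumed tertiary enrolment (round figure, not from the report)

response_rate = responded / selected
national_share = responded / national_body

print(f"Response rate: {response_rate:.1%}")           # 11.6% of those invited
print(f"Share of all students: {national_share:.1%}")  # 4.0% of the national body
```

Every headline percentage in the report is a percentage of that 11.6 per cent slice, not of the invited pool and still less of the national student body.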

What does the silence of 96 per cent of Australian tertiary students mean? And how can two successive national surveys, marshalling the heavy guns of the peak universities body, meet with silence from about 90 per cent of those surveyed, let alone from the whole cohort? The interesting and unasked question is, “What does this silence mean?” It could have generated the headline, “90 per cent of Uni Students Ignore Universities Australia Survey”.

Nevertheless, immediately and on cue, universities around Australia released their prepared response to the headline results, listing the good things done to change the course since 2017. They are surfing the wave of change that they are creating.

But I remain interested in the silence of the statistical lambs. The Australian Human Rights Commission’s Change the Course report from August 2017 was weak statistically. Weakness began with failing to ask why the vast majority of students surveyed failed to respond or chose not to respond.

In 2017, before Covid, there were more than 1.3 million tertiary students in Australia. The AHRC and its agencies set out to survey, professionally, over 300,000 tertiary students at a total cost of around a million dollars. As with the 2022 survey, only a fraction of the 319,252 students responded in 2017. The big news could have been, but wasn’t: “289,000 students made no response to Sexual Harassment and Assault Survey”.

The 2017 survey had a response rate of 9.7 per cent of the survey pool. I don’t want to make a technical comment about lack of statistical power but simply note that the silent majority outweighed respondents by almost ten to one.

What did concern me was that every subsequent figure and percentage for a rate of assault, and each stat-based conclusion of the report, was in fact drawn from only 2.3 per cent of the student population—from 30,000 of 1.3 million students. This was never really acknowledged, and the figures were shouted out as reflective of the whole. The 10 per cent assault rates referred to are thus 10 per cent of this 2.3 per cent pool, not 10 per cent of the 1.3 million students. It is there that the question of false extrapolation into the silence and non-response of students becomes an issue. The silent are recruited as supporters of the responders.

The authors do say, in a place where the reader had to search for it as if for a needle in a haystack (page 33 of the 2017 report):

The survey data has been derived from a sample of the target population who were motivated to respond, and who made an autonomous decision to do so. It may not necessarily be representative of the entire university student population.

Understatement. It is not representative. The AHRC needed to speak candidly about why fewer than 10 per cent of those surveyed responded. What do the tiny resulting percentages really mean? No valid conclusions could be drawn from the data about the million-plus students who are not in the data field.

What do we know about the 1,270,000 students who made no comment? Or even about the 270,000 in the survey who did not respond? We can know nothing at all from the statistics presented.

Commentators used the numbers as if they applied non-problematically to everyone and to all. That was a mistake. The university sector has to date largely glossed over the matter. The slogan “one assault is one too many” is correct but it also covers a multitude of statistical sins.

This is the kind of concern that drove Bettina Arndt, Miranda Devine and perhaps Germaine Greer to maintain that the problem of sexual assault amongst students has not been properly identified let alone described or quantified. The non-response of the vast majority of students can be as much the news as the response of the few who completed the surveys.

The substantive questions raised by further, confidential, narrative reporting of incidents of assault (mentioned in the 2017 Change the Course) raise different and deeply engaging matters. But these cannot be scoped by those who will never read the harrowing contents. Any and every direct report of assault and sexual assault made by a student on campus should be investigated. But investigated by whom? The media? The vice-chancellor? A committee of qualified people reporting to the vice-chancellor? A head of residence? An external professional mediator? Or the police and the courts? And at what trauma to participants? And to what outcome? And with what appeal process and at what cost and at what publicity to the participants? Each path has embedded challenges. Anyone who has responded to specific, actual incidents is fully aware of these challenges, or soon becomes aware of them.

The current National Student Safety Survey seems to say this about the statistical approach taken, given the completion rate of a mere 11.6 per cent or a non-response rate of 88.4 per cent:

To ensure survey results were as representative as possible of the student population, weights were calculated for each respondent. The approach to deriving weights involved calibrating to match population benchmarks for a range of respondent characteristics. These included the strata variables (gender at enrolment, year of study, residency, level of study) and three additional variables found to be related to non-response (age, country of birth and field of education). The weights were calculated separately for each institution and account for non-response bias and selection bias on the benchmark variables used to generate more precise survey estimates.
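The calibration the report describes is, at bottom, cell weighting: each respondent is scaled by the ratio of their group’s share of the enrolled population to its share of the respondents. A minimal sketch with invented groups and proportions (nothing below is taken from the survey):

```python
# Cell-weighting sketch: weight respondents so that group shares in the
# sample match group shares in the enrolled population.
# All groups and figures here are invented for illustration only.
population = {"female": 0.55, "male": 0.45}   # assumed enrolment shares
respondents = {"female": 0.70, "male": 0.30}  # assumed respondent shares

weights = {g: population[g] / respondents[g] for g in population}
print(weights)  # under-represented males get a weight above 1

# A weighted estimate re-scales each group's reported rate accordingly,
# but no weight can recover information about those who never answered:
# weighting corrects sample composition, not silence.
sample_rates = {"female": 0.18, "male": 0.10}  # invented incident rates
weighted_rate = sum(respondents[g] * weights[g] * sample_rates[g] for g in population)
print(round(weighted_rate, 3))
```

The technique adjusts for measurable imbalances (gender, age, field of study) among those who did respond; it cannot say anything about the 88 per cent who did not.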

When I read this account of weighting, I was reminded of lines by W.H. Auden in “The Shield of Achilles”, from 1952. He wrote:

Out of the air a voice without a face
Proved by statistics that a cause was just
In tones as dry and level as the place:
No one was cheered and nothing was discussed.

Much more discussion is needed. The question of the silent ones needs to be asked.

When Dr Ivan Head chaired a Human Research Ethics Committee constituted under the NHMRC, he was routinely made aware of the issue of statistical power and its absence in some research projects.

6 thoughts on “The Academic Beatup of Sexual Statistics”

  • lbloveday says:

    “Commentators used the numbers as if they applied non-problematically to everyone and to all”.
    And as if all allegations of harassment or assault were factual.

  • DougD says:

    And those who write their pieces in the media call themselves journalists. All they seem to do most of the time is recycle press releases.

  • Brian Boru says:

    Thank you Ivan for exposing the corruption involved in the trumpeting of the reports of this pseudo survey. I remember at the time being incredulous.

  • Katzenjammer says:

    Why do these academics choose to slander their compatriots in academia as sexual predators, especially in targeting the more vulnerable? Are they just typical innumerate users of statistics, or do their corrupted results fit an agenda?

  • wdr says:

    Another instance of the title of a book published in the US in the 1950s: “How to Lie With Statistics,” at which the Left and the media are particularly adept.

  • 27hugo27 says:

    They really are hell bent on dividing the sexes, waging a phony war on men, and white men in particular. But what is the end game? Have they thought it through? There will be no winners here as another nail gets hammered into the west’s coffin. On another, related note, Greens leader Bandt has removed the Australian flag from his recent press conference, with only the Aboriginal and TSI flags behind him. I’m sure the ABC/SBS will embrace him even more, as our taxes pay for the lot of them!
