A National Student Safety Survey was released on 23 March 2022 with media fanfare. Its full title is Report on the prevalence of sexual harassment and sexual assault among university students in 2021. The report is the work of the Social Research Centre for Universities Australia and it covers thirty-eight member universities. Headline and banner stories were prominent in today’s media. Most bulletins reduced the news to lists of percentages of students harassed or assaulted, and percentages by location. The stats were “facts”.
The Survey’s national snapshot says 43,819 students participated in the survey. Its key statistics are that one in six students (16.1 per cent) have been sexually harassed since starting university and one in twelve (8.1 per cent) in the last twelve months; and that one in twenty (4.5 per cent) have been sexually assaulted since starting university and one in ninety (1.1 per cent) in the last twelve months. When the figures are unpacked, it seems that far higher percentages of non-binary and pansexual students, and of students with a disability, are the victims.
As was the case with the prior 2017 report, Change the Course, I cannot but note the overwhelming scale of student non-response to the survey instrument. Non-response is the giraffe in the room. As far as I can see, it is not discussed.
On page 82 of the Survey we read that a pilot test was run at ANU and Charles Sturt before the survey was rolled out. The pilot was limited to 2000 selected students, yet we are told that ‘a total of 332 students participated in the pilot survey for a completion rate of 16.6 per cent.’ Each respondent spent on average 11.2 minutes on the task. The final form of the survey ran for four weeks across September and early October 2021, with students from all the Universities Australia member universities taking part.
The report tells us that 10,000 students were selected from each of the 38 universities ‘to take part in the survey’, and thus 378,992 students were selected for the national survey. This total is almost 60,000 more than the 319,252 students targeted in the 2017 report. We find the same silence and non-response from today’s students that we found in the 2017 cohort. Thus, in the 2021-2022 Survey, a grand total of 43,819 students, or 11.6 per cent, responded. That 11.6 per cent is a percentage of the 378,992 targeted for inclusion; since that pool is itself less than a third of the national student body, the figure shrinks to roughly a third of itself when taken as a share of all students. That is, the current 2022 survey relates to under 4 per cent of the tertiary students in Australia.
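The arithmetic above can be checked in a few lines; a minimal sketch, with the roughly 1.3 million national enrolment taken as the approximate figure this article uses:

```python
# Quick check of the 2021-2022 response-rate arithmetic described above.
# The national enrolment figure is an approximation, not from the report.
targeted = 378_992      # students invited across the 38 universities
responded = 43_819      # completed surveys
national = 1_300_000    # approximate national tertiary enrolment

rate_of_pool = responded / targeted * 100
rate_of_national = responded / national * 100

print(f"Share of the invited pool: {rate_of_pool:.1f}%")      # ~11.6%
print(f"Share of all students:     {rate_of_national:.1f}%")  # ~3.4%
```

The second figure is why the headline percentages describe well under 4 per cent of Australian tertiary students.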
I simply want to ask: what does the silence of 96 per cent of Australian tertiary students mean? And why can two successive national surveys, marshalling the heavy guns of the peak body Universities Australia, be met with silence from roughly 90 per cent of those surveyed, let alone from the whole cohort?
The interesting and unasked question is ‘What does this silence mean?’ It could easily have generated the headline ‘90 per cent of Uni Students Ignore Universities Australia Survey.’
Immediately and on cue, universities around Australia have released their pre-prepared response to the headline results, listing the good things done to change course since 2017. They are surfing the wave of change that they are driving.
But I remain interested in the silence of the statistical lambs.
The Australian Human Rights Commission’s (AHRC) Change the Course report of August 2017 was statistically weak. The weakness began with its failure to ask why the vast majority of students surveyed did not, or chose not to, respond.
In 2017, pre-COVID, there were more than 1.3 million tertiary students in Australia. The AHRC and its agencies set out to survey, professionally, over 300,000 tertiary students at a total report-cost of around a million dollars. As in the 2022 survey, only about 30,000 of the 319,252 students responded in 2017. The big news could have been but wasn’t: ‘289,000 students made no response to Sexual Harassment and Assault Survey’.
The 2017 survey had a response rate of 9.7 per cent of the survey pool. I don’t want to make a technical comment about lack of statistical power but simply note that the silent majority outweighed respondents by almost ten to one.
What did concern me was that every subsequent figure and percentage for a rate of assault, and each stat-based conclusion of the Report, was in fact drawn from only 2.3 per cent of the student population; i.e., from 30,000 of 1.3 million students. This was never really acknowledged, and the figures were shouted out as reflective of the whole.
The 10 per cent assault rates referred to are thus 10 per cent of this 2.3 per cent pool, not 10 per cent of the 1.3 million students. It is there that the question of false extrapolation into the silence and non-response of students becomes an issue. In short: the silent are recruited as supporters of the responders.
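The same arithmetic applies to the 2017 figures; a short sketch using the approximate numbers this article cites:

```python
# Rough check of the 2017 extrapolation problem described above.
# Both figures are the approximate ones used in this article.
respondents_2017 = 30_000
national = 1_300_000

share_of_all = respondents_2017 / national * 100
print(f"Respondents as a share of all students: {share_of_all:.1f}%")  # ~2.3%

# A "10 per cent" rate measured within the respondent pool, if not
# extrapolated to the silent majority, describes only:
rate_within_pool = 10.0
share_described = rate_within_pool / 100 * respondents_2017 / national * 100
print(f"As a share of all students: {share_described:.2f}%")  # ~0.23%
```

Whether the remaining 97.7 per cent resemble the 2.3 per cent who answered is exactly the question the data cannot settle.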
Buried on page 33 of the 2017 Report, where the reader must search for it as if for a needle in a haystack, is this (emphasis added):
The survey data has been derived from a sample of the target population who were motivated to respond, and who made an autonomous decision to do so. It may not necessarily be representative of the entire university student population.
Understatement. It is not representative. The AHRC needed to comment candidly on why fewer than 10 per cent of those surveyed responded to its Survey. What do the tiny percentages that result really mean? No valid conclusions could be drawn from the data about the million-plus students who are not in the data field.
What do we in fact know about the 1,270,000 students who made no comment? Or even about the 289,000 in the Survey pool who did not respond? We can know nothing at all from the statistics presented.
Commentators used the numbers as if they applied non-problematically to everyone and to all. That was a mistake. The university sector has to date largely glossed over the matter. The slogan ‘one assault is one too many’ is correct but it also covers a multitude of statistical sins.
This is the kind of concern that drove Bettina Arndt and Miranda Devine, and perhaps Germaine Greer, to maintain the problem of sexual assault amongst students has not been properly identified let alone described or quantified. The non-response of the vast majority of students can be as much the news as the response of the few who completed the surveys.
The substantive questions raised by the further, confidential narrative reports of incidents of assault (mentioned in the 2017 Change the Course) are different and deeply engaging matters. But these cannot be scoped by those who will never read the harrowing contents. Any and every direct report of assault and sexual assault made by a student on campus should be investigated. But investigated by whom? The media? The vice-chancellor? A committee of qualified people reporting to the VC? A head of residence? An external professional mediator? Or the police and courts? And at what trauma to participants? And to what outcome? And with what appeal process, and at what cost and publicity to the participants? Each path has embedded challenges.
Anyone who has responded to specific, actual incidents and how they ‘track’ is fully aware of these challenges, or soon becomes aware of them.
The current National Student Safety Survey says this about the statistical approach taken, given a completion rate of a mere 11.6 per cent, or a non-response rate of 88.4 per cent:
To ensure survey results were as representative as possible of the student population, weights were calculated for each respondent. The approach to deriving weights involved calibrating to match population benchmarks for a range of respondent characteristics. These included the strata variables (gender at enrolment, year of study, residency, level of study) and three additional variables found to be related to non-response (age, country of birth and field of education). The weights were calculated separately for each institution and account for non-response bias and selection bias on the benchmark variables used to generate more precise survey estimates. (Page 83)
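The report’s exact calibration procedure is not reproduced here, but the simplest version of what it describes is post-stratification: each respondent in a benchmark cell is up- or down-weighted until the sample matches known population shares. A minimal sketch with purely illustrative numbers (none are from the Survey):

```python
# Minimal post-stratification weighting sketch. All counts are invented
# for illustration; cells here are (gender, year of study) combinations.
population = {          # benchmark counts for each cell
    ("woman", "1st"): 5000, ("woman", "later"): 7000,
    ("man",   "1st"): 4000, ("man",   "later"): 6000,
}
sample = {              # respondent counts in the same cells
    ("woman", "1st"): 120, ("woman", "later"): 200,
    ("man",   "1st"): 40,  ("man",   "later"): 90,
}
n_pop = sum(population.values())
n_samp = sum(sample.values())

# weight = population share of the cell / sample share of the cell
weights = {
    cell: (population[cell] / n_pop) / (sample[cell] / n_samp)
    for cell in population
}
for cell, w in weights.items():
    print(cell, round(w, 2))
```

Under-represented cells receive weights above 1 and over-represented cells weights below 1. The limitation, which is the point of this article, is that weighting can only correct for the variables that are measured; it cannot show whether the 88.4 per cent who stayed silent differed on the very thing the survey asks about.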
When I read this, I was reminded of lines by W. H. Auden in The Shield of Achilles from 1952. He wrote:
Out of the air a voice without a face
Proved by statistics that a cause was just
In tones as dry and level as the place:
No one was cheered and nothing was discussed.
Much more discussion is needed. The question of the silent ones needs to be asked.
Ivan Head holds a PhD from Glasgow University and studied Philosophy at UWA and Philosophy and Classical Indian Thought at Melbourne University. When Chair of a Human Research Ethics Committee constituted under the NHMRC, he was routinely made aware of the issue of statistical power and its absence in some research projects.