
The Approaching University Degree Bubble

Paul Oslington

Nov 01 2012

There has been a great deal of recent hand-wringing about the falling university entry scores that have accompanied the government drive to increase the number of students graduating from our universities. Greg Craven of the Australian Catholic University is right to point out, in a recent Australian opinion piece, that many published entry scores are fudged: by admitting students into courses in ways that preserve the published entry score, by creating boutique courses to generate inflated entry scores for the university, and so on. These games work for universities because, in a world of imperfectly informed students and parents, the entry score is used as a measure of course quality.

Economists recognise that the entry score is like a price. It reflects the number of places a university offers in particular courses and the demand for those courses. It doesn’t necessarily reflect the difficulty of the course (is science at the University of Sydney that much harder than science at the University of Western Sydney?), nor does it reflect the social value of courses (as those who are concerned about the state of teacher education have been stressing lately).

Economists would also recognise that a published entry score is the score of the marginal, or last, student allowed into the course, unlike a price, which is paid by all who purchase a product; in a market with some degree of monopoly power, price is an average rather than a marginal concept. So there might be students with very high scores in courses with low published entry scores. Economists would also recognise the incentives for universities to segregate markets and price-discriminate under these conditions, and there is plenty of evidence of this happening.

Entry scores are also unlike prices in that they are not paid by the buyer. There is a sense among students of not wanting to waste their marks by enrolling in a course with a low entry score, but there are no financial implications. One could imagine an alternative system in which each student receives a voucher whose value reflects their marks and perhaps also measures of their educational disadvantage.

Under our current Australian system, rather than paying for the course the student accumulates a debt to the government through the HECS system, which may never be repaid if the student does not earn enough to reach the income-contingent loan repayment threshold, or works overseas. And what students do pay is heavily subsidised, so to a significant extent it is the government, rather than the student, that is the buyer in this market.

Are entry scores the real quality issue? Greg Craven’s conclusion is that we shouldn’t worry about entry scores, because what matters is the quality of the output. Fair enough, if we had a good measure of educational quality; but we don’t. The current obsession with learning objectives and graduate attributes is essentially about describing what universities are doing in language fashionable among the current regime of educational bureaucrats (for whom Max Corden’s classic essay “Moscow on the Molonglo”, in Quadrant in November 2005, should be required reading). Lots of meetings, paper and box-ticking. But as students and academics know, this guff has little to do with the quality of what goes on at the educational coalface.

Good academics have always reflected on what they are trying to achieve in their courses, and pages of turgid bureaucrat-driven analysis of a course against high-sounding but usually vacuous statements don’t really help. I’ve never met a student who has actually read the guff in a course outline; at most they have a bit of a giggle before turning the pages to the course content and assessment. There is no real evidence that the current obsession with process makes any difference to the quality of education. Student surveys that define quality in terms of process (whether the course had clear objectives and such things) do not count as real evidence. Reviews of the wonderful documents universities produce to describe their teaching quality assurance processes make for an easy box-ticking life for the bureaucrats, and help some feel that universities are being made accountable, but they don’t tell us much about what actually goes on.

The bureaucrats get away with it, and manage to divert more resources from the coalface to themselves and their collaborators in the universities, because we have no good measures of the quality of output. Professor Paul Frijters of the University of Queensland gave a wonderful paper in the “Academiconomics” session at last year’s Economic Society of Australia annual conference. He estimates that about twenty cents of every dollar that goes into higher education reaches the coalface once the various levels of government and university bureaucrats and others have taken their cuts. I presented a paper on how some of these excesses of university bureaucrats might be restrained by creating a quasi-market in administrative tasks, inspired by the work of Ronald Coase. Gigi Foster of the University of New South Wales presented detailed evidence of inflated grades for full-fee-paying overseas students. Philip Clarke of the University of Sydney, who organised the “Academiconomics” session, spoke on the randomness of the evaluation of research grant applications.

A suggestion by a colleague at another university that the examination papers of a randomly selected pass student and a distinction student be published in the newspaper induced panicked expressions from that colleague’s local administrators. If by some miracle this move towards transparency happened, it would likely be defeated in any case by an intensification of the strategies currently used by some academics who are under pressure to pass students. Most academics’ consciences can’t abide putting a pass mark on an examination paper that is almost blank or, perhaps worse, is covered in scrawl completely devoid of the things that have traditionally justified a pass mark. To assuage conscience, the academic may give subtle or not-so-subtle clues about the examination paper so that students have the opportunity to compose or purchase an adequate answer in advance. Not good, but at least they have something in front of them on those dark nights of marking that they can tell themselves deserves to pass. Or they may resort to multiple-choice assessment where, given four answers to choose from, even the completely clueless student scores on average 25 per cent and may by pure luck end up with something close to the 30 to 40 per cent mark, which seems to be about the going rate for a pass at the bottom end of our university system. Or there is group assessment. And if the grade distribution still needs a bit of a boost, there is usually no difficulty in finding some basis for extra marks in the thick pile of special-consideration forms sitting on the academic’s desk.
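To put a rough number on the multiple-choice point, here is a minimal back-of-the-envelope calculation in Python. The twenty-question paper, the four options per question and the 30 per cent pass mark are assumptions chosen purely for illustration, not figures from any particular course.

    # Back-of-the-envelope sketch: the chance that a student who guesses every
    # answer on a 20-question, four-option multiple-choice exam reaches a
    # 30 per cent pass mark (6 or more correct) by luck alone.
    # All figures are illustrative assumptions, not data from the article.
    from math import comb

    n_questions = 20       # assumed exam length
    p_correct = 0.25       # one chance in four per question
    pass_threshold = 6     # 6/20 = 30 per cent, an assumed pass mark

    # Binomial probability of getting at least `pass_threshold` questions right
    p_pass_by_luck = sum(
        comb(n_questions, k) * p_correct**k * (1 - p_correct)**(n_questions - k)
        for k in range(pass_threshold, n_questions + 1)
    )

    print(f"Expected score from guessing: {n_questions * p_correct:.0f}/{n_questions}")
    print(f"Chance of reaching {pass_threshold}/{n_questions} by guessing alone: {p_pass_by_luck:.0%}")

On these assumptions the guesser’s expected score is 25 per cent, and the chance of clearing a 30 per cent pass mark by luck alone comes out at roughly four in ten, which is the mechanism the paragraph above describes.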

If we haven’t got good measures of quality, where does this leave us? Perhaps we could take an economist’s look at the incentives in the system for maintaining quality. In a normal market the dodgy operators don’t last, because people won’t keep parting with their money for an inferior product. However, this market discipline does not work well in the universities, because the buyers are mostly not parting with their own money; their choices about courses largely involve spending someone else’s money, as Andrew Norton’s recent Grattan Institute report points out. Perhaps this market discipline does work for private providers, so we would expect the dodgier private providers to be driven out more quickly than dodgy universities. Hence the dodgy operators that survive are more likely to be found within universities. And before the universities scream about the well-known problems of some private English-language colleges and hairdressing courses, let’s be clear that these were largely immigration scams created by perverse government policy. Perhaps the market discipline also works better at the top end of the university system than at the bottom, because students can transfer out of a course that they find is offering a poor-quality education. Universities that are losing lots of students to other universities should perhaps be viewed with suspicion by quality regulators.

The most important incentives for maintaining quality are those the individual academic faces when deciding how challenging to make a course, how hard to mark assignments and exams, and whether to massage the marks to meet the demands of university administrators. Some universities set quotas, such as a rule that no more than 10 per cent of students are to fail a course. In which case, if 5 per cent of the students don’t turn up for the exam, the no-shows alone use up half the permitted failures, and a student who does turn up really doesn’t have to do much more than hand in their assignment (written by them or otherwise) and be able to write their name (without assistance) legibly at the top of the examination paper. Students are pretty quick to work this out and calibrate their effort in the course accordingly. As well as quotas there are recommended grade distributions, enforced by threats from administrators that deviations will be a matter for the unfortunate academic’s next performance evaluation. Failed students obviously mean bad teaching. It has become widely accepted under the flickering fluorescent light of the dingy sessional staff room (to be contrasted with the palatial offices of the administrators) that sessionals are being paid to deliver a grade distribution, not an education, and that failed students queuing outside one’s administrator’s door are the surest way not to have one’s teaching contract renewed. In some universities the culture has become so corrupt that a new sessional who refused to divulge the examination paper was abused and threatened by students, and presumably punished in his teaching evaluations. Another who objected in a meeting to giving a credit to marketing students with a grade of 23 per cent soon had the Dean in his room pointing out the implications of such behaviour for his chances of contract renewal. Such harassed sessionals deliver a huge and growing proportion of courses, especially at lower-end universities.

Are university administrators or government bureaucrats evil people? Mostly not; they are just people responding to the incentives in a system where government funding is tied to passing students. We are all now familiar with that wonderful euphemism, the “student success rate”, which is sometimes merely a measure of the corruption of the educational process. The pressure to erode standards is even greater where full-fee-paying overseas students are involved: word quickly gets around among the Indian recruitment agents if a university starts failing students. Questions most of all need to be asked of the politicians and senior bureaucrats who are responsible for such an incentive system and who look the other way when the issue of quality is raised. And perhaps also of ourselves, who voted for such a system and feel so pleased when our children hold up their piece of paper on graduation day. One of the saddest things is seeing bright students who want to learn being short-changed at the bottom end of our university system.

By all means let’s have a system with less focus on entry scores, and strive to increase the opportunities for students from all backgrounds to get a university education, but we need either to come up with a better way of measuring the quality of the education graduating students have received or to remove the incentives to erode quality. Otherwise we may be dealing with a bursting low-quality university degree bubble, just like the sub-prime mortgage bubble that caused so much damage to our financial system not so long ago. Only this time it will be our young people’s futures on the line.

Paul Oslington is Professor of Economics at the Australian Catholic University and Vice-President (Academic) of the Economic Society NSW. Views expressed in this article should not be attributed to the university, or to the Economic Society.
