Inside Story

Has NAPLAN failed its most important test?

Uncertain goals and doubts about effectiveness have prompted a major reappraisal

Tom Greenwell, 1 October 2019

The debate about NAPLAN often conflates the test itself and the merits of making the results public. Lincoln Beddoe/iStockphoto


NAPLAN Online must have seemed like a great idea at the time. Australian schoolchildren in years 3, 5, 7 and 9 were already sitting the National Assessment Program — Literacy and Numeracy test each May, but the results weren’t coming back until September. Eight months into the school year, they were unlikely to be useful; four months after the tests were taken, they may well have been redundant.

Migrating the test online promised to speed up the turnaround while delivering another benefit. For students at either end of the learning spectrum, a one-size-fits-all test can indicate little more than the fact they are utterly overwhelmed or, at the other extreme, all over it. An adaptive online test could serve up increasingly tailored questions and provide a granular picture of what each child knows.
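To make the mechanics concrete, here is a minimal sketch of one way an adaptive test can tailor its questions: step the difficulty up after a correct answer and down after a wrong one. The item bank, starting level and one-step rule are illustrative assumptions for this sketch only, not ACARA’s actual design, which is reported to branch students between pre-assembled sets of questions rather than item by item.

```python
import random

# Illustrative item bank: five questions at each difficulty level from 1 (easy) to 10 (hard).
# The bank, starting level and one-step up/down rule are assumptions made for this sketch,
# not the branching design NAPLAN Online actually uses.
ITEM_BANK = {
    level: [f"difficulty-{level} question #{i}" for i in range(1, 6)]
    for level in range(1, 11)
}

def run_adaptive_test(answers_correctly, num_items=10, start_level=5):
    """Serve num_items questions, stepping the difficulty up after a correct
    answer and down after an incorrect one; return the (level, correct) path."""
    level = start_level
    path = []
    for _ in range(num_items):
        question = random.choice(ITEM_BANK[level])
        correct = answers_correctly(question, level)
        path.append((level, correct))
        # Tailor the next question: harder if the student was right, easier if not.
        level = min(10, level + 1) if correct else max(1, level - 1)
    return path

# A simulated student who can handle anything up to difficulty 7: the test quickly
# settles around levels 7 and 8, a finer-grained picture than a fixed paper gives.
if __name__ == "__main__":
    student = lambda question, level: level <= 7
    for level, correct in run_adaptive_test(student):
        print(level, "correct" if correct else "incorrect")
```

Because the questions converge on the level a student can actually handle, the end point says something useful about strong and struggling students alike, where a one-size-fits-all paper merely confirms that they aced it or drowned.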

That was the theory. In practice, NAPLAN Online has been bedevilled by setbacks and snafus to the point that its very existence is in doubt. First, the rollout was repeatedly delayed. Then, after 15 per cent of schoolchildren sat the online version in 2018, it was revealed that the NSW education department had told its minister that their results couldn’t reliably be compared with those of students who had done the test the old-fashioned way.

It got worse. In May this year, around half of Australian students sat the test online — or tried to. Many were treated to a smorgasbord of technical glitches, from difficulties logging in, to connections dropping out, to the whole test freezing mid-answer or failing to register responses at all. Ultimately, 30,000 students had to resit, further complicating comparisons of results across schools and over time.

All this was a gift to NAPLAN’s longstanding critics. “NAPLAN really is a dud,” declared Maurie Mulheron, president of the NSW Teachers Federation. “We really need to slow the whole process down and review what kind of testing regime we want in this country.” But it wasn’t just the usual suspects joining in the chorus of condemnation, and the criticisms extended beyond the technical glitches. Former NSW education minister Adrian Piccoli took to Twitter to declare “NAPLAN is dead.” (His successor as minister, Rob Stokes, had last year called for the test to be scrapped.) Victoria’s education minister, James Merlino, endorsed the idea of a root-and-branch review, telling media, “We’ll be considering our future involvement with NAPLAN in the coming months.”

Three reviews came to pass. The first investigated the technical problems; the second looked at whether pen-and-paper results could validly be compared with the online tests. Third, and most significantly, the NSW, Queensland and Victorian governments instigated a comprehensive review of whether NAPLAN’s aims are being realised. “The review may lead to significant change or it may recommend scrapping NAPLAN altogether and replacing it with something new,” Merlino said when the terms of reference were released last month. An interim report is due in December with the final report to be released in June next year.


Public discussion of NAPLAN often conflates two things that are logically, at any rate, quite distinct: the nature of the test itself and the merits of making the results public on the My School website. We often think we’re talking about NAPLAN when we’re actually arguing about My School. The terms of reference for the three-state review suggest that, if it is done well, we could at last get some clarity.

The review’s first task will be to “determine what the objectives for standardised testing in Australia should be, given its evolution over time.” Is NAPLAN designed to promote “individual student learning achievement and growth,” for instance, or improvements in individual schools, or “system accountability and performance”? The key question here is whether one of the main purposes of the test is to provide “information for parents on school performance” via My School (and, if so, why a decade of doing just that seems to have done so little to drive “school improvement”).

When it was introduced in 2008, NAPLAN attracted little attention and even less controversy. In this respect (and others), it followed in the footsteps of its state-based predecessors — the NSW Basic Skills Testing Program, for instance, and the Victorian Learning Assessment Program. The 2006 decision by state and federal education ministers to establish a national assessment program, the development of the tests during the final years of the Howard government, and the inaugural NAPLAN test may have generated a few headlines, but they were hardly the stuff of animated conversations around Australian barbecues.

That all changed with the prospect that NAPLAN results would be published on a school-comparison website, enabling parents to choose the winning schools and encouraging the losers to lift their game.

The first sign came in August 2008, when the Australian reported that education minister Julia Gillard had met in New York with Joel Klein, the man who ran the city’s education system, to discuss his method of ranking schools from A to F, based on student test results. Schools that got an A or B received financial rewards; schools graded D to F were restaffed, restructured or closed.

Although Gillard made clear that she didn’t intend to implement a system of grades or introduce the accompanying carrots and sticks, Labor had announced during the 2007 election campaign that “publication of school performance information will form an integral part of federal Labor’s plan to improve literacy and numeracy.” Gillard reiterated that promise after the election, and clearly believed that much could be learned from Klein’s example. In November 2008, Klein paid a return visit, praising the deputy prime minister profusely for her commitment to education reform. “The level of courage in a public official isn’t as rare as I sometimes thought,” he said.

At this point, NAPLAN started to attract attention in spades. At the annual conference of the NSW Teachers Federation in July 2009, for example, Gillard’s proposals were condemned as an attempt to “introduce inappropriate market competition mechanisms into the sphere of education and do away with any culture of cooperation between schools and teachers.” (A boycott of NAPLAN was only forestalled when Gillard made concessions on the presentation of results, the measurement of students’ social backgrounds, and the rights of third-party publishers.)

The launch of My School the following January precipitated headlines of the “how your school rates” variety across the nation. Millions of visitors descended on the site, giving it a legitimacy that its close ideological cousins, Grocery Watch and Fuel Watch, never attained.

Fame can change not only a person but also, it would appear, a national assessment program. With school reputations on the line and pressure cascading down from principals to classroom teachers to students, NAPLAN was now a high-stakes test. It would be the core element in any school’s marketing strategy, the main issue on every principal’s mind and the first item at many a school staff meeting. For apostles of choice and competition like Gillard and Klein, this was the point — and the secret to school improvement. Critics counter that NAPLAN results are a misleading way to measure and compare schools, and that publishing them carries side effects damaging enough to do more harm than good.


Adjudicating this debate and determining the proper purpose of NAPLAN will require the three-state review to disentangle the test from the website that made it a household name. Specifically, the review is charged with assessing how NAPLAN aligns with the Australian Curriculum — a polite reference to the view that the test, turbocharged by My School, has led to a narrowing of what is taught in schools. The Australian Curriculum spells out seven “general capabilities,” of which numeracy and literacy, the subject of the NAPLAN tests each May, are just two. (The others are critical and creative thinking, personal and social capability, ethical understanding, intercultural understanding, and information and communication technology capability.) Moreover, as the Gonski Institute’s Pasi Sahlberg has pointed out, “what is tested is only a subset of the broader areas of literacy and numeracy and an even smaller subset of the curriculum as a whole.”

The question before the review is whether the incentive to devote teaching and learning time to preparing for NAPLAN tests has intensified schools’ focus on maths and English at the expense of science, the humanities, languages, the arts and information technology; whether prioritising strategies for answering multiple-choice questions and coping with exam conditions has come at the expense of cultivating children’s capacity for higher-order thinking; and whether privileging a formula for writing a story in response to a random stimulus has come at the expense of teaching self-expression through poetry, or of giving kids the opportunity to interview a member of their family about a life-defining moment and write a piece of biography. These alternatives are not in themselves mutually exclusive, but the curriculum is crowded and, in practice, time devoted to NAPLAN preparation comes at a cost.

Another major matter the review will consider is the impact of the test on schools, students and the community. With NAPLAN receiving widespread media attention and schools under intense scrutiny, it isn’t surprising that impressionable young people are experiencing significant stress and anxiety around NAPLAN time. After all, in the case of year 3s, we’re talking about kids who might barely be able to tie their shoelaces being placed in quasi-exam conditions.

Whether or not the publication of NAPLAN results on My School has been positively harmful, what we do know is that after almost a decade it has delivered little in the way of school improvement. While the optimists point to small improvements in year 3 results, year 7s and 9s are now performing below the 2011 baseline in the writing test, and secondary-level scores for reading, spelling, grammar, punctuation and numeracy haven’t budged. International standardised tests indicate that in writing, maths and science, Australian students are, on average, well behind where their predecessors were a decade ago.

This shouldn’t come as a surprise. The “rare courage” that Joel Klein perceived in Julia Gillard was not applied to tackling the structural flaws in Australia’s education system. With the Howard government having presided over a massive escalation in federal funding for non-government schools, any transition to needs-based funding was hobbled from the start by Gillard’s stipulation that no school would lose a dollar in real terms; indeed, that commitment replicated one of the worst features of the Howard funding model. Providing necessary resources to public schools was subsequently delayed until 2019 (and has since been deferred until 2027).

Gillard didn’t go near the very different obligations falling on public schools, which must serve all-comers, and fee-charging non-government schools that can enrol (and expel) whom they wish. The division between schools whose students come mainly from disadvantaged backgrounds (and are often underfunded) and schools with large resource advantages and privileged student populations has only worsened, creating the perfect recipe for inequity and underperformance. By intensifying competition on a very uneven playing field, it’s likely that My School has made the structural weaknesses in our education system even worse.


So what are the alternatives? In a submission to the Council of Australian Governments in March, the Gonski Institute recommended that “the sole purpose of the national assessment and reporting system should be to monitor education system performance against the purpose of education, particularly on the issues of educational excellence, equity, wellbeing and students’ attitudes toward learning.” According to the institute’s director, Adrian Piccoli, “the current tests, where every student is tested in years 3, 5, 7 and 9, [should be] replaced with a sample-based test of students.”

This proposal would bring numeracy and literacy into line with science, civics and citizenship, and information technology, which are currently assessed with triennial sample-based tests. According to Piccoli, the publication of school-by-school results on the My School website would no longer be possible. “As a result, the high-stakes nature of the current national assessment program on both students and teachers would be dramatically reduced.”

One argument against a shift to sample testing is that NAPLAN results can facilitate candid conversations between parents and schools about children’s learning, in a way school grades and reports sometimes fail to. You don’t have to go far to hear an anecdote from parents who feel that NAPLAN results convey a reality about their child’s progress that has hitherto been shrouded in supportive platitudes. The three-state review could make a useful contribution by exploring how general this phenomenon is, whether there are alternative ways of anchoring school-based assessment in national standards, and how the validity and effectiveness of school-based student assessment might otherwise be enhanced.

As for what will actually improve the education we are providing to our nation’s young people, including their numeracy and literacy, the Australian has been enthusiastically exploiting declines in measured student performance to revive the argument that money doesn’t really matter. “$20bn Flop: Schools Fail to Lift Kids” ran a recent headline above an article by education writer Rebecca Urban. “Critical literacy and numeracy skills of Australian students are languishing,” Urban wrote, “despite government funding for schools soaring by more than $20 billion over a decade.” The point was reiterated in the paper’s editorial that day and repeated the following week in a piece by former editor Chris Mitchell praising the conservative media outlets, like his, “that argued Australia is not getting value for its spending on schools.”

Mitchell, Urban and the Australian’s editorial failed to mention what the funding increases look like when they’re adjusted for inflation and population or, more importantly, which schools have actually benefited from them. This information is readily available. The prolific education policy analyst and former Productivity Commission economist Trevor Cobbold crunched the numbers in June, and it turns out that between 2009 and 2017 “total real income per student in public schools fell by $58 per student (–0.5 per cent) but increased by $1888 (17.8 per cent) in Catholic schools and by $2306 (15.1 per cent) for Independent schools.”

At a time when real per-student funding was cut in public schools, combined government funding for Catholic and independent schools increased by more than a thousand dollars per student, in real terms. And, as Cobbold observed, the combined current commitment from federal and state governments will only bring public schools up to 91 per cent of their Schooling Resource Standard by 2027 (or even later in some jurisdictions). In other words, hundreds of public schools across the country are set to be significantly underfunded indefinitely. •