Inside Story

Citizenship for beginners

The Howard government made it harder for some nationalities to become citizens, and Labor has made it worse, writes Kerry Ryan

Kerry Ryan 16 April 2012 4433 words

Photo: Neerav Bhatt/Flickr



THE 27th of January each year is a no-brainer for a newspaper editor with a little space to fill. Like the proud snap of a loving mother with her newborn baby captured a tick or so into the New Year, the late-January photo of a flag-waving new citizen outside a town hall somewhere on Australia Day is part of every summer.

This year, the Age went with a Sudanese family: Dad, Mum and four kids. The accompanying article told a story of more than two decades spent on the run from appalling violence, first in wars and then in refugee camps. Now that Abraham Biar Koul Biar has collected his Australian citizenship certificate with a bullet still in his arm, everyone in the family can proudly and happily call themselves Australian. All except his wife Achol, that is. She failed the citizenship test. “I don’t know computers well,” she told the Age.

For a refugee woman from a non-English-speaking background with young children to care for, this is hardly surprising. Statistics obtained from the immigration department under freedom of information legislation make it even less so. They also show that the test has become harder for people like Achol to pass since Labor’s changes in October 2009.

Of the many arguments put forward by those who thought the citizenship test would be troublesome, one of the most emphatic was that women in these circumstances would struggle. Amid the more urgent tasks of raising children and adjusting to a new home, a new country and a new life, English language classes and computers, no matter how accessible, understandably come low on the list.

Such an argument was never likely to stop parliament from introducing the test, however. Women from non-English-speaking backgrounds were, after all, the reason for the test in the first place. Or so says the Liberal frontbencher Andrew Robb.

In Black Dog Daze, a book about his long struggle with depression, Robb credits himself with being the initial driving force behind the citizenship test. He was motivated, he says, by what he had learned about refugee women who couldn’t speak English, many of whom he encountered in his role as parliamentary secretary of what was then called the Department of Immigration and Multicultural Affairs. The experience gave him flashbacks to his childhood in Reservoir, a northern suburb of Melbourne with a history of attracting immigrants.

Robb writes that many of his Italian school friends acted as interpreters for mothers who, despite many years in Australia, couldn’t speak English. His friends’ fathers, on the other hand, could; they had learnt it while working. Even now, years later, says Robb, many of these immigrant Italian women still can’t speak English and are unable to communicate with their grandchildren and great-grandchildren because the younger generations don’t speak Italian.

Robb’s wife, in her work as a volunteer at a medical centre in Melbourne, had also encountered refugee women, newly arrived expectant mothers from various parts of the globe who were “timid and apprehensive” because they couldn’t speak English. Most of their husbands could, writes Robb; again, they had learnt it on the job.

Robb decided that refugee women were not taking advantage of the government-funded English language lessons available nationwide, lessons with the potential to greatly improve their lives. Perhaps a gentle shove was in order. “I decided,” he writes, “that requiring a basic level of English competence to get citizenship would provide an enormous incentive to learn English.”

Robb’s idea was a shade grandiose, and more than a shade late. A basic level of English competence had been a requirement for Australian citizenship since 1984. Before that, the legislative requirement was for an “adequate” knowledge of English, a stipulation that had been in force since 1949 — as long as Australian citizenship itself. In any case, it was not the language requirement that was supposed to change with a citizenship test, just the way of determining whether a prospective citizen had reached it.

Prior to the written citizenship test, a candidate’s basic knowledge of English was assessed by a department official during the application interview. Questions were routine, and focused on the nature of the application and on personal details. The level of language difficulty did not extend far beyond “How long have you been here?” or “What are your children’s names?” A new test was needed, Robb decided, like those in Britain and the Netherlands. There, according to a discussion paper released by the Department of Immigration and Multicultural Affairs in 2006, “formal, consistent and objective” language tests operate with language proficiency requirements “defined in specific linguistic terms.” Everyone, it seems, nodded in agreement. Why wouldn’t they? Words like “formal,” “consistent” and “objective” make for a heady brew. Tests, especially ones with “specific” requirements, are wonderful things.

The announcement that there would be a new test for citizenship came at a joint press conference held by John Howard and Andrew Robb late in 2006. All applicants for Australian citizenship would soon be required to demonstrate a “working knowledge” of the English language and a basic understanding of Australian society, culture, values and history. The test, said Howard, was “about cohesion and integration.” Howard and Robb’s announcement was an anniversary gift; the Cronulla riots had taken place one year earlier, to the day, on 11 December 2005.

At that stage the details were sketchy. Robb knew nothing other than that the test would be computer-based and that thirty questions (later reduced to twenty) would be drawn randomly from a larger bank of two hundred. The test materials, he said, were yet to be written.

For John Howard, there was quite a bit to like about a citizenship test. The public was keen, so the research indicated, and while it was no Tampa or children overboard, it was a values debate at least. Howard was in electoral trouble and he knew that only a rabbit from a hat would stop Kevin Rudd, who after a week in the job was already looking like a big problem. Rudd played it cool. He had time, and nothing to gain by opposing a test that just might include a question on the meaning of “fair shake of the sauce bottle.”

The legislation sailed through both houses in the second half of 2007. Apart from one or two gratuitous swipes at Howard, mostly along party lines, only Liberal MP Petro Georgiou took a serious swing at it in the lower house. In the Senate, Andrew Bartlett of the Democrats and Kerry Nettle of the Greens led the dissent for their respective parties.

Georgiou, Bartlett and Nettle had plenty of friends outside of the parliament, however. Academics and lefties weren’t happy. They rarely are on such matters. In this case, however, their job was easy. Did someone say language test?

Australia’s policy-makers have, in living memory, shared a bipartisan willingness to use language tests for traffic control. While there are more recent examples, the baldest and most infamous on record, the dictation test, was the central plank on which the Immigration Restriction Act of 1901 was built, underpinning the White Australia policy for six decades.

Comparisons between the citizenship test and the dictation test were overblown, however. The dictation test was about stopping people at the point of entry or discouraging them from showing up at all. Arguably, such a test is closer to turning back the boats. Citizenship candidates, on the other hand, are already in the country, and have been for at least four years. The citizenship test, like citizenship itself, is not about who comes, but who belongs.


IT IS around four and a half years since the citizenship test was launched at the beginning of October 2007. It was of little help to Howard, of course, and just two months later a new immigration minister, Labor’s Chris Evans, was in charge. From the very beginning, statistics showed that humanitarian visa entrants struggled to pass while those from the skilled and family streams had little if any trouble. In April 2008, responding to concerns about this apparent inequity, Evans announced a review to be headed by former diplomat Richard Woolcott. His brief was to investigate any unintended consequences arising from the test.

Woolcott was no fan of the citizenship test. Indeed, he is on record after his appointment as saying that it was unnecessary. He was more circumspect in his report to the government, but not much. In Moving Forward: Improving Pathways to Citizenship, one of the committee’s first “key findings” reads: “The present test is flawed, intimidating to some and discriminatory. It needs substantial reform.”

Woolcott’s report, based on extensive investigation and wide consultation, made thirty-four recommendations, the great majority of which — twenty-six — were supported by the government. The main recommendation was that a revised test should focus on the responsibilities and privileges of citizenship rather than on what Woolcott termed “useful knowledge” for integrating into Australian society. A diplomat to the core, he was being polite, and refrained from repeating the scorn heaped on the original test booklet, Becoming an Australian Citizen, which was widely condemned for its jingoistic tone and content.

The review committee also spent considerable time trying to untangle the legislative requirements for demonstrating a “basic knowledge of the English language” and an “adequate knowledge of Australia and of the responsibilities and privileges of Australian citizenship,” two interminable, how-long-is-a-piece-of-string issues. It dealt with the latter by recommending that the civics be separated from the legend. In other words, the country’s notable achievers in sport, science and the arts, along with the stump-jump plough and the tourist attractions, while all “useful knowledge,” had no place in a test for citizenship.

The legislative requirement for a “basic knowledge of the English language” was not so easy. Woolcott and his team knew that Ingrid Piller and Tim McNamara, two prominent applied linguists, had analysed the language level of the test materials and determined that it was “out of the reach of a basic user of English.” They had drawn on the “basic user” levels in the Common European Framework of Reference for Languages, a tool designed for setting comparable standards of language learning, teaching and testing across the European Union. The framework describes six broad levels of language proficiency. Its lowest two levels, A1 and A2, describe a “Basic User” of a language in terms of language abilities. The upper level of basic user, A2, also known as the “Waystage,” describes learner abilities as “simple” and “routine” and in areas only of “immediate relevance.”

Throughout the debates before, during and after the introduction of the citizenship test, politicians, bureaucrats and academics offered a multitude of other ideas about what the appropriate language level for citizenship should be, using terminology like plain, simple and reasonable, or resorting to even more imprecise phrases like a practical command of the language, a commonsense level of English, a working capacity, sufficient for taking advantage of education and economic opportunities, or enough to read a memo or a safety sign, say hello to a neighbour or read a basic newspaper.

Mercifully, perhaps, Woolcott and his committee aligned with the Europeans, declaring that “a basic knowledge of the English language is having a sufficient knowledge of English to be able to exist independently in the wider Australian community and… resembles the A1/A2 level of English in the Common European Framework of Reference for Languages.” Though the definition presents no less of a headache, the committee’s recommendation was that the test materials be rewritten in this level of basic English.

The government endorsed the committee’s interpretation of basic English and acknowledged concerns that the language of the original test materials was beyond it. It then added these two sentences: “The government agrees with the committee that providing resources in plain English will support prospective citizens to prepare for the test. All related citizenship test resource material, including the resource book and questions, will be developed in plain English.”

Woolcott’s committee had made no such statement; the term “plain English” did not appear anywhere in its report. A “basic knowledge of the English language” had become “plain English” in the space of a paragraph. Whether intentional or not, this semantic shift was further evidence of the general confusion surrounding the language requirement for citizenship.

The revised Australian citizenship test was launched in October 2009. The new study booklet, Our Common Bond, was better organised and clearer about what would be tested, and most of the “useful” information from the original booklet was relegated to a non-testable section. The original test’s three mandatory questions, all of which had to be answered correctly in order to pass, were abolished, meaning that it was no longer possible to score 19 out of 20 and still fail. The critics were happy, or at least quieter than they had been the first time around. What most of them missed was the fact that the language level of the new booklet, which was now in “basic English,” “plain English,” or both, hadn’t changed to any significant degree. And along the way the government had increased the pass mark from 60 per cent (twelve out of twenty) to 75 per cent (fifteen).


THE initial impact of the changes on the pass rates of different groups was difficult to assess because statistics on the test were suddenly hard to come by. Information previously released in a relatively timely manner was now appearing much more slowly — so much so that by late last year the only way to extract the relevant data from the immigration department was by using freedom of information legislation.

Statistics released recently under FOI make it possible to compare pass rates of all nationalities for the original version of the test with those for the revised version. They show that the revised citizenship test is still flawed, is more intimidating for some applicants, and is even more discriminatory.

First, let’s look at the figures for the original test, which ran from 1 October 2007 to 18 October 2009. Among the countries of origin with 200 or more test “clients” (to use the department’s term), nine had average marks under 70 per cent for all test attempts: Afghanistan (59 per cent), Somalia and Burundi (63), Iraq (64), Sudan (65), Eritrea (66), Cambodia (67), Liberia (68) and Ethiopia (69).

In the revised version of the test, all but one of these nine countries recorded lower average marks, some considerably lower, during the period from its inception on 19 October 2009 up to the end of June 2011. For these clients, and many more besides, the revised citizenship test was proving to be tougher. Coupled with the higher pass mark of 75 per cent, these lower marks pushed citizenship even further out of reach.

Also revealing are the statistics covering the number of times clients from each country were taking the test. Under the original test it was relatively rare for the average number of tests taken per client to go above two. Among the exceptions were Tunisia’s seven applicants, who between them took twenty-five tests (an average of 3.57 tests per client) during the test’s first nine months of operation; all of them eventually passed. The only other countries to record averages over two tests per client during this period were Afghanistan (1013 clients took 2114 tests, or 2.09 tests each on average), Burundi (eighty-seven clients at 2.09 tests each) and Rwanda (thirteen clients at 2.08). In the period from 1 July 2008 until the old test ended on 18 October 2009, of countries with over fifty clients only Afghanistan and Iraq recorded more than two tests per client.

The revised test’s statistics are somewhat different. From its introduction on 19 October 2009 through to 30 June 2010, no countries recorded an average of more than two tests per client — an innocuous debut. But during the next twelve months, of the countries with more than fifty clients, Afghanistan headed the list at 3.78 tests per client. Afghans were not alone in their struggle to pass the new test inside three attempts. Of countries with more than one hundred clients, Cambodia (331 clients at 3.18 tests each), Iraq (619 at 3.06) and Sudan (472 at 3.04) were hit almost as hard. In all, twenty-four nationalities, ten of which contributed more than one hundred clients, needed more than two tests per client to pass the revised citizenship test.
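For anyone wanting to check the arithmetic, the “tests per client” figures above are simply the total number of test sittings divided by the number of applicants. Below is a minimal sketch of that calculation in Python, using only the totals quoted in this article; the function name is illustrative, not anything used by the department.

```python
# A minimal sketch of the "tests per client" average discussed above.
# Only totals quoted in the article are used; the function name is
# illustrative, not the department's.

def tests_per_client(total_tests: int, clients: int) -> float:
    """Average number of test sittings per applicant ('client')."""
    return total_tests / clients

# Original test, first nine months of operation
print(round(tests_per_client(25, 7), 2))       # Tunisia: 3.57
print(round(tests_per_client(2114, 1013), 2))  # Afghanistan: 2.09
```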

Remember that the citizenship test is about achieving social cohesion and integration. Remember too that the revised test is supposed to be serving those goals more effectively than the original.

It isn’t all bad news, however. Clients at the top end of the statistics have been served rather well by the changes. By getting rid of the mandatory questions in the revised test, those countries with average marks already comfortably above 75 per cent have had their chances of success enhanced. In the original test, for example, seven countries — Canada, the United States, Germany, Ireland, South Africa, Britain and Zimbabwe — appeared in the top ten performers (highest average mark) across all three reporting periods. These seven countries recorded fail rates, calculated as the number of failed tests divided by the number of clients and averaged across the three periods, ranging from 5.7 per cent (Zimbabwe) to 9.6 per cent (Britain). Under the revised test, these rates have dived, with six out of the seven countries recording average fail rates below 1 per cent across the two reporting periods. The exception is South Africa, whose ratio of fails to clients in the revised test was 1.2 per cent and 2.3 per cent in the two reporting periods, down from 7.1 per cent in the original test.

The differences between the high and low achievers across the two test versions are even more pronounced when compared in terms of total fails. Here it is instructive to look at the top ten and bottom ten countries (by highest and lowest average marks) for the three periods of the original test and the two periods of the revised test. In the original test, the ten lowest achieving countries (with more than 200 clients) supplied between 44 and 48 per cent as many clients as the ten highest achieving countries (also with more than 200 clients) across the three periods. In other words, from less than half the number of clients, the low achieving group recorded 3.7, 5.6 and 2.2 times as many fails. This disparity is, arguably, unsurprising. Some may even call it an acceptable difference. It is a test, of course, and tests always create winners and losers.

Anyone trying to argue an acceptable difference in the figures for the revised test, however, might find it a struggle. In the initial period of the revised test (19 October 2009 to 30 June 2010), the low achievers (1063 fails from 5048 clients) recorded 12.7 times as many failed tests as the high achievers (84 fails from 12,425 clients). In the second period, from 1 July 2010 to 30 June 2011, the 16,592 clients in the top ten countries recorded just 207 failed tests. Compare this with the 8748 failed tests from just 6133 clients in the low achieving countries over the same period. In other words, from just over one-third the number of clients, the low achievers recorded more than forty-two times as many fails as the high achievers. That’s quite a revision. Numbers like these point to either a spectacular success or an abject failure, with no middle ground anywhere, it seems.
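Again, for readers inclined to check, those multiples fall straight out of the fail and client counts quoted above. Here is a rough sketch in Python; the variable names are illustrative only.

```python
# A rough check of the fail-count comparisons above, using only the
# figures quoted in the article; variable names are illustrative.

# Revised test, first period (19 October 2009 to 30 June 2010)
low_fails, low_clients = 1063, 5048      # ten lowest achieving countries
high_fails, high_clients = 84, 12425     # ten highest achieving countries
print(round(low_fails / high_fails, 1))      # 12.7 times as many fails

# Revised test, second period (1 July 2010 to 30 June 2011)
low_fails, low_clients = 8748, 6133
high_fails, high_clients = 207, 16592
print(round(low_fails / high_fails, 1))      # 42.3: more than forty-two times as many
print(round(low_clients / high_clients, 2))  # 0.37: just over one-third of the clients
```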

The higher fail rate among some nationalities has fed into the overall figures on new citizens. The department’s annual reports show that in the three years prior to the introduction of the citizenship test, the overall approval rate for citizenship applications was generally around 96 per cent or just above. In the 2007–08 reporting year, the same year in which the citizenship test was introduced, the approval rate dropped to 91 per cent before increasing slightly to 92.5 per cent in 2008–09. Then, in 2009–10, the year in which the revised citizenship test began, the approval rate dropped slightly to 92 per cent. In 2010–11, however, the overall approval rate for citizenship applications dropped to 87.9 per cent.

Despite making citizenship harder to acquire, the government has made some concessions. A list of people exempted from the test includes those over sixty or under eighteen years of age, and people with a “permanent or enduring physical or mental incapacity.” Our Common Bond has been translated into thirty-seven “community languages,” which is good news for applicants from those communities, assuming of course that they are able to read in the first place. There is also an assisted test option in which an officer of the department reads the questions aloud for those who have literacy problems.

More promising, perhaps, is the introduction of a citizenship course as an alternative pathway to citizenship, which Woolcott called for in his report. Introduced in May 2010 and administered by the Adult Migrant English Program, “Our Common Bond: A Course in Australian Citizenship” is a twenty-hour, classroom-based course run over seven sessions. The course is available to those who have failed the standard or assisted test three or more times. Candidates cannot choose to do the course; they must wait for the department to contact them. According to the department’s 2010–11 snapshot report, twenty-seven such courses have been held, with 321 of the 355 participants passing.

It is worth remembering perhaps that all tests discriminate; all good ones anyway. That’s what they are designed to do, after all, and, for a couple of thousand years at least, they’ve been used with varying degrees of effectiveness and fairness to decide who gets what. It is also worth remembering that tests have been used for just as long for nefarious means — who gets their head chopped off or run through with a sword, for example.

The distinction is important because tests in general are given far too much credit as productive forces while their more deleterious effects are often papered over or ignored completely. Andrew Robb says his initial motivation for the citizenship test was to provide an incentive for migrants to learn English. Indeed, he credits the April 2006 speech in which he first raised the possibility of an Australian citizenship test as the cause of an almost immediate and “massive increase in the number of immigrant and refugee women enrolling in English language classes.” This is a bold claim, and one that would have been well served by evidence. Might any such increase have been at least partly driven by rising overall migrant numbers at the time?


DURING a radio interview with John Howard in September 2006, Neil Mitchell of Melbourne’s Radio 3AW asked the prime minister if the proposed citizenship test would make it more difficult to become a citizen. “It won’t become more difficult if you’re fair dinkum,” replied the PM, “and most people who come to this country are fair dinkum about becoming part of the community. I think most people will welcome it.”

Later in the same interview, Howard took a call from Jane, a teacher with the Adult Migrant English Program, who was concerned about her students, most of whom were refugees from Sudan and Afghanistan at the start of a long journey to even basic literacy. Howard interjected to tell Jane that the test would be applied after four years and that by then a person should have a “reasonable level of proficiency” in English. Jane didn’t get to finish her initial question and was given no opportunity to reply to the PM’s remarks. Perhaps she would have said, as Petro Georgiou would subsequently, that migrants are acutely aware of the need to learn English to improve their lot in Australia and that a test stacked against them was an odd way of goading them into it.

Howard’s comments reflected an almost universal ignorance, or perhaps lack of interest, among politicians, bureaucrats, academics and the public about what it takes to learn a language from scratch as an adult, as well as how languages are taught and how they are tested.

Consider just one sentence from Our Common Bond: “Anzac Day is observed on 25 April each year.” Such a sentence might be described as “plain” or “basic.” Teaching it ought to be a breeze. To “observe,” for instance, is a relatively straightforward concept: in its most common usage it means to “look at” or “watch closely.” Imagine teaching it in relation to a national holiday to a room full of adults with little or no English who want to know why people “look at” a holiday. Then imagine telling them that if “Christmas Day” is switched for “Anzac Day” and “December” for “April,” it sounds a tiny bit strange. Why? Not sure. And as a hand goes up to ask whether people “observe” New Year’s Day, Good Friday and so on, think about how many government-funded language teaching hours might be spent on semantics in pursuit of a pass in the citizenship test. Those hours could instead be spent on more concrete, “basic” English of the kind that would probably be more useful in students’ daily lives, and that would have the added advantage of matching the legislation.

Andrew Robb told journalist Laurie Oakes in September 2006 that the government would be taking a commonsense approach to the test and the citizenship language requirements. But that was precisely what the existing interview assessment did, and it was replaced by an ill-conceived, ill-designed, ill-directed and expensive new test that short-changes those already short-changed. In the interests of social cohesion and integration we now have a test that is withholding security and a vote from a group of people who probably need them both more than most. •