Inside Story

In a bubble on the web

What happens when the internet finds out what we like, asks Jason Wilson

Jason Wilson 12 October 2011 2473 words

Author Eli Pariser at last year’s Pop Tech conference. Kris Krüg/Pop Tech

The Filter Bubble: What the Internet Is Hiding From You
By Eli Pariser | Penguin | $32.95


RECENT events, including the Arab Spring and the mass release of diplomatic cables by WikiLeaks, have tended to strengthen the view that digital media technologies are engines of freedom. In shutting down the internet to hobble popular protests, corrupt regimes have shown that they fear its subversive power. And the American politicians and commentators who called for Julian Assange’s prosecution, imprisonment and even assassination seemed to confirm that our protean information environment is troubling the powerful in Western democracies as well.

In each of those cases, digital tools were used to outmanoeuvre governments’ apparatuses of censorship. As authors like Brian McNair have rightly pointed out, the internet has been central to a process whereby elites – in politics and the media – have lost control over the way information circulates. This may seem to be cheering news for those who think about censorship in the most conventional way – as something that repressive states inflict on private organisations and individuals. But what if the self-same tools used by revolutionaries and whistleblowers are eroding our privacy? What if the digital platforms used by new social movements have a fundamentally atomising effect? What if the most profound threats – in terms of privacy and censorship – are features of the most commonly used internet services?

In The Filter Bubble: What the Internet Is Hiding From You, Eli Pariser provides some discomfiting answers to these questions. Pariser is not a stereotypical internet wowser – he’s a self-identified geek who helped build MoveOn.org from a start-up to a paradigm-shifting online activist organisation. Arguably, the kind of activism defined by MoveOn was decisive in the election of the current US president. (You can see that argument made in the 2008 documentary, MoveOn: Storming the Gate.) He has been thinking about and working with the political potential of the internet all of his adult life. He is neither a cranky media nostalgist nor an old-school radical: he doesn’t dismiss the role of the internet in organising effective political action. Indeed, in the game of online activism, he is a major, innovative player.

But he is also a liberal democrat, and his enthusiasm for technology is tempered by concern about both the rights of the citizen and the health of democratic debate. The touchstone thinker Pariser keeps returning to in The Filter Bubble is John Dewey, whose ideal of democracy had at its heart the idea that citizens, through deliberation and communication, could develop an idea of themselves as “individually distinctive member[s] of community.” Pariser’s question is whether the distinctiveness of individuals is being used, in effect, to pry communities apart.

Pariser argues that the increasing personalisation of the internet is isolating us from one another and from unexpected or uncomfortable information, and it’s doing this by collecting information in ways we may not be aware of. His focus is largely on the giants whose services most internet users in the West use each day: the leading social network, Facebook, and the search behemoth, Google. By customising our experience of the internet, Pariser argues, social media platforms and search engines are filtering out material and people we disagree with, reducing our chances of meeting with unfamiliar ideas, and reducing the opportunities for deliberation that have always underpinned the hopes for the internet as a democratic tool.

Pariser shows how the drive to personalisation began with the best motives. In his 1995 book, Being Digital, MIT futurist Nicholas Negroponte dreamt of a personalised newspaper, the “Daily Me,” which would filter news into a concise, relevant, tailored experience. This was a somewhat utopian response to the already-apparent danger of information overload. Although early attempts were rudimentary, the idea of putting users at the centre of their own internet experience has become dominant and personalisation far more sophisticated. It can be discerned in a range of ubiquitous tools and services, from RSS to collaboratively filtered news services and social media.

Personalisation has come to define the success of the largest internet companies, which have crafted their versions of the Daily Me with different emphases and mixed motives. In its quest to deliver the most relevant results, Google’s search algorithm has grown in complexity (so much so, according to Pariser, that its own engineers no longer wholly understand it) and its range of services has expanded in an effort to feed more information about the user back into the customised search. If any company has gone further than Google in finding out about its users’ tastes and interests, it’s Facebook, which not only offers customised information feeds and personalised recommendations within its own walls, but also has an open ambition to put its own tools and information-gathering at the heart of the web.

Of course, these and other forms of personalisation rely on a massive harvesting of personal information about users. The more internet companies know about you – your likes and dislikes, your relationships, your hobbies, your age, your education and your home town – the better they can tailor their service. But this information is also useful in other ways – for one, it makes it possible to target advertising more closely. There’s a view, shared by Pariser, that the real customers of personalised internet services are the advertisers to which they offer targeted advertising opportunities. The users are the product.


FOR the uninitiated, the amount that individual companies know might seem chilling. On the basis of the information you give in searches and through services like Gmail, Google can make a decent guess about everything from your neighbourhood and your level of education to your income and your state of health, and it can cross-match this information with trillions of bytes of data about other users. Even when you’re not logged into your Google account, the company is trying its best to customise your anonymous search using your location and search history, and even the characteristics of your browser and computer.

Facebook, meanwhile, asks users to hand over a lot of information (and many comply) and then refines its picture of each of us according to our behaviour on the site. The company also gathers information about users, and networks of users, far beyond the main site, as they recommend or comment on the growing number of sites that are integrated with the service. Both companies are thinking more ambitiously about how they can develop these massive caches of data, and use them as the basis of what can only be termed artificial intelligence. Beyond the land of the internet giants, more obscure companies – providing information for everything from credit ratings to marketing campaigns – store data on private individuals with no accountability, little oversight, and no access for the citizen.

Apart from the obvious concerns raised by all of this, Pariser makes a further, and telling, argument about identity. Facebook supremo Mark Zuckerberg insists that a singular, “transparent” online identity is inevitable and desirable. But Pariser counters this, pointing out that a lot of what we do online is performative, and that human identity is adaptable, plastic. Our personalities are not consistent; our behaviour changes according to context. Personalisation can’t capture this, and this is precisely why it’s an attack on privacy. As Pariser points out, “one of the most important uses of privacy is to manage and maintain the separations and distinctions between our separate selves.” What he calls Facebook’s “bad theory of identity” means that its personalisation doesn’t work as well as it might. This isn’t a tragedy in itself, but it raises much broader questions about the assessments being made about us on the basis of the data that often-unaccountable commercial organisations have collected.

The old saw about people having the choice not to engage with these services if they are uncomfortable with their privacy policies falls apart pretty easily. It presupposes that users are able to make a fully informed, rational choice about the nature of the service, and neither Facebook nor Google stops children from signing away their privacy. It’s difficult to imagine being able to hold down a modern information-based job without using at least one of Google’s services, and the massive uptake of Facebook means that an increasing amount of social interaction in Western nations takes place within its walls.

More serious still, Pariser points to Facebook’s habit of retrospectively rewriting its terms and conditions to allow more information about users out into the public domain. With Zuckerberg in charge, you’re likely to get more than you signed up for. If you finally decide to opt out after a breach too far, there’s a worrisome lack of clarity about what happens to the information you have already handed over. To this we might add: is this really the best we can do in relation to privacy? Opt in and give away everything to a relatively unaccountable corporation, or stay out?


THE other adverse consequence of personalisation is the titular “bubble” it creates around us. As our information diet is increasingly tailored, the likelihood of coming across unfamiliar, unwanted or serendipitous information decreases. The engines of personalisation learn from what we click on, and from what we “like” or share, and use this information to refine our customised diet. This means we get more of the same. This is not some future possibility – even now, two users running the same search terms on Google will receive different results, sometimes markedly so. Increasingly, we are stuck in feedback loops where our own habits confine us to a narrower horizon of sources and kinds of information.

Potentially, this means that we will increasingly be steered towards people we already interact with, opinions we agree with, and fields of knowledge we are familiar with. Elements of the rich profile that these services have constructed will also play a part – when searching for a university to attend or a restaurant to visit, we will receive answers commensurate with our gender, income, education and suburb of residence. We will less frequently be challenged by the unexpected.

The dangers here are many. Unexpected or unfamiliar information is recognised as a key element in processes of creativity and innovation, but we could also miss out on getting better information because a service has developed a flawed picture of us. Most worryingly of all for Pariser, we may find that we are less often confronted by opinions, or people, we disagree with, or by facts about the world that are nasty, complex or unpleasant. Through our natural inclination towards distraction and entertainment, we may reinforce a tendency for difficult material to be shut out of the bubble. Pariser dubs this the “friendly world syndrome,” and it is, functionally speaking, a form of soft censorship. You don’t have to burn books any more to limit the circulation of difficult or untimely ideas.

Indeed, we find here a resonance between Pariser and some academic authors who think about the way in which censorship is now embedded in the very technologies that appear to offer us unprecedented choice. Books like Raiford Guins’s Edited Clean Version: Technology and the Culture of Control show that, although overt state censorship has been complicated and undermined by global information networks, increasingly what we see, hear and read can be limited by functions built into media technologies. Rather than the state, most censorship is now carried out by means of filters hardwired into consumer hardware and software. To a large extent, censorship has ceased being the business of a centralised state and has become something that is enacted silently and unremarkably in our living rooms and offices.

Pariser’s book gives us part of the larger story of our time, the fragmentation of shared media experience that necessarily means a fragmentation of publics in mediated democracies. Political scientist Markus Prior approaches this from a different angle when he talks about our “post-broadcast democracy,” in which deregulated, fragmented communications systems mean that some people can avoid political information altogether, and others can gorge on it. “To news junkies, politics has become a candy store,” says Prior. “Others avoid news altogether. Political involvement has become more unequal, and elections more polarised as a result.” Whereas before, a less “efficient” media environment meant almost all of us had at least a tenuous grasp on public events, now it is possible to be completely disengaged from official politics.

In the United States, low-information voters, who tend to be less politically committed, have a propensity not to vote. Sometimes, disengagement is accompanied by an ill-informed cynicism about the entire system of government. This encourages parties to use more partisan messages in order to get supporters out to vote, and it also encourages more partisan media coverage of politics, as media organisations seek niche audiences. In Australia, where people are compelled to vote, low-information voters with weaker political commitments tend to be the target of simplistic national election campaigns that define them as “swinging voters.” The arrival of pay and digital television, the proliferation of private media devices and experiences, and the massive expansion of the internet have had a cumulative impact much like Google’s personalisation algorithms writ large. They have allowed us to retreat into private worlds reflecting our already-formed preferences.

The Filter Bubble does a very good job of diagnosing the extent of these problems. Perhaps this is why Pariser’s proposed solutions seem rather weak by comparison. Some consumer activism; a call to start using technologies in ways that produce surprises; and a plea for companies themselves to take a more curatorial role, to try to engineer serendipity and the occasional confrontation with difficult things, and to offer users more control over their information. Given the mass uptake of the technologies in question, the ideological commitments of big players like Zuckerberg, and the massive incentives on offer for perfecting personalisation, none of this seems likely to shift the momentum of the changes he describes. Appealing to the better angels in the natures of software engineers, as Pariser does elsewhere in the book, also seems forlorn. But it’s very difficult to come up with any alternative solutions.

What’s likely is that a continued retreat into private, customised, friendly worlds will come at a cost for our democracies, our politics and the culture of our institutions. Unless we can adapt, politics as we have known it will be increasingly difficult to conduct. Increased personalisation, after all, implies a diminution of that which is public. •