Inside Story

The tight-lipped champions of free speech

The social media giants say they’re dealing with online predators, but they really don’t want to talk about it

Ginger Gorman 22 March 2019

Facebook’s headquarters in Menlo Park, California. SiliconValleyStock/Alamy


On 9 February last year I gave evidence to a Senate committee hearing into the adequacy of Australia’s existing cyberbullying laws. Deep in the bowels of Parliament House in Canberra, alongside journalist and academic Jenna Price and reputation manager and chief executive Liza-Jayne Loch, I sat at a table facing the senators. The three of us were representing the non-profit volunteer organisation Women in Media. We read from our prepared statements and answered questions.

Directly after our evidence, representatives from Facebook and the non-profit Digital Industry Group Inc., or DIGI, were due to have their say. Noticing Mia Garlick, Facebook’s director of policy for Australia and New Zealand, I walked up to introduce myself. She was surrounded by a wall of mostly female staff.

“I just gave evidence,” I said, smiling.

“I heard your evidence,” she said, staring straight at me. She was not smiling.

I’m writing a book about cyberhate, I said, and would like to interview her. Could I have her business card? She said she didn’t have one on her.

“What’s the best way to get in touch, then?” I asked. “Can I get your email address?”

My paper and pen were poised. But she didn’t start spelling out her email address. She paused and mumbled something about how I could get it from Jenna Price. “She’s got it,” Garlick snapped, and it was clear our conversation was over.

This brief interaction turned out to be a marker of what was to come. Beyond public relations spin, it’s hard to get any real, in-depth and on-the-record answers from the social media companies about how they are tackling cyberhate. Despite their insistence on being platforms for and champions of free speech — “Twitter stands for freedom of expression for everyone!” — they are hell-bent on controlling the message and remain largely unwilling to be held to account.

Lawyer Josh Bornstein believes social media companies should be liable for cyberhate, and if they were we’d see a radical change in their behaviour. “If they had a duty of care, then I think their corporate behaviour would be very different.” Similar calls for legislated duty of care are coming from Europe. Quite simply, this legislation would put the onus on tech companies “to prevent reasonably foreseeable harms.” And if they didn’t, they would be liable.

Predictably, social media companies aren’t crazy about the idea. As Facebook’s submission to the Australian Senate hearings states: “Given the strong commitment of industry to promote the safety of people when they use our services, we believe that no changes to existing criminal law are required. If anything, we would encourage the Committee to consider carve outs from liability for responsible intermediaries.” In plain English, Facebook isn’t just seeking the status quo — it’s suggesting exemptions from prosecution.

Under the current system in Australia, social media companies are viewed as “partners” of the Office of the eSafety Commissioner, which was established by federal parliament in 2015. When it comes to cyberbullying, though, they can be issued with legally binding notices, or fined, if they don’t comply with the commissioner’s requests. “We haven’t had to use our formal powers once,” says commissioner Julie Inman Grant, who spent more than two decades working in policy and safety roles for companies like Microsoft, Twitter and Adobe. She believes that the system of treating social media companies as partners and giving them “the benefit of the doubt” is working. “They don’t want bullying. They don’t want that happening on their platforms.”

Encouragingly, the eSafety Office claims it has a 100 per cent compliance rate when it comes to getting cyberhate taken down. This isn’t to say that Inman Grant thinks the social media companies are at the top of their game when it comes to addressing online harassment. “They’ve made some incremental changes… but they haven’t been monumental changes,” she says. She sees the current historical moment as a “tipping point” of public anger, and is adamant: “We need to see meaningful transparency or radical transparency, if you want to call it that. Online safety is not a destination, it’s a constant journey.”

One of the things the eSafety Commissioner talks passionately about — and I have come to agree with — is a concept she calls “safety by design.” Her idea is that instead of safety being retrofitted after the damage is done, tech platforms must be engineered to protect us from the get-go. It strikes me as akin to cars being fitted with mandatory seatbelts and airbags in case of a crash.

To explain what happens in the absence of safety by design, she points to Facebook Live — for her, a “perfect example” of that failure. When Facebook Live came onto the market, two similar services, Periscope and Meerkat, were already in operation. It should have been clear from their experience that user safety would be a major issue. “Why did it take almost a dozen live-streamed rapes, murders and suicides for them [Facebook] to say, ‘Okay, well, we’re going to hire 3000 moderators’? They were so focused on getting out to market and gaining market share that they didn’t do the proper risk mitigation and risk management and try and build those in.”

Facebook responded to this criticism via email, stating: “All new features — including Facebook Live — go through rigorous internal review involving specialist privacy, safety and security teams.”

Neither Facebook nor Twitter agreed to nominate a representative for me to interview on the record. To give Facebook its due, its staff did attempt to answer my direct questions, and kept up correspondence and phone calls with me over many weeks. Both platforms, however, were guilty of providing long, prepared written statements that often smacked of public relations spin. Twitter directly addressed only two of the issues I raised with it: the first regarding the purchase of advertisements to perpetrate cyberhate, the second regarding the outsourcing of moderation overseas. My other thirteen questions, based on the comments and experiences I’d gathered for my book, received no direct answer.

At various times I wrote to both Twitter and Facebook, expressing my frustration at their insistence on tightly managing the message. After months of highly controlled communication with me, a Facebook staff member — who insisted on not being named — asked me why the media doesn’t accurately report on what Facebook is doing in regard to user safety; I nearly laughed.

In part my response reads: “This is all about how much trust is in the bank — which is low at the moment due to issues you are already aware of.” (We had discussed Facebook’s data breaches, and how dimly the public viewed them.) Getting accurate coverage, I explain, is more likely “if you have those human relationships with journalists” and allow leadership to be interviewed because “this gives the appearance of being in open and honest communication with the public.”


After three months of corresponding with Facebook, the company offers me a meeting with Mia Garlick. This is not the interview I’ve requested but it’s better than nothing. Perhaps Garlick won’t remember our last encounter, but on the day I’m nervous. I put on more makeup than usual — my sister fondly calls this my war paint — along with shapewear and a floral shirt.

Most of the building’s occupants are listed in the foyer of this nondescript high-rise in Sydney’s CBD, but not Facebook. Unless you know the address, it’s not easy to find. Beyond the big wooden-framed doors on the eighteenth floor, it’s like a separate universe. Deliberate funkiness. There’s a big wooden “f” on the wall, in Facebook’s signature font, surrounded by fake grass. A clear glass vase of orange orchids stands on the beech-coloured reception desk. Behind the desk is a high wall covered in a huge, modern mural suggesting flowers and vegetation. The phone doesn’t stop ringing.

The receptionist asks me to electronically sign a five-screen-long non-disclosure agreement. This form effectively stops me sharing “confidential information” gleaned in the upcoming meeting. She gives me a green bottle of sparkling water and directs me to a blue-and-grey couch with colour-coordinated throw pillows of different sizes.

Finally, I’m collected from reception by a public relations representative and shown to a boardroom behind glass doors. Floor-to-ceiling windows look over the city. Garlick greets me. She’s wearing a black cardigan with a white top underneath, and a chunky blue ring on her right hand with a wide resin bracelet to match. Her hair slightly slicked back. Startling blue eyes.

There’s a note of tension in the room and I crack a bad joke about having slept badly because of drunk kids in the city. Both Garlick and the PR person laugh politely and visibly relax. The pair of them reiterate the message they’ve given me via email: Nothing is quotable.

And that’s a crying shame because, one by one, Garlick graciously answers every single question that I have. In detail. Unlike the prepared statements Facebook sends me both before and after this meeting, this face-to-face conversation shows Garlick to be authentic. She’s passionate about her work and believes in it. She’s thoughtful and well informed. Her answers — which I’m unable to share, of course — go a long way to making Facebook’s case in relation to what the company is actually doing about cyberhate.

Once a journalist, always a journalist — even in a situation where you can’t quote. During the meeting I take down more than a thousand words of notes. Towards the end of our allocated time, I’m told that if I wish to have these same questions answered officially, I need to send them again via email (for the third time).

Facebook brands itself as a place “where people from all over the world can share and connect.” Clearly, journalists are not the people it has in mind. On the contrary, this rigmarole leaves me with a distinct, if unprovable, hunch: this unwise attempt at containing the media — and, by proxy, the public — is deliberate. Is it a directive from the California head office? What other sensible reason could there be for this behaviour?

After witnessing the company’s responses at the Senate hearings and then communicating with it at length myself, it’s hard not to conclude that this obstruction is by design: There’s nothing to see here. Move along.

Facebook, for example, has consistently denied being a publisher, instead claiming “we are in the business of connecting people and ideas.” “On Facebook,” the company submitted to the Senate, “people choose who to be friend [sic] with, and which Pages or Groups to follow. Consequently, people make a decision about the types of content that they can see in their News Feed… We do not write the posts that people read on our services.”

The issue of whether social media companies are or aren’t publishers is a thorny one, and it’s being debated around the world. The reason for this is that traditional publishers — like newspapers and TV stations — can be held responsible for false, misleading or malicious content shared on their platforms.

Writing in the Guardian, Emily Bell, director of the Tow Center for Digital Journalism at Columbia University, articulated what many people already believe. Social media companies are publishers because they “monetise, host, distribute, produce and even in some cases commission material.” Teasing out why this is a particular problem when it comes to news and information, she continues: “By acting like technology companies, while in fact taking on the role of publishers, Google, Facebook and others, have accidentally designed a system that elevates the cheapest and ‘most engaging’ content at the expense of more expensive but less ‘spreadable’ material.”

Profit, of course, is a major driver for tech companies. As Frank Pasquale, professor of law at the University of Maryland and author of The Black Box Society, explains: “Very often, hate, anxiety and anger drive participation with the platform. Whatever behaviour increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases.”

When I repeatedly put this to Facebook, staff there furiously deny it. One spokeswoman tells me: “Those comments are insulting to the thousands of people who’ve come to work every day [at Facebook] for the last fourteen years — policy people, leadership, safety experts, engineers — to make the platform safer. It’s bad long-term business if people have had a bad experience on our service. If people don’t find Facebook useful, they are not going to use it. Our long-term future is to continue delivering a service that people enjoy and feel safe using.”

Still. Even Twitter’s current CEO, Jack Dorsey, understands his platform has been failing to keep people safe. In a series of tweets from 2017, he said, “We see voices being silenced on Twitter every day. We’ve been working to counteract this for the past two years… We prioritised this in 2016. We updated our policies and increased the size of our teams. It wasn’t enough.” •

This is an edited extract from Troll Hunting, published last month by Hardie Grant.