Inside Story

Privacy by design

Books | Badly designed technologies can trap users and thwart their understanding, argues lawyer–scientist Woodrow Hartzog. Good design can do the opposite

Megan Richardson | 4 July 2018

Nowhere to hide: Facebook’s Mark Zuckerberg arriving at a joint hearing of the Commerce and Judiciary Committees in Washington in April. Pablo Martinez Monsivais/AP Photo

Privacy’s Blueprint: The Battle to Control the Design of New Technologies
By Woodrow Hartzog | Harvard University Press | $67 | 322 pages


In City of Bits the great urban designer William Mitchell prophesied that we would become cyborgs in the age of increasing automation, gradually ceding control over our bodies and our mental states in the face of the enormous organising ability and power of machines. Writing in 1996, he didn’t use the language of “privacy” but instead talked about how the ability to form and maintain an identity of our own would be a casualty of the mechanisation of every aspect of our lives, including how our basic information (those “bits”) is processed.

“Privacy” has become an umbrella term for this idea in some quarters, in the sense of describing the “rules and norms [governing] personal information,” tethered to notions of “trust, obscurity, and autonomy,” as Woodrow Hartzog puts it in his fine new book, Privacy’s Blueprint. I might have reservations about the vocabulary: as the New Yorker’s Louis Menand wrote recently, for many the word “privacy” still signifies that “it’s nobody’s business,” and we may need a better language to talk about “the freedom to choose what to do with your body, or who can see your personal information or who can monitor your movements and record your calls.” Even so, I can’t help but be impressed by Hartzog’s central argument: that we should stop thinking of machines as inevitable threats to privacy (and, I’d add, to identity more generally) and start thinking about their capacity to enhance our freedom. The trick is good design, and it is here that legal policy-makers should focus more attention.

In Western democracies we have become inured to the idea that new technologies will enhance some freedoms while inhibiting others, and accustomed to seeing the law pulled in to even up the balance when social norms and markets fail. The roots of this approach go back a long way. The printing press might have enabled newspapers, magazines, books, pamphlets and other literature to proliferate in the centuries after its invention — contributing to what Jürgen Habermas calls the “public sphere” — but it also gave control of the dissemination of knowledge to individual printers and publishers, provoking battles over the control of personal information.

When the Grub Street press published revealing letters, stories of clandestine encounters and outrageous biographical claims for its eager audiences, calls for legal intervention inevitably followed. Legislators and judges crafted (or extended) a range of responses, including statutory copyright, a common law property right in unpublished works, and defamation law. By the early nineteenth century, one of the new measures — the equitable action for breach of confidence — was already being thought of as a means for protecting personal secrets from unwanted publicity.

Soon, these legal protections were expanded to deal with new visual technologies such as engraving and etching, which were especially popular in the Victorian era. In that era’s leading privacy case, Prince Albert v Strange, both the property right in unpublished works and breach of confidence were relied on to stop publication of an unauthorised exhibition and accompanying catalogue of domestic etchings made by Queen Victoria and Prince Albert that had been obtained by the defendant, William Strange, using “surreptitious and improper” means. In this case, Lord Cottenham LC expressly referred to “the right [to] privacy” as “the right invaded.”

But it was the remarkable rise of photography from the mid nineteenth century, expanding the possibilities for precisely recording intimate visual displays, that had the most tangible effect on modern thinking about the law and privacy (and identity more broadly). “Instantaneous photographs” — no doubt a reference to George Eastman’s popular portable, easy-to-use camera, invented in the 1880s — along with “newspaper enterprise” and the public’s taste for “gossip,” prompted Samuel Warren and Louis Brandeis to argue in 1890 that “the right to privacy,” a right to be “let alone” as they termed it, required an urgent legal response. Many of the reforms to privacy law during the twentieth century can be traced back to that rallying call.

Nevertheless, as Hartzog observes, the situation seems to have become unbalanced by the apparently open design of the internet and related technologies in the late twentieth century and into the current one. He is not the first to make this observation. In 1999, in Code: And Other Laws of Cyberspace, Harvard Law School’s Lawrence Lessig identified the internet’s powerful open architecture as a design feature with radical consequences for those seeking to control the circulation of personal information, including through traditional copyright, defamation and privacy laws. In 2010, Cornell Tech’s Helen Nissenbaum argued in Privacy in Context that technologies of “pervasive surveillance, massive databases, and lightning-speed distribution of information across the globe” pose serious threats to privacy. And George Washington University Law School’s Daniel Solove has repeatedly demonstrated how the massive sharing of personal information permitted by the online environment is only partly and inadequately ameliorated by the law.

What is interesting, however, about Hartzog’s work is his focus on what he calls “abusive design.” He focuses on the confusing, manipulative and generally untrustworthy forms of conduct that have grown up around institutional dealings with personal information, aided by current and emerging technologies. In the past, legal attempts have been made to deal with the most egregious problems of untrustworthy conduct, but these actions have typically been launched after the event and have relied on updated versions of the traditional doctrines, supplemented by twentieth-century legal innovations such as consumer protection, data protection standards and a general-purpose tort of negligence crafted and refined in response to the exigencies of the postwar and especially post-computer era. The question is whether these laws and doctrines can still work in the twenty-first century.

As Hartzog points out, some positive signs can be found in the US Federal Trade Commission’s investigations of the practices of Facebook, Google and other companies, relying on section 5 of the Federal Trade Commission Act, which proscribes unfair or deceptive practices in trade. Further, inquiries by data protection authorities in Britain and Europe have relied on data protection laws that may have suddenly become much stronger with the implementation of the European Union’s General Data Protection Regulation, or GDPR, in May this year. Cases launched by individuals and through class actions — including Vidal-Hall v Google in Britain, Fraley v Facebook in the United States and Jane Doe v Australian Broadcasting Corporation in Australia (all eventually settled) — have also served to reinforce the idea that older laws and doctrines geared to controlling unwanted publicity, careless behaviour and deceptive practices may still be useful in the digital age.

And so it goes on: revelations of Sony’s alleged negligence in failing to protect employees against the hacking of its system in 2014 (possibly in protest at its planned release of The Interview, a film mocking the North Korean dictator Kim Jong-un); the spectacular compromise, by a 2015 hack, of the dating website Ashley Madison’s advertised promise of secrecy for its members; and now the Cambridge Analytica debacle. All of them have added to questions about whether individuals can withstand covert manipulation even in voting decisions, and prompted further official inquiries and further class actions. But the fact remains that the new communications technologies of the 1990s and 2000s have exacerbated the tension between fostering free, transparent information and debate, on the one hand, and maintaining space for individuals to control “their” personal information, on the other. And along the way they have also worked to undermine trust.


So what is Hartzog’s answer to this problem? As a professor of computer science as well as law at Northeastern University, he is refreshingly concerned with how technologies work as well as with their effectiveness in communicating their functions to users. As he puts it, “people should design objects that have visible clues as to how they work.” Here he makes excellent use of the wonderful insights of design researcher Donald Norman, who helped to pioneer the idea of human-centred design in the 1990s and continues to write on the topic — and particularly Norman’s argument that “well-designed objects are easy to interpret and understand” in contrast to “poorly designed objects” which can be “difficult and frustrating to use. They provide no clues — or sometimes false clues. They trap the user and thwart the normal process of interpretation and understanding.”

Moreover, as a lawyer, Hartzog is also thinking about the interactions of design and law and the prospect of creating a “design agenda for privacy law.” The language suggests that his focus will be on what former Ontario privacy commissioner Ann Cavoukian has labelled “privacy by design.” This idea — which is becoming popular in privacy, consumer protection and data protection circles (and is now explicitly enshrined in the GDPR) — is that the law should foster privacy by placing design obligations on those responsible for technologies. It is itself an example, I would suggest, of human-centred design, given the large numbers of people who claim they want more privacy and data protection than they currently receive.

Not surprisingly, Hartzog adopts a very expansive idea of what privacy by design might amount to in practice. He notes that all kinds of technological design, from “privacy settings” to cookies, can promote or inhibit the ability of users to control how their information is collected, stored and used. Technologies can also communicate their functions in more or less open and transparent ways. He argues convincingly that law-makers could do more to take account of these features, working across “a spectrum of soft, moderate and robust responses” to achieve their regulatory goals. And he generously acknowledges that regulatory agencies such as the US Federal Trade Commission are already adopting strategic approaches to regulating technology companies, for instance by encouraging them to develop technological solutions and trying to ensure that they “are being honest with people.”

Arguably, the same might be said of the GDPR’s provisions for “data protection by design,” notice and consent, and the “right to be forgotten,” which (to an extent at least) also encourage those in control of the technology to elaborate their regulatory standards. And, even before the GDPR, we have the example of Google’s development of a technological system to deal with de-indexing requests prompted by the European Court of Justice’s “right to be forgotten” decision in Google v AEPD and Costeja González, which ruled that “inadequate, irrelevant or excessive” data could be subject to de-indexing on application of the data subject. Nevertheless, as Hartzog illustrates compellingly with numerous examples of absent, inadequate or failed efforts at regulation, there is still great scope for improvement.

Australia has much to learn from this richly informative and insightful book. Our regulatory agencies could benefit from paying close attention to Hartzog’s arguments for more creative, strategic approaches to the challenge of regulating to achieve socially desirable outcomes in the current technological environment, and from reading about the experiments that are already taking place in other jurisdictions. These include the work of the US Federal Trade Commission under a provision that is not so very different from section 18 of the Australian Consumer Law, proscribing misleading or deceptive conduct in trade: a provision that in other contexts has become a staple technique for ensuring that minimum levels of transparency and trust are maintained.

Indeed, given the central problem Hartzog is describing, the Australian Competition and Consumer Commission should arguably be taking a more central role in regulating markets that are concerned with the exchange of personal data for goods or services. Perhaps if Australians ever do get the consumer data rights the government has promised in response to the Productivity Commission’s recommendations in its recent data availability and use report, we will see the ACCC taking a more active role. Then we would not have to depend so much on a Privacy Commissioner armed with a rather limited and qualified Privacy Act to do most of the work of a data protection authority.

But we shouldn’t forget the traditional role of the common law in regulating markets for personal data, and the instrumentality of individuals and groups in pursuing their own protection, including via the rapidly growing technique of the class action. Privacy scholars in Australia (myself included) have spent a lot of time agonising over whether we should have a statutory privacy tort. But, as Hartzog points out, old-fashioned values such as being truthful, taking due care, and generally acting in ways that inspire trust and confidence are still important in dealings with personal information in the current technological environment. Our traditional doctrines may still have much life in the hands of determined individuals willing to pursue claims individually or collectively, supported by clever lawyers. And Australian courts, which have shown themselves adept at developing “their” law creatively to meet new situations and circumstances, may still play a central role in protecting the qualities of being human in the brave new world of automation. ●
