If it seems like we’ve been hit by a deluge of information on Covid-19, that’s because we have. The journal Nature suggests that the number of research papers on Covid-19 is doubling every two weeks, and by yesterday the World Health Organization’s repository of global literature on Covid-19 included more than 15,000 items. That’s quite apart from amateur and professional journalism, not to mention social media, where 630 million tweets with the hashtag #Coronavirus or #Covid19 have appeared (though the daily rate has dropped from its peak of twenty million on 13 March to around five million).
This information overload provides fertile ground for misinformation and conspiracy theorists. An unholy alliance is emerging of anti-vaxxers, China hawks and gun-toting libertarians, ready to seduce the credulous and the disaffected. Marlon Brando’s Johnny in The Wild One said it best: asked “What are you rebelling against?” he replied, “Whadda you got?”
One driver of this truly revolutionary explosion of scientific literature has been the changing ecosystem of academic publication. When journals as printed and bound artefacts lost their salience, their publishers moved online, relying on legacy reputations buttressed by prestigious editorial boards. It didn’t take long for pay-to-publish outfits to emerge, recognising there was money to be made by exploiting the publishing imperative among the oversupply of university staff. Distinctions between legitimate and predatory publishers became increasingly hard to navigate.
But in the last few years the environment has changed again. Bibliometric and altmetric measures of citations and impact, combined with legitimate publishers’ databases, are increasingly used to determine access to research funding and academic promotion.
If peer-reviewed and quality-controlled journals are the high-end retail outlets for these research products, the warehouses are the preprint servers. In the natural sciences, the arXiv (pronounced archive) has been around for thirty years. In other fields, these repositories are much newer. MedRxiv, where much of the Covid-19 literature has appeared, was launched less than a year ago. With its sister repository, bioRxiv, it lists 3172 Covid-19 articles, a figure that’s growing rapidly: a quick count shows twenty-nine articles added on 10 May and forty-three the day before.
These preprint servers are not a complete free-for-all. They are hosted by reputable institutions and are moderated, at least to a degree, and sorted into relevant subject categories. But the articles are posted before peer review, and many will never make it through that process. The urgency of slowing the Covid-19 epidemic and staving off deaths makes it very tempting to scour these servers for the latest research, but that comes with the risk of spurious results and junk science.
Artificial intelligence, or AI, proposes a way of identifying the best of this research. Scite.ai is a new tool powered by machine learning that trawls through mountains of scientific literature and not only counts the number of citations in other papers but also tracks whether subsequent mentions support or contradict the original paper. Some papers are widely cited because they are the best example of what not to do — Scite.ai enables a rapid sifting of right from wrong.
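The mechanics of that sifting can be sketched in miniature. Scite.ai's actual system uses proprietary deep-learning models trained on millions of full-text papers, so the following is only an illustrative toy: it labels each citing sentence by looking for cue phrases, then tallies how many citations support, contradict, or merely mention a paper. All cue lists and function names here are invented for illustration.

```python
# Toy citation-stance tallier. A hypothetical stand-in for what a tool like
# Scite.ai does at scale with machine-learned classifiers.

SUPPORT_CUES = {"confirms", "replicates", "consistent with", "supports"}
CONTRADICT_CUES = {"contradicts", "fails to replicate", "inconsistent with", "refutes"}

def classify_citation(sentence: str) -> str:
    """Label a citing sentence as supporting, contradicting, or merely mentioning."""
    text = sentence.lower()
    if any(cue in text for cue in CONTRADICT_CUES):
        return "contradicting"
    if any(cue in text for cue in SUPPORT_CUES):
        return "supporting"
    return "mentioning"

def citation_report(citing_sentences: list[str]) -> dict[str, int]:
    """Tally stance labels across every sentence that cites a given paper."""
    report = {"supporting": 0, "contradicting": 0, "mentioning": 0}
    for sentence in citing_sentences:
        report[classify_citation(sentence)] += 1
    return report

citations = [
    "Our data are consistent with the transmission model of Smith et al.",
    "This result contradicts the earlier serology estimate.",
    "See Smith et al. for a review of preprint moderation.",
]
print(citation_report(citations))
# {'supporting': 1, 'contradicting': 1, 'mentioning': 1}
```

A paper whose report skews heavily towards "contradicting" is exactly the widely cited "what not to do" case the text describes, which simple citation counts alone would miss.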
Galen and Avicenna would have recognised trial and error as basic to the scientific method. AI makes possible a major shift in this paradigm: for example, chemists can now use retrosynthesis methods to deconstruct and then reconstruct molecules, and potentially engineer drugs with very precise targets — blocking virus replication, for example.
The contrast couldn’t be greater with the fabled serendipitous breakthrough that dominates how we imagine drug discovery. (“On the morning of Friday 28 September 1928 Alexander Fleming finds that the mould growing on a petri dish accidentally left on a shelf kills bacteria, and so penicillin is born.”) AI wants to do away with this image altogether. The slightly ominous-sounding BenevolentAI is a case in point: setting its algorithms loose on a vast database of potential drugs, it pinpoints arthritis drug baricitinib as the most promising compound to combat SARS-CoV-2, and propels it into human trials.
The same shift is happening in vaccine development. Ever since Edward Jenner discovered that a dab of cowpox could be used to fight smallpox, vaccination has worked with two basic strategies: using either a killed version of the virus in question or a live virus weakened just enough to provoke the immune response without causing the full-blown disease.
For SARS-CoV-2, new techniques are in play to engineer vaccines from first principles: rather than presenting the immune system with a whole killed or weakened virus, gene-based vaccines deliver a snippet of the virus's genetic code so that the host's own cells manufacture a viral protein and prime the immune system against the real thing.
In the early AIDS days, gene-based techniques were in their infancy. I vividly remember a conversation in 1993 with a friend who, with a mix of hope and desperation, was betting his last throw of the dice on gene-splicing techniques he had heard about from Canada. He never got to try them.
At the time, antibody tests using a pinprick of blood were the stalwarts of HIV diagnosis. PCR assays, which amplify genetic fragments from a sample, were useful for confirming borderline cases, but they were cumbersome and prohibitively expensive.
That world has been turned on its head. PCR and other gene-amplification methods are the readily available go-to options for a test. Once the virus was isolated it could be plugged into the machines and reliable tests set up in a matter of days, as Victoria showed. Antibody testing has proved much more difficult, partly because we are still learning about the nature and timing of the antibody response to Covid-19.
To be useful, diagnostic tests need to clear two thresholds that pull in opposite directions: sensitivity, the test's ability to detect the virus whenever it is present; and specificity, its ability to react only to the virus in question and not to similar signals. Insufficient sensitivity produces false negatives; insufficient specificity produces false positives. Fast and dodgy operators thought they could bang together an antibody test for Covid-19 and rush it to market, but getting that balance right turned out to be much trickier than expected.
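The two measures can be pinned down with a little arithmetic. The sketch below uses entirely hypothetical validation numbers for an imagined antibody test; the formulas themselves are the standard definitions.

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Share of genuinely infected people the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Share of uninfected people the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical validation run: 100 infected and 900 uninfected samples.
tp, fn = 90, 10    # infected: 90 caught, 10 missed (false negatives)
tn, fp = 855, 45   # uninfected: 855 cleared, 45 wrongly flagged (false positives)

print(f"sensitivity: {sensitivity(tp, fn):.0%}")   # sensitivity: 90%
print(f"specificity: {specificity(tn, fp):.0%}")   # specificity: 95%
```

The trade-off bites hardest when few people are actually infected: screen 10,000 people of whom one per cent carry the virus with this test, and you get 90 true positives but about 495 false ones, so most positive results are wrong. That is why a rushed test with merely decent specificity can do real damage.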
Remarkably, CRISPR technology — the gene-editing technique that allows a precise slice of DNA to be cut out and replaced — is coming to the rescue. Thirty years ago, techniques like this were the very definition of cutting-edge science. Today, a CRISPR-based test stands ready to transform Covid-19 diagnostics, with the promise of a test simple and cheap enough for home use.
When the human genome was first fully described after a thirteen-year, multimillion-dollar project, it was hailed as the dawn of a new era of precision medicine. But gene therapies didn’t start rolling out the door, and the hype faded. Maybe it was a slow burn.
Covid-19 is perhaps the first pandemic with a genomic response: from epidemiology to diagnostics, to therapeutics, to vaccines, the virus’s genome has been front and centre. This pandemic is the crucible in which these genetically based and rationally designed approaches fuelled by AI will prove their mettle — or not.
For an alternative to these ponderings on science, here are a couple of great reads from the last few days.
Rutger Bregman’s new book Humankind is out in English next week, and as a teaser he offers the uplifting tale of a real-world Lord of the Flies in which a group of shipwrecked boys descended not into chaos but rather into amiable cooperation.
One of the smartest of development economists, Dani Rodrik, has considered what a better globalisation could look like. When Australia has not been too busy being Washington’s poodle, it has been a leading advocate of a rules-based global order. Those rules will need to be redrawn in a post-Covid world, and Rodrik provides a good pointer. •