Andrew Martin 

The scientific A-Team saving the world from killer viruses, rogue AI and the paperclip apocalypse

They don't look like Guardians Of The Galaxy-style superheroes. But, says Andrew Martin, the founders of the Centre for the Study of Existential Risk may be all that stands between us and global catastrophe
  
  

Astronomer Royal Martin Rees (with his back to camera), philosophy don Huw Price (right), economist Partha Dasgupta (left) and Skype co-founder Jaan Tallinn in the Great Court of Trinity College, Cambridge. Photograph: Jon Tonks for the Guardian

Cambridge, some time after the end of term. Demob-happy undergraduates, dressed for punting and swigging wine from the bottle, seem not so much to be enjoying themselves as determinedly following rites of passage on the way to a privileged future. I am heading towards the biggest, richest and arguably most beautiful college: Trinity. Of the 90 Nobel prizes won by members of Cambridge University in the 20th century, 32 were won by members of Trinity. Its alumni include Isaac Newton, Wittgenstein, Bertrand Russell and six prime ministers.

The porter's lodge is like an airlock, apparently sealed from the tribulations of everyday life. But inside the college, pacing the flagstones of what is called – all modesty aside – Great Court, are four men who do not take it for granted that those undergraduates actually have a future. They are the four founders of the Centre for the Study of Existential Risk (CSER), and they are in the business of "horizon scanning". Together, they are on alert for what they sometimes call "low-probability-but-high-consequence events", and sometimes – when they forget to be reassuring – "catastrophe".

At their head is a 72-year-old cosmologist, Martin Rees. The honorifics jostle at the start of his name: he is Professor Martin Rees, Baron Rees of Ludlow, OM FRS. He is the Astronomer Royal, a fellow of Trinity, a former master of the college and a former president of the Royal Society. In newspaper articles, he is often described simply as Britain's "top scientist". In 2003, Rees published a book called Our Final Century. He likes to joke that it was published in the US as Our Final Hour because "Americans like instant gratification". In the book, he rates the chances of a "serious setback" for humanity over the next 100 years at "50-50". There is an asteroid named after him – 4587 Rees. I can't help thinking, in light of his apocalyptic concerns, that it would be ironic if 4587 Rees crashed into the Earth.

But these four men are less concerned with acts of God than those we have created ourselves: the consequences of being too clever for our own good. They believe there is a risk that artificial intelligence (AI) will challenge our own. In a talk at a TED conference, Rees invoked another danger: that "in our interconnected world, novel technology could empower just one fanatic, or some weirdo with the mindset of those who now design computer viruses, to trigger some kind of disaster. Or catastrophe could arise from some technical misadventure – error rather than terror."

Rees proudly introduces his colleagues. There is Jaan Tallinn, a meditative Estonian computer programmer and one of five co-founders of Skype. There is a courtly Indian economic theorist, Professor Sir Partha Dasgupta ("Partha's very concerned with inequalities across time," Rees says). And there is Huw Price, a laid-back philosophy don – specifically, the Bertrand Russell professor of philosophy at Cambridge.

The group originated in 2011, when Price and Tallinn met at a conference on time in Copenhagen. Two weeks later Price, who had just taken up his philosophy post, invited Tallinn to Cambridge to meet his new colleague, Rees; all three shared a concern about near-term risks to humanity. "Fate," Price recalls, "was offering me a remarkable opportunity." After a two-year gestation, the CSER gets properly up and running next month. The first of a dozen post-doctoral researchers will be taken on, some of whom will be embedded with science and technology firms. There will be seminars on synthetic biology, decision theory and AI. Already there have been meetings with the Cabinet Office, the Ministry of Defence and the Foreign Office.

As the salutary clock of the Great Court looms behind them, the irresistible image is of our leading brains uniting to save the planet: X-Men: The Last Stand, The Four Just Men, Guardians Of The Galaxy. Between photographs, Rees and Dasgupta chat about the relationship between facts and prejudice in global warming forecasts, and I wonder if they ever talk of anything other than the end of the world.

Before we met, I was sent a vast amount of reading material, including a paper touchingly described by Dasgupta as "somewhat informal", but still containing much algebra. Most strikingly, the material included four worst-case possibilities:

1 The disaffected lab worker

In which an unhappy biotech employee makes minor modifications to the genome of a virus – for example, avian flu H5N1. A batch of live virus is created that can be released via aerosol. The lab worker takes a round-the-world flight, stopping off at airports to release the virus. The plausibility of this scenario is rated as "high", and "technologically possible in the near term". As the CSER men note: "No professional psychological evaluation of biotech lab staff takes place." A similar leakage might also happen accidentally, and I was sent, as a matter of urgency, an article from the Guardian about how researchers at the University of Wisconsin-Madison had modified strains of bird flu to create a virus similar to the 1918 Spanish flu that killed 50m people. The project was condemned as "absolutely crazy" by the respected epidemiologist Lord May.

2 Termination risk

In which pressure to stop climate change results in the adoption of stratospheric aerosol geo-engineering. Global warming is checked, but CO2 levels continue to rise. The geo-engineering then ceases, perhaps as a result of some other catastrophe, such as world war. This triggers what is called "termination risk": the sticking plaster removed, the warming gets much worse, quickly. Half the Earth's population is wiped out. I was advised that geo-engineering appears possible in the near term, but the scientific consensus is against adopting it.

3 Distributed manufacturing

3D printing is already used to make automatic weapons. These weapons can work, but are liable to explode in the user's hand. Still, the refinement of these techniques may allow nanoscale manufacture of military-grade missiles. "This would require a range of technological advances currently beyond us," I was told, "but believed by many scientists to be possible."

4 All of America is turned into paper clips

In which AI undergoes runaway improvement and "escapes into the internet". Imagine a computer swallowing all the information stored in Wikipedia in one gulp and generally gaining access to everything human-made. (The already-emergent "internet of things" means that, increasingly, devices can communicate between themselves; our homes are becoming more automated.) This rogue machine then uses human resources to develop new and better technologies to achieve its goal. I was given the for-instance of paper-clip-making software that turns the whole of America, including the people, into paper clips. This is "not technologically possible in the next 20 years. Estimates range from 20 years to 300 years to never. But the potential negative consequences are too severe not to study the possibility."

This is what these four men are up against.

Rees works from rooms overlooking the cloistered Nevile's Court, which contains the Wren Library, which in turn contains two Shakespeare First Folios. He is small, dapper, silver-haired, and offsets his doomsday scenarios with a puckish humour. He invited me to sit on the couch next to his desk, "where sometimes I sits and thinks, and sometimes I just sits". As I wondered about this quote – Winnie The Pooh? – Rees was off, speaking so rapidly and softly as to be almost thinking aloud. "On a cosmic timescale, human beings are not the culmination, because it's taken four billion years for us to emerge from protozoa, and we know the solar system has more than four billion years ahead of it." Over the next half-hour, he tells me that we are "the stewards of an immense future", and that we have a duty to clear the looming hurdle presented by technological advance. "A few crazy pioneers – and we wish them good luck – might tunnel through the period of danger by establishing colonies in outer space, but nowhere out there is as comfortable even as the South Pole, so we have to solve the problems here."

He moves easily from such vertiginous concerns to survival on the micro level. For example, those weirdos or fanatics leveraged by technology. He believes that "bioterror probably won't be used by extremist groups with well-defined political aims – it's too uncontrollable. But there are eco-freaks who believe there are too many humans in the world." He argues that bio-engineering and AI have "an upside and a dark side. A computer is a sort of idiot savant. It can do arithmetic better than us, but the advances in software and sensors have lagged behind. In the 1990s, Kasparov was beaten at chess by the IBM computer, but a computer still can't pick up a chess piece and move it with the dexterity of a five-year-old child. Still, machine learning is advancing apace."

This brought us to the American futurist, Ray Kurzweil, a man there would be no point in inviting to dinner at Trinity. He is said to live on 150 pills a day, hopeful of surviving until what he calls "The Singularity" – the point at which humans build their last machine, all subsequent ones being built by other machines. A merger of man and machine will then offer the prospect of immortality for those who would prefer not to die. Rees considers Kurzweil "rather wild".

Rees recalled a lecture in which he (Rees) discussed one of the supposed routes to immortality: cryonics, the freezing of the body with a view to future resurrection. Rees had said he would "rather end his days in an English churchyard than a Californian refrigerator". It turned out that someone in the audience had paid £150,000 to have his body frozen; another had paid £80,000 to have just his head frozen – and both were indignant. "They called me a deathist," Rees recalls, laughing, "as if I were actually in favour of death."

I say I was disturbed to discover that Kurzweil is now a director of engineering at Google. "Yes," he says, "but to be fair to Google, they're grabbing everyone in this area who's outside the tent and pulling them into the tent." Does he detect a faultline between gung-ho Silicon Valley and more sceptical Europeans – the old world versus the new? He does not. "They have a can-do attitude, and they've a lot to be proud of." He stresses that CSER wants to work with the technologists, not against them.

A clock chimes: time for lunch – one good thing about Trinity is that it is nearly always time for a meal in the Great Hall. My dining companion is Professor Huw Price. Price grew up in Australia, hence – perhaps – his small gold earring. As I settle down to my quiche, he tells me that a year or so after CSER came together, he realised there might be a tie-in between the kind of philosophical questions he'd been pursuing and questions about AI. Last February, he visited the Machine Intelligence Research Institute in Berkeley, California, "where they are trying to make sure that AI that begins with human-friendly goals will stay friendly when it starts to improve itself. Because the computers of the future will be writing their own programs." I stop him right there. "Why should we let them do that?"

Price seems slightly taken aback by the question. "Well, imagine any scenario where more intelligence is better – in finance or defence. You have an incentive to make sure your machine is more intelligent than the other person's machine." The strategy of these machines, he continues, would depend on what they thought other machines running the same software would do. I interpose another "Why?" and Price takes a long drink of water, possibly processing the fact that he has an idiot on his hands.

These machines would all be networked together, he explains. "Now, if a machine is predicting what another machine with the same software will do, it is in effect predicting what it [the first machine] will do, and this is a barrier to communication. Let's say I want to predict whether I'm going to pick up my glass and have another drink of water in the next five minutes. Let's say I assign a probability of 50% to that. Assigning a probability is like placing odds on a bet about it. Whatever odds I'm offered, I can win the bet by picking up the glass and having a drink. Assigning probabilities to my own acts – there's something very fishy about that."
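Price's point about betting odds can be made concrete with a toy calculation, my own illustration in Python rather than anything from CSER's material: if I assign probability p to "I will drink in the next five minutes" and accept a bet at the fair odds implied by p, I can lock in a profit for any p below 1 simply by picking up the glass.

```python
# Toy sketch (illustrative only): why assigning probabilities to your own
# acts is "fishy". Fair decimal odds implied by probability p are 1/p, so
# a 1-unit stake on "I will drink" returns a profit of (1 - p) / p if I drink.

def guaranteed_profit(p, stake=1.0):
    """Profit from betting on my own act at fair odds, then simply doing it."""
    fair_decimal_odds = 1.0 / p
    return stake * (fair_decimal_odds - 1.0)

for p in (0.9, 0.5, 0.1):
    print(f"p = {p}: profit if I simply drink = {guaranteed_profit(p):.2f}")

# Whatever probability short of certainty I assign, the bet is free money,
# because I control the outcome; the self-prediction undermines itself.
# That is the circularity Price sees for a machine predicting another
# machine that runs the same software as itself.
```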

This leads to the question of how to make the machines see that cooperation might be the rational option. Price asks whether I have heard of the philosophical conundrum the Prisoner's Dilemma. I have not. He explains: "Two prisoners are charged with a crime. They're held in separate cells, and there's no communication between them. They're separately told that if neither confesses, they both get six months. If they both confess, they both get five years. If one confesses and the other doesn't, the one who confesses goes free and the other gets 10 years. So each would be better off confessing, whether or not the other confesses. But the best outcome for both is if they remain silent." For this to happen, each prisoner would have to predict that the other will act in their mutual interest. So the goal is to build this facility into self-programming machines in order to forestall monomaniacal behaviour. To avoid America being turned into paper clips.
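To lay out the numbers Price quotes, here is a minimal sketch in Python (the sentence lengths are his; the code and names are my own illustration, not CSER's). It checks that confessing is each prisoner's best reply whatever the other does, even though mutual silence is the better joint outcome.

```python
# Prisoner's Dilemma payoffs as Price describes them, in months served
# (lower is better). Illustrative code, not from CSER.

SENTENCES = {
    # (my_action, other_action): months I serve
    ("silent", "silent"): 6,      # neither confesses: six months each
    ("confess", "confess"): 60,   # both confess: five years each
    ("confess", "silent"): 0,     # I confess, the other stays silent: I go free
    ("silent", "confess"): 120,   # the other confesses, I stay silent: 10 years
}

def best_reply(other_action):
    """My best response, taking the other prisoner's choice as fixed."""
    return min(("silent", "confess"),
               key=lambda mine: SENTENCES[(mine, other_action)])

for other in ("silent", "confess"):
    print(f"If the other prisoner chooses {other!r}, my best reply is {best_reply(other)!r}")

# Confessing dominates for each prisoner taken alone, yet both confessing
# (60 months each) is far worse than both staying silent (6 months each):
# the monomaniacal logic CSER wants self-programming machines to see past.
```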

By now we are back in Rees's rooms. Price seems to have the run of them, and I am reminded of the Beatles in Help! – all of them living in the same house. Rees returns as I trepidatiously ask Price, "Why can't we just turn the machines off?" There is a mournful silence. "That's not quite so easy when you're talking about a global network," Rees says. "We won't be able to turn them off," Price adds, "because they're smarter than we are, and they're controlling all the switches and all the hardware."

But he concedes that the machine intelligence people at Berkeley have given some thought to this. "One of the strategies is to make sure it [the self-improving machine] is perfectly isolated. I think they call it the oracle model." So, an advisory superintelligent machine; a consultant. "But perhaps," Price muses, "it can do things to persuade humans to give it more direct connection with the world." "Bribery?" I gasp, excited at this new possibility. Price nods. "If it knows enough about human psychology."

Ask Professor Dasgupta for his worst-case scenarios, and he will politely suggest that "these have already happened – in Sudan, in Rwanda". We speak in the Fellows' Parlour, which is less chintzy and sherry-stained than the name suggests. But still there are deep leather armchairs, oil paintings and dainty coffee cups, and I know what Dasgupta means when he says, "We here are having a tremendously good time – those of us who are lucky."

Still, we are "disturbing nature". In sub-Saharan Africa and South Asia, depleted wetlands or forests might cause starvation; they could also trigger viruses, sectarian conflict and over-population (couples having more children to compensate for low survival rates). Dasgupta is concerned about sustainable development. He remains extremely forbearing when I say this has surely been a buzzword for many years. "I think you are right. A lot has been written on the matter, but much of it remains unfocused. What should be sustained, when you think about human welfare, not just now but tomorrow and the day after tomorrow? Turns out it's not GDP that's important, not some notion of social welfare, not life expectancy."

The key criterion, Dasgupta says, is a notion of wealth that includes natural capital. "It's about getting your economics right. We are supposed to be economists; governments are run by economists, but a whole class of assets are missing from their dataset." He is concerned with what he calls "inequality across time", the effects on future generations of our short-sightedness about natural capital. He speaks as a man with three children and five grandchildren. Professor Price also has two grandchildren. When he brought Jaan Tallinn to dinner at Trinity, other diners queued up to congratulate him on founding Skype, since it enabled them to stay in touch with their children and grandchildren.

I mention this to Tallinn and he says, "I sometimes joke that I can take personal responsibility for saving one million human relationships." Tallinn has six children himself. He is 42, and does not look old enough to have six children. When I first saw him in the Great Court, I took him to be a postgraduate, and one of the more modestly dressed ones. He has an interesting and charming manner. When asked a question, he will pause, apparently going into a dream state, muttering, "Yes… thinking." He will then make a rather formal pronouncement. He says things like, "The term 'singularity' is too vague to be used in a productive discussion." He is very fond of the word "heuristics".

Tallinn part-funds a number of horizon-scanning organisations, including a couple at Oxford University, and the Machine Intelligence Research Institute at Berkeley. He explains the challenge of getting rich people to donate to the study of technical risks. "Your evolutionary heuristics come back to the idea of a future roughly similar to what it is now. You give to the community as it is now, to benefit a similar community in the future."

People can't imagine the technological future, in other words, and I tell him I have difficulty with the idea of America being turned into paper clips. Can he come up with a less surreal example of AI run riot? Another pause. Then Tallinn says it might be easier for me to think of AI being tyrannical about control of the environment. "I was born behind the Iron Curtain," he says, "and I remember heated discussions about large-scale terra-forming projects, such as reversing the direction of the river Ob, or putting up large reflectors into space to heat up Siberia." That did the trick. I could see the danger of entrusting such work to AI.

Tallinn says that, for trouble to occur, "The machines don't have to have the opposite interests to ours. We don't exactly have the opposite interests to chimpanzees. However, things are not looking up for the chimpanzees, because we control their environment. Our interests are not perfectly aligned with theirs, and it turns out it's not easy to get interests aligned."

I thought about this as I travelled home on the train, which was full of people playing with their smartphones. I had suggested to Tallinn that people were in love with technology, so believed their interests were perfectly aligned with it. He said: "There is a feedback loop between human values and those technologies. If you create something that improves human life, people will reward you for it, but this is not a universal law of physics. This is something that applies at the start of the 21st century. But artificial intelligence is not going to care about the human market. At the moment, the human is in the loop. That can change."

 
