How dangerous is AI?


SOURCE: PROSPECTMAGAZINE.CO.UK
DEC 24, 2021

This year’s Reith Lectures force us to confront the unknowability of the hyper-intelligent machine

The future of humankind beyond this century arguably depends on how we manage two abstract resources: energy and information. Information technologies present an even bigger unknown than the climate crisis, both in terms of opportunities and hazards. Better computing resources will help in solving all manner of problems. But as these resources are increasingly boosted by the algorithms designated today as “artificial intelligence,” we need to wonder too what manner of machine mind we are building.

That’s the question asked in this year’s Reith Lectures, which were delivered by British computer scientist Stuart Russell of the University of California at Berkeley. Russell has been one of the most prominent voices from within the discipline calling for consideration of how AI can be developed safely and responsibly, and in his 2019 book Human Compatible he laid out his vision of how that might be achieved.

The first lecture, which I attended, was hosted at the Alan Turing Institute in the British Library, named after the mathematician often credited with launching the very notion of “machine intelligence” created by digital computing. Turing’s 1950 paper “Computing Machinery and Intelligence” argued that the goal of making a machine that “thinks” was both feasible and desirable. But Turing also recognised that success would have profound implications for human society. “Once the machine thinking method had started, it would not take long to outstrip our feeble powers,” he said at the time. “If a machine can think, it might think more intelligently than we do, and then where should we be?” At some stage, he concluded, “we should have to expect the machines to take control.”

That wasn’t a new fear (if indeed fear it was). The annihilation of humankind by robots was portrayed in Karel Čapek’s 1920 play R.U.R., which gave us the word “robot” (from the Czech robota, meaning drudgery or serf labour). Turing himself cited Samuel Butler’s 1872 utopian novel Erewhon, in which machines are banned, lest we end up “creating our successors in the supremacy of the Earth.”

In 1965 the computer scientist Irving John (“IJ”) Good, one of Turing’s colleagues during his wartime code-breaking work at Bletchley Park, said that if we succeeded in making an “ultraintelligent machine” that could “far surpass all the intellectual activities of any man,” there would be an “intelligence explosion.” Such a device, said Good, would be “the last invention that man need ever make.” And if movie fantasies like the Terminator series are to be believed, it could end up being the last invention we ever can make.

Such disaster-mongering is not confined to Hollywood. In 2014, Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race,” while Elon Musk has warned that AI could be “our biggest existential threat.” There is no particular reason to give their views much credence, as neither is an expert in AI. But Russell treads a careful path in his lectures in asking how real such dangers are, being neither the over-optimistic Pollyanna nor the doomsaying Cassandra.

Today’s AI is not obviously a forerunner of some apocalyptic Terminator-style Skynet. Most artificial intelligence of the kind used for, say, analysing big data sets, image and voice recognition, translation and solving complex problems in finance, technology and game-playing, is based on the technique called machine learning. It uses networks of interconnected logical processing units that can be trained to detect patterns in the input data (such as digitised images). Each training example elicits a slight readjustment of how the units in the network “talk” to each other, in order to produce the correct output. After perhaps several thousand such examples, the system will reliably (if all goes well) generate the right output for input data that it hasn’t seen before: to positively identify an image as a cat, say. For a game-playing AI such as DeepMind’s AlphaGo, which defeated the world-class Go professional Lee Sedol in 2016, the correct output is a move that will increase the chances of winning.
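To make that training loop concrete, here is a minimal sketch in Python (using NumPy): a tiny two-layer network whose connection weights are nudged, pass after pass over the data, until it classifies inputs it has never seen. The toy task, network size and learning rate are illustrative assumptions of mine, not details drawn from the lectures.

```python
# A minimal sketch of the machine-learning loop described above. The toy
# task (separating two clusters of 2-D points) and all dimensions are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in two clusters, labelled 0 or 1.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# A small network: 2 inputs -> 8 hidden units -> 1 output probability.
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(500):
    # Forward pass: compute the network's output for every example.
    h = np.tanh(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()  # predicted probability of class 1

    # Backward pass: each pass over the examples elicits a slight
    # readjustment of the weights, in the direction that makes the
    # correct outputs more likely (gradient descent on cross-entropy).
    grad_out = (p - y).reshape(-1, 1) / len(X)
    gW2 = h.T @ grad_out
    gb2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h**2)   # backprop through tanh
    gW1 = X.T @ grad_h
    gb1 = grad_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# If all goes well, the trained network labels unseen points correctly.
test = np.array([[-1.2, -0.8], [0.9, 1.1]])
print(sigmoid(np.tanh(test @ W1 + b1) @ W2 + b2).ravel())  # ~[0, 1]
```

Real systems differ mainly in scale: the same adjust-the-weights-to-reduce-error loop, applied to networks with millions or billions of connections.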

As such examples show, this approach can be extremely powerful. Language-translating AI such as Google Translate is of course far from perfect, but it has come a long way, thanks in particular to an advance in the underlying technology around 2015, and now generally supplies a serviceable result. Combined with AI voice recognition, such systems are already enabling real-time translation of spoken language.
