It all started with the headline over an entry in Charlie Warzel’s Galaxy Brain newsletter in the Atlantic: “Where Does Alex Jones Go From Here?” This is an interesting question because Jones is an internet troll so extreme that he makes Donald Trump look like Spinoza. For many years, he has parlayed a radio talkshow and a website into a comfortable multimillion-dollar business peddling nonsense, conspiracy theories, falsehoods and weird merchandise to a huge tribe of adherents. And until 4 August he had got away with it. On that day, though, he lost an epic defamation case brought against him by parents of children who died in the 2012 Sandy Hook massacre – a tragedy that he had consistently ridiculed as a staged hoax; a Texas jury decided that he should pay nearly $50m in damages for publishing this sadistic nonsense.
Warzel’s newsletter consisted of an interview with someone who had worked for the Jones media empire in its heyday and, as such, was interesting. But what really caught my eye was the striking illustration that headed the piece. It showed a cartoonish image of a dishevelled Jones in some kind of cavern surrounded by papers, banknotes, prescriptions and other kinds of documents. Rather good, I thought, and then inspected the caption to see who the artist was. The answer: “AI art by Midjourney”.
Ah! Midjourney is a research lab and also the name of its program that creates images from textual descriptions using a machine-learning system similar to OpenAI’s Dall-E system. So someone on the Atlantic had simply typed “Alex Jones inside an American office under fluorescent lights” into a text box and – bingo! – the illustration that had caught my attention was one of the images it had generated.
It turns out that the Atlantic is not the only established publication in which the Midjourney tool’s work has appeared. The normally staid Economist, for example, deployed it recently to produce its 11 June cover. This is significant because it illustrates how rapidly digital technologies can make the transition from leading edge to commodification. And as they do so, new fears and hopes rapidly emerge.
Dall-E (the name is a geeky combination of the Pixar character Wall-E and Salvador Dalí) was derived from OpenAI’s pioneering GPT language models, which can generate vaguely plausible English text. Dall-E basically swaps pixels for text and was trained on 400m pairs of images with text captions that were “scraped” from the internet. (The carbon footprint of the computation involved in this process is unconscionable, but that’s for another day.)
When GPT-3 appeared, it sparked a new instalment of the “augmentation v replacement” debate. Was the technology just the thin edge of a sinister wedge? GPT-3 could be used to “write” boring but useful text – stock market reports, say – but it could also generate noxious and apparently credible disinformation that would slip through the moderation systems of social media platforms. It could be used to augment the capacities of busy and overworked journalists or to dispense with them entirely. And so on.
In the event, though, some of the steam has gone out of the GPT-3 controversy (though not out of the question of the environmental costs of such extravagant computing). However much sceptics and critics might ridicule human hacks, the crooked timber of humanity will continue to outwit mere machines for the foreseeable future. Journalism schools can relax.
Dall-E might turn out to be a less straightforward case, though. As with GPT-3, its appearance generated intense interest, perhaps because while most people can write text, many of us cannot draw to save our lives. So having a tool that could enable us to overcome this disability would be quite a boon. You could, say, ask for a portrait of Shrek in the style of the Mona Lisa or Jane Austen as an astronaut and again it would do its best. So one can view it as a welcome augmentation of human capability.
But there is also the “replacement” question. It turns out that it was Warzel himself who had used Midjourney’s bot to create the illustration, rather than licensing one from an image bank or commissioning an artist to create it. Big mistake: an artist spotted the caption and tweeted their shock that a national magazine such as the Atlantic was using a computer program to illustrate stories instead of paying an artist to do that work – and, worse, giving other publications the idea of doing the same. Before you could say “AI”, Warzel found himself playing the villain in a viral tweetstorm. Which was painful for him, but maybe also a salutary warning that publishers who give work to machines rather than creative artists deserve everything they get.
What I’ve been reading
Electric Vehicles Are Way, Way More Energy-Efficient Than Internal Combustion Vehicles is a sobering summary from the Yale Climate Connections project.
The Efficiency Movement is a marvellous essay by Rob Miller on how all modern societies have been shaped by their worship of efficiency.
The Nautilus site has a fascinating article about the evolutionary mysteries of the menopause.