John Naughton 

For all the hype in 2023, we still don’t know what AI’s long-term impact will be

As with the printing press and the dotcom boom, initial frenzy and speculation obscure the lasting legacy of new technologies
  
The Nvidia HGX H100 chip, designed for generative AI, is being bought in huge quantities by companies such as Microsoft for $30,000 each. Photograph: AP

“Innovation,” wrote the economist William Janeway in his seminal book Doing Capitalism in the Innovation Economy, “begins with discovery and culminates in speculation.” That just about sums up 2023. The discovery was AI (as represented by ChatGPT), and the speculative bubble is what we have now, in which huge public corporations launch products that are known to “hallucinate” (yes, that’s now a technical term relating to large language models), and spend money like it’s going out of fashion on the kit needed to make even bigger ones. As I write, I see a report that next year Microsoft plans to buy 150,000 Nvidia chips – at $30,000 (£24,000) a pop. It’s a kind of madness. But looked at through the Janeway lens, ’twas ever thus.

“The innovations that have repeatedly transformed the architecture of the market economy,” he writes, “from canals to the internet, have required massive investments to construct networks whose value in use could not be imagined at the outset of deployment.” Or, to put it more crudely, what we retrospectively regard as examples of technological progress have mostly come about through outbreaks of irrational exuberance that involved colossal waste, bankrupted investors and caused social turmoil. Bubbles, in other words. In recent times, think of the dotcom boom of the late 1990s. Or in earlier times, of the US railway boom of the 1850s onwards in which no fewer than five different railway lines were built between New York and Chicago. In both bubbles, an awful lot of people lost their shirts. But, as the economist Brad DeLong memorably pointed out in his 2003 Wired article Profits of Doom, “Americans and the American economy benefited enormously from the resulting network of railroad tracks that stretched from sea to shining sea. For a curious thing happened as railroad bankruptcies and price wars put steady downward pressure on shipping prices and slashed rail freight and passenger rates across the country: new industries sprang up.”

So the lesson of history in relation to tech bubbles comes down to a question: what will be left after the bubble bursts? Because they always do. Which neatly brings us back to the current madness about AI. Sure, it’s wonderful that it enables people who are unable to string sentences together to “write” coherent prose. And, as Cory Doctorow observes, it’s great that teenagers playing Dungeons & Dragons can access an image generator that creates epic illustrations of their characters fighting monsters – even if the images depict “six-fingered swordspeople with three pupils in each eye”. And that the tech can do all of the other tricks that are entrancing millions of people – who are, by the way, mostly using it for free. But what of lasting value will be left? What will the historians of the next century regard as the enduring legacy of the technology?

At the moment, it’s obviously impossible to say, not least because we always overestimate the short-term impacts of novel technologies while grossly underestimating their long-term effects. Imagine someone trying to assess the civilisational impact of printing in 1485, 30 years after Gutenberg printed his first Bible. Nobody knew then that it would undermine the authority of the Catholic church, fuel the thirty years’ war, enable the rise of what became modern science and the creation of new industries and professions, and even, as the cultural critic Neil Postman observed, change our conceptions of childhood. Put bluntly, print shaped human society for 400 years. If this machine-learning technology is as transformative as some people are claiming, its long-term impact may be just as profound as that of print.

So where might we look for clues as to how that might play out? Three areas are worth thinking about. The first is that the technology, flawed as it is at present, looks like providing a significant augmentation of human capability – a new kind of “power steering for the mind”. But of course that also means the augmentation of warped minds. Second, there’s the question of how sustainable it will be, given its insatiable demand for energy and for natural as well as human resources. (Remember that much of the output of current AI is kept relatively sanitised by the unacknowledged labour of poorly paid people in poor countries.) Third, how quickly – if ever – will it make economic sense? At the moment there’s an assumption that public, government and tech-industry exuberance will automatically translate into widespread deployment of the technology, and real returns on the stupendous costs of running the machines. If you believe observers such as the boss of Accenture, the global consultancy company, that might turn out to be wishful thinking. “Most companies,” she said this month, “are not ready to deploy generative artificial intelligence at scale because they lack strong data infrastructure or the controls needed to make sure the technology is used safely.” Yep. So here’s hoping for a more realistic new year!

What I’ve been reading

Chips with everything
A Ball of Brain Cells on a Chip Can Learn Simple Speech Recognition and Math is an article on the site Singularity Hub. It’s a startling claim but seems to be backed up by a paper in Nature.

Chile’s struggle
Phenomenal World has a fascinating account by Ignacio Silva Neira of Chile’s fraught quest for a new constitution: Constitutional Odysseys.

Yuletide immemorial
Why We Celebrate Christmas On December 25th is an interesting essay on 3 Quarks Daily. It’s all to do with something that happened 4.5bn years ago.
