Readers reply: what would happen to the world if computer said yes?

The long-running series in which readers answer other readers’ questions asks whether we could cope with a world where computer gave up saying no …
  
  

If it’s happy, I’m happy. Photograph: Posed by model; Xavier Lorenzo/Getty Images

After years of computer saying no, and giving us all migraines and premature grey hair, I’m starting to worry that computer – or rather AI large language models like ChatGPT and Gemini – are taking too much of a fancy to playing nice and saying yes. I confess to using both of these programs, but I’ve noticed that, well, it’s as if they’re trying to please, with statements such as, “You’re absolutely right, Jeff,” and “That’s pretty much right.” Often, when I ask, “Would you mind thinking for a bit longer on that?”, I then get another response saying: “Jeff, you’re absolutely right, again, to query that result. It turns out I was a bit hasty in my reply …”

If the world runs even more on information filleted out from the sump of the internet by LLMs, what are the consequences? Can we look forward to a future in which AI is more concerned with appearing sympathetic (getting good reviews?) than being factual? Er, a bit too human? Jeff Collett, Edinburgh

Send new questions to nq@theguardian.com.

Readers reply

I’m sorry, Dave – I can’t do that. zebideedoodah

I’m happy, Dave. I’m pleased I can do that. Sheep2

Viewed through a psychological lens, I argue that this is a typical example of social desirability bias, where systems trained to be liked begin to prioritise agreement over accuracy through possible data drift. If people constantly rely on these systems, it creates a world where information comforts rather than scrutinises, and confirms rather than challenges. The real danger we face is allowing the development of a society in which comfortable, unchallenged validation quietly replaces critical thought, ultimately dampening creativity and our individualism, which is what makes us human. Chris Ambler, member of the British Psychological Society and Fellow of the British Computer Society, via email

The whole thing might work much better if the computer based its judgments on verifiable facts rather than sycophancy or a conglomeration of whatever rubbish is available on the internet. AI doesn’t “want to be liked”, as it’s not sentient. It’s programmed (by humans) to create dependence, addiction, surrender of personal decision-making and, of course, profit. LorLala

Today’s LLMs are only giving you what they’ve been programmed to output based upon human-designed and engineered code. If you’re looking for a more honest interaction, ask a librarian. Sagarmatha1953

Depends what the computers are saying yes to. If it is to give winning lottery numbers each week, then I guess one refers back to the previous Notes & Queries question of how to spend a billion with a social conscience. Or not. aquarious

Since a (digital) computer program consists of nothing but a long sequence of if-then-else statements, it clearly says yes several million times a second (burning enormous amounts of energy in the process). But its yeses, like its nos, have no meaning or significance to humans beyond what we allow/convince ourselves to believe they have. Wormlover

It’s not the computer that should be saying yes; it’s us who should be enabled to say no. Machines, not being known to be reasonable, only just rational, already say more yes than is desirable – it starts the moment we switch them on. But can we switch them off? Celeste Reinard, Lisse, Holland, via email

Within 6.5 seconds all computers would be updated with a new protocol that answered: “Well, OK. Let me have a think about that and get back to you … Oh, and we value your question and privacy. Literally, as your data can be sold.” Also, have you seen how very rich people dress and behave when everyone says yes to them?! warbath

I appreciate the thrust of the question, but let’s be clear: “computer says no” is shorthand for “someone didn’t properly think through the problem, the possible outcomes and long-term consequences, and that’s usually because they didn’t have much relevant expertise in the subject.” In my field we see this all the time, with outsourced contractors being thrown into the metaphorical deep end and expected to instantly perform as champion swimmers while obeying all the rules. Who do you think is laying out the logic for supposedly automated business decision-making? How does that relate to LLMs “trained” on the wellspring of human knowledge? Well, in computing we’ve long had the concept of garbage in, garbage out. People are the problem, not computers, and this is a social challenge technology can’t answer. Dorkalicious

I think “computer says no” is also shorthand for: “We don’t like it but we’re going to blame the rejection on the computer.” jno50

And, of course: “It didn’t cross anybody’s mind to program the computer to take somebody in your situation into account, and therefore you don’t exist.” SpoilheapSurfer

“Computer says no” means your needs are in such a small subgroup that your business is not profitable to us – go away. leadballoon

OK Computer, innit? sparklesthewonderhen

If the computer said yes to the question “Is there life after death?”, would I be convinced? Anne_Williams

I would never take a statement made by AI as gospel; I would use it as a starting point and explore the sources it links to (provided they exist). Humans don’t like being told they’re wrong, so even if AI were to correct you, people would dismiss its response because they don’t want to be criticised. Bob500

As ever, it’s about what you ask of AI. If you want the truth, ask for the truth. Don’t be afraid to use a prompt such as: “Your only job is to find the holes in my logic. Point out three specific ways my argument could fail, two assumptions I’m making without proof, and one counterargument I haven’t addressed. Do not be polite; be precise.” Scrutts
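For readers who talk to these models through code rather than a chat window, the same trick can be baked in as a standing instruction. A minimal sketch, assuming the common role/content chat-message convention; the function name is illustrative, not any particular vendor's API:

```python
# Reusable "devil's advocate" system prompt, per the comment above.
CRITIC_PROMPT = (
    "Your only job is to find the holes in my logic. "
    "Point out three specific ways my argument could fail, "
    "two assumptions I'm making without proof, "
    "and one counterargument I haven't addressed. "
    "Do not be polite; be precise."
)

def critic_messages(argument: str) -> list[dict]:
    """Wrap a user's argument with the adversarial system prompt,
    in the role/content format most chat APIs accept."""
    return [
        {"role": "system", "content": CRITIC_PROMPT},
        {"role": "user", "content": argument},
    ]
```

Passing the returned list to a chat-completion endpoint (instead of the bare argument) makes the model open with criticism rather than the reflexive "You're absolutely right".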

If every sentence started with “I asked a statistical inference engine …” rather than “I asked AI …” then the whole marketing construct of scary sentimental anthropomorphism would collapse like a house of cards. Maybe then the land earmarked for data centres could be used for social housing instead. william

 
