The other evening I heard two 20-somethings coming out of their office and chatting about their evening plans:
“Oh yeah, I’m going to spend the evening building something in Claude.”
It was a wonder they didn’t chest bump. Of course I immediately judged them – thankfully not out loud – You’re not building anything. Claude is.
It was like hearing tech bros years ago bragging how they’d just minted an NFT: misplaced pride for producing something of dubious value for minimal effort, yet still attracting disproportionate kudos for the ‘achievement’.
Think of the children! #
I was on my way to a meetup where product people were voicing their worries about the perceived downsides of generative AI:
“I’ve been offloading my critical thinking, weighing up of evidence and decision making to genAI for so long, I don’t even know if I can do any of these things without genAI anymore.”
“I’m afraid I’m falling behind / missing out on the genAI boom and all the jobs seem to require more years of experience in genAI than genAI has existed.”
“When I go in to present, my impostor syndrome kicks into high gear because I used genAI to create the content and I’ve not internalised it.”
“Won’t someone think of the children?! They’re entering the world of work without the ability to think for themselves.”
It would appear that the group wasn’t far off the mark. According to research, genAI is indeed making us dumber:
The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.
Gerlich, M. (2025) ‘AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking’.
While LLMs [Large Language Models] offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning.
Kosmyna, N. et al. (2025) ‘Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task’.
Yet contrary to the view that Gen Z are uniformly embracing their status as ‘AI natives’, oblivious to the downsides, they are far from the mindlessly complicit group the older generations believe them to be. Abby Binder writes about the contradictory feelings young adults are experiencing while growing up in a world where use of genAI is simultaneously encouraged and shunned:
“Young people are still learning what kinds of thinking are rewarded, what counts as effort, what gets labeled as cheating, what gets labeled as ambition, and how much of themselves they are expected to outsource in order to keep up.”
Binder, A. (2026) ‘Let’s Talk About AI Shame’, Wait, Are We Okay?, 6 April (accessed: 2 May 2026).
They find their experience of the inherent contradiction of genAI upsetting:
“I feel like I’m falling behind because I’m not using AI.” – 17-year-old in California
“There’s this unspoken rule — if you’re using AI, you’re ‘less than’ everyone else.” – 20-year-old in Maine
Binder (2026)
Is it any wonder Gen Z are so conflicted about the use of genAI when even their schools are positively encouraging its use? There is a glimmer of hope, however. Writing for The Verge, Janus Rose highlights how some US students have been railing against their universities for “corrupting” with AI the remaining few places young adults have to “explore and wrestle with human thought.”
Maybe we should be less worried on behalf of the younger generations and focus a bit more on the part we’re playing. After all, we’re the ones either actively foisting AI products on society, even if it’s with the best of intentions, or being passively complicit in the normalisation of AI by using it habitually for our own purposes. Maybe we should be worrying more about our own contradictory behaviour? If we’re all so worried about the negative effects of genAI on us and everyone else, why do we find it so hard to wean ourselves off it?
Oh crap. Are we all addicted to AI? #
I’m not the first person to suggest that Big Tech uses the same playbook for growing their customer base that crack dealers use: give them enough free(-ish) samples to get them addicted, then jack the prices up. (For the record, it’s already happening.)
Similarly, I’ll concede that not every AI user has surrendered their critical thinking, and that some people (reportedly among the neurodiverse in particular) are using genAI extremely beneficially. All that said, the norm for many is to use genAI less like a sparring partner that challenges their thinking, and more like an eternally-willing subordinate, which wholly endorses their thinking and to which they can delegate less appealing tasks.
Nevertheless, despite genAI seemingly bypassing many of the usual prerequisites for trust – competence (its ability to perform the task effectively and correctly), contract (doing what it agreed it was going to do, and not, like, deleting your production database and backups without warning) and communication (explaining its actual ‘thinking’, rather than mimicking it) – for the most part, and counterintuitively, users still seem to trust genAI implicitly. It was almost as if the use of genAI for some had become unhealthily compulsive. And I’d seen similar behaviour before.
The most lucrative gaming machine in the UK #
It’s 2008 and a nondescript doorway on Great Portland Street in London leads into a darkened room. On one wall is an array of screens showing various species of animal racing. A bored-looking attendant sits in the corner in a kiosk with bullet-proof windows. In that otherwise unremarkable betting shop was the most lucrative gaming machine in the UK – for the gambling company, that is. I spent a day watching quietly while punter after punter fed thousands of pounds in crisp notes into the machine while playing virtual roulette, blackjack or some other video game involving a leprechaun. Over the hours I watched, people won the occasional jackpot, but overall the House remained well ahead.
In the UK at least, gaming machines such as these are required to have a prominent sticker on them detailing the average ‘return to player’ – in other words, what percentage of their money a player will get back, averaged over time. Unsurprisingly – for this is how legitimate gambling companies make their money – the return to player for this particular machine was 97 percent. Over time, for every £100 put in, the machine would keep £3. Ignoring for a moment that this means the player will always eventually lose, £3 in £100 doesn’t sound too bad for a bit of fun, right?
Wrong. The return to player is averaged over many individual transactions. What it doesn’t tell you is how volatile the machine is. If the return to player over time is a straight-line average that descends gently, the volatility is how far above or below that line the return on any single transaction falls. In other words, most of the time the machine will take all of your money when you lose a bet, and very occasionally it will pay out, sometimes handsomely. When and how much it pays out are as unpredictable as its high-specification random number generators will allow.
The volatility profile means that the majority of players will simply be pumping their money into a machine that keeps most of it and, every now and again, one lucky punter will win a payout that perhaps covers their previous losses. If the same punter were to play continuously for a very long time, then their wins and losses would average out to 97 percent of the money they’d put in. It’s right there on the side of the machine.
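The gap between the long-run average and any individual session can be sketched with a toy simulation. The win probability and payout multiple below are illustrative assumptions, not any real machine’s maths – the only constraint is that they multiply out to a 97 percent return to player:

```python
import random

def simulate_player(stake=1.0, spins=100_000, rtp=0.97, win_prob=0.05, seed=42):
    """Toy high-volatility gaming machine (hypothetical numbers).

    Most spins lose the whole stake; a small fraction pay out a large
    multiple, sized so the long-run return to player averages `rtp`.
    Returns the fraction of staked money paid back over `spins` plays.
    """
    rng = random.Random(seed)
    payout_multiple = rtp / win_prob  # 0.97 / 0.05 = 19.4x the stake
    returned = 0.0
    for _ in range(spins):
        if rng.random() < win_prob:
            returned += stake * payout_multiple
    return returned / (stake * spins)  # observed return to player

# Over a long run the observed RTP converges on the sticker value...
print(f"100,000 spins: {simulate_player():.3f}")
# ...but a single session of 20 spins usually returns nothing at all,
# and occasionally returns far more than was staked.
print(f"20 spins:      {simulate_player(spins=20, seed=1):.3f}")
```

Run it with different short-session seeds and you see the volatility directly: most sessions return 0.0, the odd one returns a multiple of the stake, and only the aggregate creeps towards 0.97.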
Even knowing the machine’s return to player, you’d think that someone repeatedly pumping money into a machine that rarely paid out would stop and cut their losses. But that’s not what I observed.
Wait – where’s loss aversion gone? #
Loss aversion is a behavioural bias first described by Daniel Kahneman and Amos Tversky in their 1979 paper on prospect theory. It’s more likely you’ll have read about it in Kahneman’s seminal book, Thinking, Fast and Slow (or the many books and articles that refer to it). Simply put: we feel losses more keenly than wins, so we try to avoid them, even if the potential loss is outweighed by the potential win.
Several of the punters I spoke to described their belief that they understood the pattern of the machine. Some felt that the machine would eventually make a large payout once it had accumulated enough winnings, and they just had to keep feeding it money (incurring further losses) to get it to that point. It was a bit like they were experiencing FOMO (fear of missing out) because they felt a win was ‘due’ to come in a spin or two. (To be clear, the machine did not operate like a pressure valve – the size and frequency of its payouts were functionally random.)
Other punters ascribed moods to the machine, that it was ‘feeling generous’ that day because it happened to have paid out a few times. Whichever way they rationalised it, the strange thing was that the punters’ belief they would receive a substantial payout in the near future completely overrode their loss aversion, so they kept feeding in money (and for the most part, losing).
Other behavioural factors were playing a part. Players often attribute losses to external factors while attributing gains to their own skill. This reframing causes them to keep playing despite losses. Annie Duke also talks about this trait in poker players in her book Thinking in Bets.
Gamblers tend to place more or larger bets in an attempt to recover their previous losses. This loss-chasing behaviour combines with the feeling of being close to winning. Slot machines often play on this ‘near-miss effect’, such as getting two cherries and an orange in a line, and it encourages further loss-chasing.
Gaming machines also make use of what’s known as a ‘variable ratio reinforcement schedule’. The random frequency of the payouts, even small ones, conditions us to keep playing. Also at random intervals, the machine will give the player additional means of interacting with the game, such as nudges (shifting the slot machine reels up or down to try to line up a winning row of symbols) or free spins. This reinforces the player’s belief that they are in control of the game and can influence the outcome. We may be nudging the reels, but the machine is nudging our behaviour. Great.
Our brain chemistry can also be unhelpful. Gambling can trigger increased testosterone and reduced cortisol in some people. Testosterone can cause a ‘win at all costs’ mindset, while lower cortisol makes us less sensitive to losses. Similarly, the sensory inputs in the gambling environment (flashing lights, ‘happy’ machine bleeps and so on) can trigger increased dopamine. This in turn skews how surprised we are by the difference between the reward we expected and the one we actually received. In short, the gaming machine conditions us to expect wins more than losses, despite all evidence to the contrary.
How we behave is always a response to a complex interplay of competing signals, and the design of gaming machines often exploits these and other psychological effects. Alas, particularly for problem gamblers, responses that are ordinarily fine in moderation can become heavily skewed, leading to seemingly irrational behaviour, often without the individual realising there’s a problem to begin with.
These factors are also at play with AI #
As generative and agentic AI currently stand, most (but not all) commercial large language models have been trained on human-generated content on the internet. We should all be aware that all this content has varying levels of bias and factual accuracy, and that this is all baked into the LLM during training. We also should know that AI chatbots and agents have an error rate (where they fail to perform the task requested), as well as a propensity to confabulate (make up or ‘hallucinate’) information confidently as if it were truth.
What this baked-in bias and error rate should be telling us is that the ‘return to player’ for genAI is nowhere near 100 percent. And just as with the volatility of results on the gaming machine, some people get good results depending on the task, others don’t. (Guess which group shouts loudest about it on social media?)
Aside from the broader sense of FOMO that we get from constantly seeing our peers seemingly benefit from using genAI, when we’re actually using genAI for ourselves, we sometimes have that sense that a perfect result is just a prompt or two away, and that by stopping now we’d miss out on it.
Related to this effect is the immediate dopamine hit we receive from the act of ‘creation’ without the usual associated cognitive effort. Like the tech bros in the intro, we feel ownership and a sense of achievement for what we’ve built straightaway (even though the LLM did all the building), and any inaccuracies, confabulations or bugs are swept aside as a problem for the future. (Economists call this ‘temporal discounting’.)
The sense of control is one area where gambling and genAI use diverge slightly. With gambling, control is illusory – rarely do the players have any real control over the outcome, despite the choices the game occasionally offers. In contrast, genAI users are the ones issuing the prompts, and for the most part the LLM’s output corresponds broadly to what was prompted. However, even then the output contains variability:
“… sometimes the output exceeds your expectations, other times it needs you to reword or rework something, to make the process absorbing in a way that can feel almost compulsive. This is exactly the same sort of psychological structuring we see in slot machines … .”
Otter, M. (2026) ‘“Good enough”: what vibe coding reveals about our changing relationship with thought’, The Psychologist, The British Psychological Society (accessed: 2 May 2026).
In other words, vibe coding or repeatedly interacting with a genAI chatbot triggers different levels of reward at varied intervals. As Otter suggests, this is “something close to a textbook variable ratio reinforcement schedule.”
Given the psychological similarities between gambling and genAI use, it’s no wonder we have these conflicting feelings that genAI is adversely affecting our cognitive ability, yet we just can’t escape the lure of its convenience.
This is fine #
One way we may choose to rationalise our use of AI is to delineate the types of task we choose to delegate outright. On one side, there are tasks for which there may not be a precise, objective result, where we’re happy to compromise on accuracy. A transcript of a conversation needn’t be 100% accurate as long as it’s not egregiously wrong. Similarly, the connections an LLM finds in vast quantities of information may not all be valid, but it can identify them far more quickly than a human could.
On the other, there are the tasks that require thoughtful engagement with a problem so that we can internalise the nuance, learn from earlier mistakes, and reach conclusions based on a combination of our knowledge and personal experience. In these cases it’s simply not practical to share the full extent of your human context with an LLM, not least because we lose something in the translation of our thoughts to prompts. Language is great an’ all, but it’s still an imprecise medium for exchanging the contents of one person’s brain with someone or something else.
Some people already restrict their interaction with genAI to that of a sparring partner: getting it to spot the things they’ve missed within the confines of the specific context they’ve shared. The LLM doesn’t need to be right; it just needs to prod us into thinking more carefully for ourselves. Ironically, genAI would seem to be more useful when it’s the one doing the prompting.
This is all well and good if we can remain disciplined. However, it’s difficult to overlook the temptation of simply requesting an answer from genAI, and usually receiving something plausible or seemingly insightful. (It’s never insight, it’s inference.) And to cap it all off, we get a similar dopamine hit for completing the task, even though it wasn’t us doing the hard work.
“When a non-programmer launches a product after a weekend of AI-assisted work … they are not misrepresenting themselves cynically. They genuinely feel like builders. The AI has granted them access to an identity, a sense of competence, that previously required years of effortful skill acquisition to earn.”
Otter, M. (2026)
Our brains love a shortcut. It’s why we’re at the mercy of so many cognitive biases, as Kahneman and Tversky showed us. The appeal of taking the quick and easy way to an answer without the usual associated cognitive effort is hard to ignore. Moreover, it takes willpower not to give in to the temptation, so it’s doubly hard. (It’s been a rough Monday, I’ll get the AI to do the thinking for me just this once… .)
It doesn’t help that the world of work seldom values people taking the time and effort to think deeply about anything. GenAI appeals to business leaders who value quantity over quality: don’t think, just deliver. ‘Do more with less’ is the go-to mode of operating for those in strategic purgatory and dire financial straits. But regardless of how quickly we are able to dispatch pointless busywork using genAI, more will arrive to feed the machine. Why spend time carefully crafting words you hope will appeal to the intellect of the recipient when their AI helper is only going to summarise them and eliminate all nuance? Why have teams of people with opinions and principles working when they feel like it, when instead you could have rafts of AI agents working unquestioningly and round the clock?
Somehow I’m still optimistic #
I am more optimistic than that.
There always have been and always will be places where meaningless work and mindless conformity are rewarded. We will continue to mock them without mercy.
The kind of people who are questioning whether they’ve got a problem with genAI are already on the first step to doing something about it. We were all addicted to social media once, yet when it became sufficiently toxic we (mostly) disengaged from it and found healthier ways to interact.
It may not feel like it, but it’s still relatively early days for genAI. We don’t know whether the bubble will pop with a loud bang or a quiet phut. It might be that we’re forced to go cold turkey once reality finally hits and makes genAI prohibitively expensive for all but a handful of companies. Or maybe we’ll resume valuing those with the ability to think critically.
Even if none of those things happen, I look back to the example being set by Gen Z. Sure, they’re struggling the same as everyone else to figure out their contradictory relationship with genAI, but what they have in their favour is fire in their bellies and a growing aversion to the soulless AI slop of their peers. In the words of the Oberlin College Luddite Club to their school administrators:
“[E]ven one semester of accepted (even encouraged) chat-bot use will jettison our student body down a lazy, irredeemable tunnel of intellectual destruction … we will not stand by and witness the further atrophying of our liberal arts education. Rather than strengthening Silicon Valley, we build our own skills and generative sweat.”
Rose, J. (2026) ‘The more young people use AI, the more they hate it‘, The Verge, 30 April (accessed: 2 May 2026).
The kids are alright. And that fills me with hope for the rest of us.
References and further reading #
Al-Obaydi, L.H. and Pikhart, M. (2025) ‘Artificial intelligence addiction: exploring the emerging phenomenon of addiction in the AI age’, AI & SOCIETY, pp. 1–17. Available at: https://doi.org/10.1007/s00146-025-02535-z.
Beres, D. (2025) AI Is Not Your Friend, The Atlantic. Available at: https://www.theatlantic.com/magazine/2025/12/ai-companionship-anti-social-media/684596/ (Accessed: 5 May 2026).
Binder, A. (2026) ‘Let’s Talk About AI Shame’, Wait, Are We Okay?, 6 April. Available at: https://abbybinder.substack.com/p/lets-talk-about-ai-shame (Accessed: 2 May 2026).
Chen, Y. et al. (2025) ‘Effects of generative artificial intelligence on cognitive effort and task performance: study protocol for a randomized controlled experiment among college students’, Trials, 26, p. 244. Available at: https://doi.org/10.1186/s13063-025-08950-3.
Ciudad-Fernández, V., Von Hammerstein, C. and Billieux, J. (2025) ‘People are not becoming “AIholic”: Questioning the “ChatGPT addiction” construct’, Addictive Behaviors, 166, p. 108325. Available at: https://doi.org/10.1016/j.addbeh.2025.108325.
Claburn, T. (2025) OpenAI defends Atlas as prompt injection attacks surface. Available at: https://www.theregister.com/2025/10/22/openai_defends_atlas_as_prompt/ (Accessed: 30 October 2025).
Delfabbro, P., King, D. and Parke, J. (2023) ‘The complex nature of human operant gambling behaviour involving slot games: Structural characteristics, verbal rules and motivation’, Addictive Behaviors, 137, p. 107540. Available at: https://doi.org/10.1016/j.addbeh.2022.107540.
Duke, A. (2018) Thinking in bets: making smarter decisions when you don’t have all the facts. New York: Portfolio/Penguin.
Eddie, T. (2023) ‘The Evolution of Trust in the Age of the Internet’, Medium, 3 April. Available at: https://thoughtrealm.medium.com/the-evolution-of-trust-in-the-age-of-the-internet-5b04f2c70133 (Accessed: 30 October 2025).
Ehrlich, V. (2024) ‘Generative AI for Neurodivergent Employees: Boosting Workplace Inclusivity | The Bloom Shift’, The Bloom Shift, 24 September. Available at: https://missionbloom.substack.com/p/generative-ai-and-neurodivergent (Accessed: 5 May 2026).
Ehtesham, H. (2025) ‘Dopamine Loops and LLMs: How AI Addiction is Hacking Your Brain’, All About AI, 10 June. Available at: https://www.allaboutai.com/resources/dopamine-loops-and-llms/ (Accessed: 30 October 2025).
Gabriela, S. (2025) ‘Exploring the Cognitive and Emotional Drivers of Generative AI Overuse’.
Gerlich, M. (2025) ‘AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking’, Societies, 15(1), p. 6. Available at: https://doi.org/10.3390/soc15010006.
Gillespie, N. et al. (2025) Trust, attitudes and use of artificial intelligence, KPMG. Available at: https://kpmg.com/nz/en/insights/ai/trust-attitudes-and-use-of-ai.html (Accessed: 25 November 2025).
Kahneman, D. (2011) Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kosmyna, N. et al. (2025) ‘Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task’. arXiv. Available at: https://doi.org/10.48550/arXiv.2506.08872.
Krikorian, R. (2025) The Validation Machines, The Atlantic. Available at: https://www.theatlantic.com/ideas/archive/2025/10/validation-ai-raffi-krikorian/684764/ (Accessed: 5 May 2026).
Li, Y. et al. (2025) ‘Warmth, Competence, and the Determinants of Trust in Artificial Intelligence: A Cross-Sectional Survey from China’, International Journal of Human–Computer Interaction, 41(8), pp. 5024–5038. Available at: https://doi.org/10.1080/10447318.2024.2356909.
Otter, M. (2026) ‘Good enough’: what vibe coding reveals about our changing relationship with thought, The Psychologist. The British Psychological Society. Available at: https://www.bps.org.uk/psychologist/good-enough-what-vibe-coding-reveals-about-our-changing-relationship-thought (Accessed: 2 May 2026).
Prakash, P. (2025) ‘The Psychological Impact of Generative AI: Addiction, Anxiety, and Automation’, Medium, 25 May. Available at: https://medium.com/@pranavprakash4777/the-psychological-impact-of-generative-ai-addiction-anxiety-and-automation-f218be48c883 (Accessed: 30 October 2025).
Ronksley-Pavia, M. et al. (2025) ‘A scoping literature review of generative artificial intelligence for supporting neurodivergent school students’, Computers and Education: Artificial Intelligence, 9, p. 100437. Available at: https://doi.org/10.1016/j.caeai.2025.100437.
Rose, J. (2026) The more young people use AI, the more they hate it, The Verge. Available at: https://www.theverge.com/ai-artificial-intelligence/920401/gen-z-ai (Accessed: 2 May 2026).
Seaman, K.L. et al. (2018) ‘Individual Differences in Loss Aversion and Preferences for Skewed Risks Across Adulthood’, Psychology and aging, 33(4), pp. 654–659. Available at: https://doi.org/10.1037/pag0000261.
Shang, X., Duan, H. and Lu, J. (2021) ‘Gambling versus investment: Lay theory and loss aversion’, Journal of Economic Psychology, 84, p. 102367. Available at: https://doi.org/10.1016/j.joep.2021.102367.
Shojaee, P. et al. (2025) ‘The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity’. Available at: https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf (Accessed: 5 May 2026).
Simon, F., Kleis Nielsen, R. and Fletcher, R. (2025) Generative AI and news report 2025: How people think about AI’s role in journalism and society, Reuters Institute for the Study of Journalism. Available at: https://reutersinstitute.politics.ox.ac.uk/generative-ai-and-news-report-2025-how-people-think-about-ais-role-journalism-and-society (Accessed: 25 November 2025).
Srey, V. (2024) Loss Aversion and Gambling: How Psychology Influences Your Betting Behavior, YOUGAMEHUB. Available at: https://www.yougamehub.com/post/loss-aversion-and-gambling (Accessed: 30 October 2025).
Üveges, I. (2025) ‘From the ELIZA Effect to Dopamine Loops – AI and Mental Health’, Constitutional Discourse, 2 June. Available at: https://constitutionaldiscourse.com/from-the-eliza-effect-to-dopamine-loops-ai-and-mental-health/ (Accessed: 30 October 2025).
Warzel, C. (2025) AI Is a Mass-Delusion Event, The Atlantic. Available at: https://www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909/ (Accessed: 5 May 2026).
Zhou, T. and Zhang, C. (2024) ‘Examining generative AI user addiction from a C-A-C perspective’, Technology in Society, 78, p. 102653. Available at: https://doi.org/10.1016/j.techsoc.2024.102653.

