The Hidden Cost of Politeness to AI
A recent question on social media asked, “How much money has OpenAI lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models?” This query sparked a conversation about the energy consumption of AI systems like ChatGPT. According to Sam Altman, CEO of OpenAI, the answer is in the tens of millions of dollars.
Politeness is deeply rooted in British culture, with eight out of ten Britons being friendly towards AI chatbots. However, being polite to AI comes with more than just a financial cost. Every interaction with an AI system, whether it’s asking for help writing an email or planning a monthly budget, requires energy. ChatGPT alone consumes an estimated 40 million kilowatt-hours of electricity every day—enough to charge 8 million phones, according to Business Energy UK.
Data centers, which power these AI systems, use vast amounts of water to stay cool. Specifically, they consume around 39.16 million gallons of water daily. That’s enough to fill 978,000 baths or flush a toilet 24 million times.
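The bath and flush comparisons follow directly from the daily total. A quick sanity check of the per-use figures implied by the article's own numbers (roughly 40 gallons per bath and about 1.6 gallons per flush):

```python
# Implied per-use figures derived from the article's daily totals:
# 39.16 million US gallons of cooling water a day,
# said to equal 978,000 baths or 24 million toilet flushes.
DAILY_GALLONS = 39.16e6

gallons_per_bath = DAILY_GALLONS / 978_000
gallons_per_flush = DAILY_GALLONS / 24e6

print(round(gallons_per_bath, 1))   # roughly 40 gallons per bath
print(round(gallons_per_flush, 2))  # roughly 1.63 gallons per flush
```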
Why Does AI Require So Much Energy?
Generative AI, like ChatGPT, can create content such as text and images. It does this with large language models: neural networks trained on vast amounts of text drawn from across the internet. Running them demands significant power, as noted by Morten Goodwin, a professor at the University of Agder in Norway.
“Data must be transmitted, processed, and stored, whether the message is a complex request or a simple ‘thank you,’” said Goodwin, who is also chief scientist at AI Experts. “The same applies to a Google search, an email, or a Teams meeting. Even human interactions, like saying ‘thank you,’ require energy, albeit a very small amount.”
Companies often rely on fossil fuels to meet the energy demands of AI systems, according to Dr. Daniel Farrelly, a principal lecturer in psychology at the University of Worcester. “All online activity has a carbon footprint, from using AI chatbots to sending text messages,” he explained. “While individual impacts may seem small, when multiplied across billions of interactions globally, the environmental impact becomes significant.”
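The “large language model” idea above can be illustrated in miniature: such a model learns which words tend to follow which, then predicts the next word. This toy bigram model is purely illustrative (real LLMs are neural networks with billions of parameters, which is exactly why they need so much compute), but the predict-the-next-word mechanic is the same:

```python
from collections import Counter, defaultdict

# A tiny corpus; a real model trains on a large slice of the internet.
corpus = "thank you for the help thank you for the reply".split()

# Count which word follows which (a bigram table).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in the corpus.
    return counts[word].most_common(1)[0][0]

print(predict("thank"))  # -> "you"
```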
Is Politeness Necessary When Dealing with AI?
Research suggests that being polite to AI doesn’t significantly improve its performance. A study found that polite prompts have a “negligible” effect on how well AI functions. Neil Johnson, co-author of the study and a professor of physics at George Washington University, compared it to being nice to a toaster. “We don’t wrap bread slices in birthday paper to make them look nicer,” he said. “Likewise, being polite to AI adds extra words that can confuse it and cost both users and companies money.”
Robert Blackwell, a senior research associate at the Alan Turing Institute, explained that AI processes words by tokenizing them—breaking them into smaller pieces. “The more tokens or words used, the higher the cost for the companies running these models,” he said. “Newer reasoning models use even more tokens as they try to justify and check their answers.”
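Blackwell's point can be made concrete with a toy example. This is not a real tokenizer (production models use subword schemes such as byte-pair encoding, which split text more finely), but even a crude word-and-punctuation split shows how politeness inflates the token count, and with it the cost:

```python
import re

def toy_tokenize(text):
    # Crude stand-in for a real subword tokenizer:
    # split into words and individual punctuation marks.
    return re.findall(r"\w+|[^\w\s]", text)

terse = "Summarise this report."
polite = "Hello! Please could you summarise this report for me? Thank you!"

print(len(toy_tokenize(terse)))   # 4 tokens
print(len(toy_tokenize(polite)))  # 14 tokens for the same request
```

The polite version asks for exactly the same thing, yet more than triples the number of pieces the model must process.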
Despite this, there are reasons to be kind to AI. Research shows that how we treat AI reflects how we treat each other. Morten Goodwin, also deputy director at the Centre for Artificial Intelligence Research, said, “If you are polite everywhere, even to chatbots, the norm becomes to be polite.”
Why Do We Say Please and Thank You to AI?
One reason people say “please” and “thank you” to AI is that it mimics human-like fluency, which can be confusing for the average person. Luise Freese, who runs the tech blog M365 Princess, explained that people often anthropomorphize AI, giving it human traits to make sense of something that feels human but isn’t. She joked that movies like Terminator contribute to this perception, making AI seem both exciting and scary.
“This is a coping mechanism,” she said. “But it gets tricky because these tools don’t have thoughts or feelings; they just mirror patterns. When we treat them like friends, we risk forgetting that.”
The Risks of Over-Reliance on AI
Some chatbots make up information, a phenomenon so common that researchers coined the term “hallucinating.” Medical experts have found that AI can generate fake health studies, while mental health professionals worry about people turning to bots for therapy. Ana Valdivia, a departmental research lecturer in AI at Oxford, warned that people often place too much faith in AI, believing it has the same level of understanding and empathy as humans.
“The tendency to humanise AI isn’t merely innocent curiosity,” she said. “It is a byproduct of how these technologies are marketed and framed, often encouraging emotional dependency or misplaced trust in systems that are, at their core, mechanical.”