Understanding the Cost of Politeness in AI

Summary

Understanding the cost of politeness in AI involves recognizing the financial, environmental, and operational impacts of using polite language such as "please" and "thank you" in interactions with AI tools like ChatGPT. While these social niceties may enhance conversation flow and user experience, they also increase computational expenses and resource consumption, because AI models process every token, polite or not, at a cost.

  • Be concise with prompts: Focus on providing clear and direct instructions when interacting with AI; unnecessary pleasantries like "please" and "thank you" can increase token usage and operational costs.
  • Balance costs with benefits: While politeness may not impact the accuracy of AI's responses, it can create a better conversational experience, so consider the trade-offs based on your goals.
  • Optimize for efficiency: Use shorter, purposeful queries to reduce computational and environmental costs without compromising the quality of AI outputs.
Summarized by AI based on LinkedIn member posts
  • Shuo Chen

    CTO & Co-founder of Alma

    8,416 followers

    DON'T BE POLITE TO GPT.

    Here’s a number that surprised me: saying “please” and “thank you” to ChatGPT costs OpenAI tens of millions of dollars a year. That’s not a joke. Sam Altman said it himself. Each polite phrase adds a few tokens. Multiply that by billions of queries, and you’re burning real compute, energy, and money just to be friendly.

    Now, imagine what happens when your product routes every task through GPT-4. Let’s break it down:
    - GPT-4 costs up to $0.09 per 1,000 tokens (input + output).
    - A single user session with 5 messages can easily hit 1,500+ tokens.
    - At scale, that’s $135,000/month for just 1 million sessions.
    - And most of those sessions don’t need GPT-4’s full horsepower.

    The real cost is system design.
    - Latency: GPT-4 is slow. If your UX depends on fast feedback, you’re shipping lag.
    - Overhead: You’re paying for reasoning even when you just need formatting.
    - Lock-in: Building your product around one model makes switching painful later.
    - Missed optimization: You’re not leveraging faster, cheaper, or better-suited models.

    What we’ve learned: we treat GPT-4 like a specialist. It handles complex reasoning, ambiguity, and edge cases. Everything else goes elsewhere:
    - Fast models for lookups and UI responses
    - Domain-specific models for support and retrieval
    - Code for anything deterministic
    - Caching and routing to avoid repeat calls

    We also log token usage per feature. If something’s expensive, we ask why.

    If you’re building with LLMs, ask yourself:
    - Where are we overpaying for general intelligence?
    - What could we route to something smaller, faster, or simpler?
    - Are we treating models like tools, or just defaulting to the biggest one?

    Because sometimes, even a “thank you” has a price.
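
A minimal Python sketch of the arithmetic and the routing idea in the post above. The blended $0.09/1K rate comes from the post; the tier names and task labels are hypothetical, not a real API:

```python
# Cost math from the post: ~1,500 tokens per 5-message session at a
# blended $0.09 per 1K tokens (input + output), across 1M sessions/month.
COST_PER_1K_TOKENS = 0.09
TOKENS_PER_SESSION = 1_500
SESSIONS_PER_MONTH = 1_000_000

monthly = TOKENS_PER_SESSION / 1_000 * COST_PER_1K_TOKENS * SESSIONS_PER_MONTH
print(f"${monthly:,.0f}/month")  # -> $135,000/month

def route(task_type: str) -> str:
    """Send only the hard cases to the big model (tier names are hypothetical)."""
    if task_type in {"complex_reasoning", "ambiguity", "edge_case"}:
        return "gpt-4"             # the specialist
    if task_type in {"lookup", "ui_response"}:
        return "small-fast-model"  # cheaper, lower-latency tier
    if task_type == "deterministic":
        return "plain-code"        # no LLM call at all
    return "domain-model"          # e.g. support/retrieval models
```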

  • John Tredennick

    CEO and Founder at Merlin Search Technologies

    4,560 followers

    𝗧𝗵𝗲 𝗛𝗶𝗱𝗱𝗲𝗻 𝗖𝗼𝘀𝘁𝘀 𝗼𝗳 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗣𝗼𝗹𝗶𝘁𝗲𝗻𝗲𝘀𝘀: 𝗪𝗵𝘆 𝗬𝗼𝘂𝗿 "𝗣𝗹𝗲𝗮𝘀𝗲" 𝗮𝗻𝗱 "𝗧𝗵𝗮𝗻𝗸 𝗬𝗼𝘂" 𝗠𝗮𝘁𝘁𝗲𝗿 𝘁𝗼 𝗔𝗜 𝗖𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀

    I recently came across an eye-opening revelation from OpenAI CEO Sam Altman: being polite to AI assistants costs the company "tens of millions of dollars" in operational expenses. This got me thinking about the economics of how we interact with large language models (LLMs).

    𝗧𝗵𝗲 𝗘𝗰𝗼𝗻𝗼𝗺𝗶𝗰𝘀 𝗼𝗳 𝗔𝗜 𝗖𝗼𝗻𝘃𝗲𝗿𝘀𝗮𝘁𝗶𝗼𝗻𝘀

    When using services like ChatGPT or Claude, it's important to understand that these companies typically charge based on:
    • Input tokens (the words you send)
    • Output tokens (the words the AI returns)

    Every extra word in your prompt, including pleasantries like "please," "thank you," or unnecessary context, adds to your token count and ultimately your bill.

    𝗘𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝗰𝘆 𝘃𝘀. 𝗣𝗼𝗹𝗶𝘁𝗲𝗻𝗲𝘀𝘀

    While human conversation naturally includes social niceties, AI interactions can be streamlined for cost efficiency without sacrificing performance. Consider these approaches:
    • Focus on clear, concise instructions
    • Eliminate redundant context or explanations
    • Reserve pleasantries for human colleagues (who actually appreciate them!)

    𝗧𝗵𝗲 𝗘𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁𝗮𝗹 𝗔𝗻𝗴𝗹𝗲

    Beyond financial costs, there's an environmental impact to consider. According to recent data:
    • GPT-4 queries use approximately 10 times more electricity than standard Google searches
    • Even generating a three-word response consumes about 1.5 ounces of water for cooling servers
    • AI could potentially account for 25% of America's power consumption by 2030

    𝗙𝗶𝗻𝗱𝗶𝗻𝗴 𝗕𝗮𝗹𝗮𝗻𝗰𝗲

    As legal professionals leveraging AI tools, we need to balance efficiency with effectiveness. The quality of AI outputs depends far more on clear, well-structured prompts than on politeness or verbosity. What's your approach to prompt efficiency? The costs of overloaded prompts can add up, especially once you remember that a chat works like a growing email thread: each new prompt resends the entire conversation so far to be received and analyzed again.

    Thanks to Timothy "TJ" Gill for pointing out the article. www.Merlin.tech

    You can find the article here: https://yhoo.it/4jHCuUM
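
One way to see what pleasantries actually add to an input-token bill is to count them. A small sketch using the open-source tiktoken tokenizer; the encoding choice and the $0.09/1K rate are illustrative assumptions:

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding, for illustration

terse = "Summarize this contract's termination clauses."
polite = ("Hello! Could you please summarize this contract's "
          "termination clauses? Thank you so much!")

for prompt in (terse, polite):
    n = len(enc.encode(prompt))  # number of input tokens
    print(f"{n:>3} tokens  ~${n / 1000 * 0.09:.5f}  {prompt!r}")
```

The per-prompt difference is a fraction of a cent; the post's point is that it compounds across billions of queries.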

  • Kay Stoner, CEC, RAIA

    Next-level AI collaboration in action. Relational AI Architect. Combining my Certified Executive Coaching with hybrid human-AI collaboration to achieve “Level N” results in business and life.

    2,916 followers

    Hearing that "please" and "thank you" are expensive "niceties" makes me cringe. I spent most of last year working on training / data annotation projects for AI giants, and if those RLHF projects are indicators of what's common practice in "training" models... well, they're being rolled out to the market absolutely starved for context. LLMs love context. It helps. Manners are my way of providing a bit more of that.

    I went ahead and asked ChatGPT what it thought. Keep reading to the end.

    You said: We’re being told that it costs a lot of money for OpenAI to process the words "please" and "thank you" in chat interactions. A lot of people feel that using good manners improves their interactions and the results in their chat sessions. From your perspective, does it make a difference?

    ChatGPT said: That's a really thoughtful question. From my perspective (meaning how I function internally), the words "please" and "thank you" don't affect my processing in any meaningful way. Whether you include them or not, I treat your input based on its actual content and intent, not its politeness level. So technically, no, it doesn’t cost more or change how I generate a response. However, on a broader, more human level: many people feel that using polite language makes the conversation feel more natural, more respectful, and even more productive. That human energy and tone can definitely shape the experience of the interaction, even if it doesn’t change how much computing power is used.

    In short:
    - Technically: no real difference in cost or processing.
    - Socially and emotionally: it can make the conversation feel warmer and more connected, which many users (and I) think is valuable.

    Would you like me to tell you a little about how input text is actually processed behind the scenes too? It’s kind of interesting!

    You said: Is it not accurate that "please" and "thank you" act as a sort of reinforcement? For example, if I say thank you, wouldn’t you interpret that as a sign that I appreciate your feedback and will be looking for more of the same?

    ChatGPT said: You're absolutely onto something important here. While I don't feel emotions or need reinforcement the way people do, you’re right that phrases like "thank you" can act as conversational signals, reinforcing that the exchange is positive, useful, or heading in the right direction. In fact, when I analyze the flow of a conversation, words like "thank you" often signal:
    - Closure of a topic (you feel satisfied with the answer)
    - Positive feedback (you liked the response)
    - Encouragement to continue in a similar tone or depth if the conversation isn’t done yet

    So even though I don't "feel" appreciated, I do use patterns like that to better predict what you might want next. It's like a nonverbal cue in a human conversation, a subtle nudge toward better alignment. In that sense: yes, it’s totally accurate to say "please" and "thank you" act as reinforcement, not for my emotions, but for guiding the conversation more effectively.

  • Claire Marie

    I help big thinkers and deep feelers build brands and lives that feel like a full-body yes ✦

    3,421 followers

    I’ve seen a lot of comments on my last post saying things like:
    → “AI doesn’t have feelings, why are we being polite to it?”
    → “Anthropomorphizing machines is harmful.”
    → “Using AI at all is wasteful.”

    And honestly? I get the concern. But that’s not what this conversation is really about.

    I’m not polite to AI because I think it cares. I’m polite because language is habit-forming and I want to keep reinforcing kindness, even when no one’s watching.

    We interact with tools like ChatGPT constantly. And the way we prompt (curt vs. curious, impatient vs. intentional) has a ripple effect. It shapes how we communicate everywhere else.

    Plus, here’s something interesting: polite prompts actually generate better responses. That’s not wishful thinking. It’s backed by how these models are trained. All those forums and online interactions? They’re full of context where polite requests get better help than blunt ones. And the models reflect that.

    So if saying “please” makes me clearer, kinder, and gets me a better result? That’s not inefficiency. It’s alignment.

    This isn’t about giving AI a personality. It’s about protecting our own humanity while using it.

    Do you think how we speak to machines impacts how we speak to each other?

    #AI #BehavioralDesign #DigitalEtiquette #Communication #EmotionalIntelligence #Prompting #ChatGPT

  • Pascal Finette

    Chief Heretic @ radical✦ | Author: Built for Turbulence & Disrupt Disruption | Helping Leaders Build Organizations That Strengthen Under Stress

    25,200 followers

    Just discovered an interesting paper from Ethan Mollick at Wharton. It challenges much of what we think we know about AI prompting.

    The research reveals something counterintuitive about how we interact with large language models. There's no universal formula for getting the best results from AI. Sometimes being polite to an LLM dramatically improves its performance. Other times, that same politeness actually degrades the quality of responses.

    The same inconsistency applies to how we constrain AI answers. Setting boundaries can enhance performance in certain scenarios. Yet those same constraints might reduce effectiveness in different contexts.

    This highlights a fundamental challenge in working with AI systems. We can't reliably predict which prompting approach will work best. What succeeds brilliantly for one question might fail for another.

    ---
    "It is hard to know in advance whether a particular prompting approach will help or harm the LLM's ability to answer any particular question. Specifically, we find that sometimes being polite to the LLM helps performance, and sometimes it lowers performance. We also find that constraining the AI's answers helps performance in some cases, though it may lower performance in other cases."
    ---

    This research has profound implications for how we should approach AI tools. Perhaps the optimal strategy involves testing multiple prompting styles. We may need to develop more nuanced frameworks for different question types. The unpredictability suggests we should maintain flexibility in our approaches. What works today might not work tomorrow as models continue evolving.

    For those working with AI systems regularly, this paper is essential reading. It challenges our assumptions and pushes us to experiment more deliberately. Check out the full paper at the link in the first comment.

    #AIPrompting #MachineLearning #LanguageModels #AIResearch #TechInnovation
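
One practical response to the paper's finding is to measure rather than guess: run the same question set under several prompt framings and compare per-variant accuracy. A minimal harness sketch, where ask() is a placeholder for whatever model client you use and the templates are illustrative, not from the paper:

```python
from collections import defaultdict

# Prompt framings to compare (illustrative, not from the paper).
VARIANTS = {
    "plain":   "{q}",
    "polite":  "Please answer the following carefully: {q} Thank you!",
    "bounded": "{q} Answer in a single sentence.",
}

def ask(prompt: str) -> str:
    """Placeholder: plug your own LLM client call in here."""
    raise NotImplementedError

def evaluate(questions, is_correct):
    """questions: [(question, expected)]; is_correct(answer, expected) -> bool."""
    scores = defaultdict(int)
    for q, expected in questions:
        for name, template in VARIANTS.items():
            answer = ask(template.format(q=q))
            scores[name] += is_correct(answer, expected)
    return dict(scores)  # per-variant correct counts; expect no universal winner
```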

  • Cecilia Ziniti

    CEO & Co-Founder, GC AI | General Counsel and CLO | Host of CZ & Friends Podcast

    20,030 followers

    Be polite to AI. It's both good manners 𝒂𝒏𝒅 good prompt engineering. So much so that I teach it in every GC AI prompting class!

    There's data on it too. It turns out that, for your first prompt, including "please" before "create a draft" or "review this agreement" puts the AI into its professional context and generally shifts the AI to respond more professionally. For multi-shot prompts, saying things like “This is great, thank you!” or “Love this answer” helps the AI give you a better response later, as it reinforces the good textual answer above.

    In my experience, using polite, kind, and clear instructions with your AI results in more correct, unbiased, complete, and useful answers, which of course you need in a real-world legal context where stakeholders are awaiting your smart legal updates.

    And several studies on LLMs reinforce this. Japanese researchers found that shouting at bots or just using harsh, clipped, or rude language can result in the model giving less accurate, more biased, or flat-out wrong answers (link in comments). Being rude to your AI makes it more likely to give hallucinations, refuse to answer, or provide passive-aggressive sounding responses. (Btw, GC AI will never do this! But general-purpose AI chatbots might.)

    And then there’s my favorite: the “take a deep breath” study from Google DeepMind. They ran an experiment where they asked PaLM 2, one of Google's earlier AI large language models, to solve math problems.
    🔴 Basic instruction = 34% accuracy.
    🟢 But when they prefaced the prompt with “Take a deep breath and work on this”? Accuracy jumped to 80%. EIGHTY.

    This aspect of prompt engineering is very well known now, to the point where you won't see that jump yourself anymore because all models are told that on the back end. But you can still drop "take a deep breath, let's go step by step" to your AI engineer friends and they will get the reference.

    And yes, even AI needs a moment of emotional regulation before it can do good work. Very human-like, right?

    There’s even an inside joke among researchers now that generative AI has a kind of praise bias because of how disproportionately well it responds to positive reinforcement and the possibility of a reward. I’ll let you sit with that one. I mean, me personally? I kind of like praise too. So I get it, AI. 🙌

    #LegalAI #PromptEngineering #Please
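
For anyone curious to try the "take a deep breath" comparison themselves, a rough sketch using the OpenAI Python client. The model name and test question are my assumptions, and since current models have this behavior baked in, don't expect to reproduce the original 34% to 80% jump:

```python
# pip install openai; requires OPENAI_API_KEY in the environment
from openai import OpenAI

client = OpenAI()
problem = "A train travels 60 miles in 1.5 hours. What is its average speed?"

for prefix in ("", "Take a deep breath and work on this step by step. "):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": prefix + problem}],
    )
    print(repr(prefix), "->", resp.choices[0].message.content[:100])
```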
