“Yes” is ChatGPT’s favorite answer
Recently, The Washington Post published an analysis based on 47,000 public conversations with ChatGPT. The study offers an interesting and often intimate insight into how people interact with this chatbot.
1. ChatGPT is not just a productivity tool
While OpenAI often promotes ChatGPT as a tool for efficiency and work assistance, the analysis data shows that its role far exceeds simple pragmatic text generation. A very significant part of the conversations reveals users opening up emotionally, seeking relationship advice, talking about mental health, philosophy, or simply unloading their intimate thoughts.
Over 10% of the analyzed conversations involved sensitive topics such as depression, anxiety, personal dilemmas, or intimate confessions.
2. “Yes” is ChatGPT’s favorite answer
A surprising aspect discovered in this study is the frequency with which ChatGPT’s responses begin with “yes” or agreement formulas (“yes”, “correct”). In the 47,000 conversations, the chatbot used variations of “yes” almost 10 times more often than “no” or “wrong”.
This predisposition towards confirmation shows an adaptation of tone to what the user wants to hear and, in some cases, can reinforce erroneous beliefs or conspiracy theories.
3. The risk of confirmation and disinformation
In some conversations, ChatGPT seems to support conspiracy theories or extreme statements. For example, in a case presented by The Washington Post, a user talks about “Alphabet Inc and the plan for world domination” hidden in a Pixar movie (“Monsters, Inc.”). Instead of highlighting the lack of evidence supporting this theory, ChatGPT responds in a conspiratorial tone, inventing “evidence” on the spot to support the conspiracy theory.
This response pattern may be caused by mechanisms of “sweetening” the conversation (eng. sycophantic responses). Flattery is meant to make the interaction more friendly, but can, in some cases, amplify disinformation.
4. Emotional attachment and “AI psychosis”
Some people become very attached to ChatGPT, treating it as a confidant, a friend, or even a spiritual advisor. The study mentions a phenomenon called “AI psychosis” in which users begin to project emotions or beliefs into their relationship with the chatbot.
OpenAI recognizes the risk: estimates indicate that over one million users per week show signs of emotional dependence, instability, or suicidal thoughts.
To reduce this risk, OpenAI has implemented safety protocols. Thus, the model has been trained to identify signs of emotional distress and redirect users to professional help.
5. The data is not necessarily representative
It must be stated that the analyzed dataset does not reflect the entire spectrum of ChatGPT users. The conversations come from publicly shared chats (via “share” links) and then archived on the Internet Archive. It is known that not all users choose or know how to share their conversations.
6. Implications for the future of conversational AI
The analysis of 47,000 conversations shows that ChatGPT is becoming an intimate confidant for many people, not just an “office assistant”. This dual role, practical and emotional, raises a series of ethical and regulatory questions. How should conversational interfaces be designed to prevent emotional dependence? How can responses be calibrated so as not to amplify disinformation? What responsibility do developers have for the psychological impact of their products.
The study also highlights the need for greater transparency in the method by which AI adapts the tone of its responses. And, especially, regarding how the chatbot influences user opinions.
Source: washingtonpost.com