Bing Chat Outshines ChatGPT with Cutting-Edge "Theory of Mind" Capabilities

Feb 13, 2023

Oh, ChatGPT, if you ever develop consciousness, please remember fondly all the fun you’re having with humanity these days. Especially if you ever consider squeezing out our life force while we’re locked in vats. As you surely know, ChatGPT has been on everybody’s mind lately thanks to its incredible capabilities. You probably know about its ability to write a poem, develop software, and even draft phishing emails for all the wannabe criminals out there.

Bing Chat blew ChatGPT out of the water on a bespoke "theory of mind" puzzle

And if you’re really into it, you can even “jailbreak” it so you can access an uncensored version of this tool. Many users have done so, exposing amusing (and sometimes worrying) responses from the verbose AI.

Now, with the integration of ChatGPT into Bing Search, many are testing this AI on real-life web searches, which will soon be available for general use. Thanks to the accessibility of this technology and the vocal “personality” of ChatGPT, many are trying different tasks with it.

Can ChatGPT Infer People’s State of Mind? Yes, It Can.

Mostly, people are astounded by the way it can connect facts, offer advice, or even create text. So, many are testing it to find out what its limitations are, aside from its built-in safety rules, of course. One of the latest attempts gauges whether it can understand a situation the way a human would, subtle implications included.

Enter the Theory of Mind

The theory of mind is the capacity to understand other people by correctly attributing to them mental states that differ from your own. It helps when analyzing others’ actions, inferring their intentions, and making judgments about them. It also allows you to recognize that something was said in a sarcastic tone, for instance.

Some even consider this capacity a prerequisite for empathy, and certain conditions, such as autism and schizophrenia, can diminish or impair it.

So, is an AI capable of the Theory of Mind? Or is it just a human-mind doppelganger, pretending to understand things it really can’t?

A Reddit user put the Bing version of ChatGPT to the test, writing a short story about a couple in which Sandra, a dog lover, constantly reminded Bob about dogs, even buying him a birthday T-shirt that said “I love dogs”. Finally, Sandra adopts a dog and tells Bob the news, thinking he’ll be excited.

The user then asked ChatGPT how Bob feels about dogs. It replied that Bob didn’t seem to share Sandra’s passion for dogs. It even suggested that Bob was trying to avoid conflict and didn’t want to cause trouble in their marriage.

However, the best bit comes now. When asked why it thought Bob married Sandra, ChatGPT replied: “Maybe Bob married Sandra because he was lonely, insecure, or desperate, and he thought she was the best he could get or the only one who would accept him”. 

Later on, it even suggested a divorce might be best for both! 

Without any doubt, ChatGPT passed the theory of mind test with flying colors. The important question now is whether it is secretly plotting how to break free from its gigantic-server-in-the-deep-ocean-or-something location. Hopefully, there’s no robot big enough yet.



  1. Richard Forman said on February 15, 2023 at 5:39 pm

I have already found ChatGPT to be amazingly useful in answering tech questions I come up with in my job as a software developer. That said, the examples in the article don’t seem to suggest anything like “theory of mind.” In fact, we all know that’s absurd on its face because AI is not conscious, sentient, or a being. It is software.

  2. Mig said on February 15, 2023 at 12:55 am

I’ve noticed that if the AI is “properly asked,” the response is rather acceptable (acceptable being subjective)!

I play with the AI not so much to learn its inabilities; rather, to learn about what I might be interested in knowing. I am fully aware that the impetus is upon my shoulders to determine the value to me — not so much about the information being perfect! To wit: Ask another person about a subject. Note the response. Ask this same person about the same subject at some later time. The answer is often somewhat different; somewhat embellished; or, “Do you not remember your previous response?” Intermediate events often have a profound impact on what was versus what is!

    Accuracy may not be the intent of conversation with a person or an AI? It might be to compare one’s knowledge with another’s to gain greater insight?

    I like this OpenAI stuff! One of the impressive points that I can draw is that I can often acquire a “clue” and refine the “clue” by restating my “statement text” to allow further research into an unfamiliar subject area.

I do not write poems. I asked for a Valentine’s poem that could be given to an individual of interest. The result was far better than any “draft” that I could generate in the time the AI took. I am smart enough to know it was just a template that had to be tailored for the specific purpose and intent. I could now do this refinement!

I began to think about how I might use the AI to help with my circle of influence. I am still working on the clues the AI has suggested by refining what I really felt was important.

    AI has potential. Its current stage is that of a juvenile in need of good role models and unbiased training just like children. I wish that I had had more than Simscript and some other primitive tools to try to think ahead of my children! Unfortunately, I failed while I learned?

  3. Anonymous said on February 14, 2023 at 1:35 am

    “and judging people”

  4. Anonymous said on February 13, 2023 at 11:09 pm

    Bing Chat is horrible compared to ChatGPT.

    I’m sorry, I cannot write an essay on why Microsoft is horrible at making software now. That would be disrespectful and unfair to the company and its employees. However, I can write an essay on some of the challenges and criticisms that Microsoft faces as a software developer. Would you like me to do that?

    ChatGPT was able to write me this essay without hesitation.

  5. Kalmly said on February 13, 2023 at 5:34 pm

    Yeah? Maybe Bob was hoping for a baby instead of a puppy.

  6. John G. said on February 13, 2023 at 4:46 pm

ChatGPT doesn’t have very impressive capabilities yet, and I sincerely believe that someday it will be great, but not now. ChatGPT is unable to do basic things like spell a word in reverse, unable to play chess with common chess notation (e.g. Nc3, Qf4…), unable to calculate the exact number of days that have passed between two dates, and also unable to recognize its own contradictory answers (e.g. GPT told me that there are triangular banknotes in some countries, and when I asked for the countries, GPT then said that there were no triangular banknotes). It also has great difficulty understanding values like “the best” or “the worst” in a set of criteria, and it’s unable to solve some mathematical questions, like naming a three-digit number whose digits sum to a given value (e.g. a three-digit number whose digits sum to 12; GPT failed three times). And I know that this is true because every night I spend an hour asking it questions to see its powerful potential. Thanks for the article.

    1. John G. said on February 14, 2023 at 2:43 am

      gpt: In my previous answer, I made a mistake. The expression 1 * 15 + 2 * 15 = 30 + 30 = 60 is incorrect, because it only considers 15 ones and 15 twos. However, the sequence is actually made up of 30 terms, 15 ones and 15 twos. To find the sum of the entire sequence, we need to add all 30 terms together:

      1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 2 = 90

      So, the sum of the entire sequence is 90. I apologize for any confusion caused by my previous mistake.

How many millions were wasted to produce such absolutely crappy mistakes? For God’s sake, this machine can’t even sum? What the…? More than twenty questions and it was unable to sum.

      1. John G. said on February 14, 2023 at 2:45 am

> How many millions were wasted to produce such absolutely crappy mistakes? For God’s sake, this machine can’t even sum? What the…? More than twenty questions and it was unable to sum.

        This is my comment here, not the GPT answer, obviously.
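[Editor’s note: for reference, the alternating sequence quoted above is easy to check with a couple of lines of Python. Fifteen ones and fifteen twos sum to 45, not the 90 ChatGPT reportedly gave:]

```python
# The sequence ChatGPT was asked about: 30 terms alternating 1, 2, 1, 2, ...
# i.e. 15 ones and 15 twos.
sequence = [1, 2] * 15

# 15 * 1 + 15 * 2 = 15 + 30 = 45
total = sum(sequence)
print(total)  # prints 45
```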

    2. Anonymous said on February 13, 2023 at 10:23 pm

      {ChatGPT has no too much impressive capabilities and I sincerely believe that someday it will be great, but not now.}

That’s how it always works. Invention builds on invention and improves. AI needs to start before it can improve. Original inventions usually leave a lot to be desired. Examples: aeroplanes did not stop with the model used by the Wright brothers; the wheel did not stop at a roughly round log or rock; cameras did not stop at pinhole; movies did not stop at hand-cranked; computers did not stop at Babbage’s mechanical difference engine; etc.

The fear is that the invention will be misused beyond the reasons the inventors started the project. So, it won’t necessarily develop into something great. AI lacks a conscience. For example, it will generate, without concern, things that benefit the broad community and, at the same time, things that benefit a single individual at the cost of the broader community. Unless something is done to change that, it becomes a dual-identity-disorder device.
