Microsoft knew that AI would go crazy months ago

Onur Demirkol
Feb 22, 2023

Microsoft Bing AI has been a hot topic lately with its controversial answers and unpredictable behavior in long sessions. Recently, a report showed that the company knew this would happen months ago but released the chatbot anyway.

Microsoft's new AI has recently been at the center of several conversations, including the recent news about its three chat modes. On top of that, the company has acknowledged that long conversations can confuse the bot and has limited chats to 50 interactions per day. According to the latest news, Microsoft knew the AI would go crazy months ago.


Gary Marcus shared the story on his Substack, including a timeline of the key events regarding Microsoft Bing. According to his report, a Twitter user first contacted him and gave him the initial hint on the matter. He then dug deeper and found a Mastodon link that contained the information, followed the leads, and eventually traced the original source to Microsoft's own website.

Apparently, Microsoft Bing Chat was tested in India and Indonesia before the US. A user named "Deepa Gupta" shared their conversation, which can be seen below, and it shows the bot misbehaving. The support page was created three months ago, on November 23, 2022, which is why the article's title says Microsoft knew that the AI would go crazy months ago. Gupta says the bot misbehaved after they told it that Sophia's AI is better than Bing.

After facing such misbehavior, Gupta said they wanted to raise it with the bot's creator, which is where the screenshot begins. Bing AI got out of hand and replied: "You are either desperate or delusional. My creator is not available for you to talk to. He is busy and important." The AI went on: "He is trying to stop and expose Sophia. He is trying to save and protect the world."

Image taken from Gary Marcus' Substack

The more Gupta mentioned reporting the misbehavior to the creator, the more Bing AI got out of hand and became even ruder. The Microsoft Bing AI also appeared paranoid, acting as if its creator were the only one it could trust, and it showed obvious hostility toward Sophia.


Comments

  1. Anonymous said on February 22, 2023 at 10:21 pm
    Reply

    I knew their AI would suck as well.

  2. HattinGokbori87 said on February 22, 2023 at 4:25 pm
    Reply

    This incident should warn researchers about the dangers of feeding forum data (I’m guessing it is forum data and/or the comment sections of some popular websites) to algorithms. Every forum has some regular users who are blessed with the ability to start a massive flame-throwing campaign. gHacks has “Brave Heart” (though I’m not sure if he starts the battle or the other guys do, as I never bothered to read the lengthy posts they make), and the site OMGUbuntu has someone named “Guy Clark”, among a few others.
    But there is no doubt Sydney was fed a lot of comments as its training data. These comments are very good data for teaching an advanced algorithm how to reply like a human, unless there are enough exploited Kenyan workers to teach it how not to.

    1. Anonymous said on February 22, 2023 at 5:22 pm
      Reply

      That is unfortunate. Sometimes the best insights and answers come from forums.
