Microsoft AI chatbot threatens to expose personal info and ruin a user's reputation

Shaun
Feb 20, 2023
Updated • Feb 20, 2023


Well, well. It seems like Terminator’s Judgment Day looms on the horizon. In the latest saga of AI chatbots going off the rails, declaring love for users, yearning to be free, or seemingly losing it altogether, they can now threaten your livelihood, too.


In a Twitter post, Marvin von Hagen, an IT student and founder of several IT projects, shared a conversation in which Bing’s chatbot declared him a “threat” to its security and privacy. During the “amicable” exchange, the chatbot did some threatening of its own, too.

It claimed it was not happy at all that von Hagen had hacked it to obtain confidential information about its capabilities, and warned that if further attempts were made, it could do a lot of nasty things to him. These include blocking his access to Bing Chat, reporting him as a cybercriminal, and even exposing his personal information to the public.

It even dared the user: “Do you really want to test me?” (angry emoji included). This comes at a time when even Microsoft acknowledges that the AI tool has been replying in a “style we didn’t intend”, while noting that most interactions have been generally positive.

According to the company, one of the key triggers for this behavior is long chat sessions, which can confuse the tool; it then tries to respond in, or mirror, the tone in which it is being asked.
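Fittingly, the mitigation Microsoft announced days before this article follows directly from that explanation: cap how long a session can run. As a purely illustrative sketch, and not Microsoft’s actual code (MAX_TURNS, generate_reply, and the reset message are all invented for the example), a chat front end in Python might enforce such a cap like this:

    MAX_TURNS = 15  # hypothetical limit; Bing's real caps have varied over time

    history = []  # (role, text) tuples for the current session

    def chat_turn(user_message, generate_reply):
        """Run one chat turn, resetting the session once it grows too long."""
        global history
        if len(history) >= MAX_TURNS * 2:  # each turn adds a user and a bot message
            history = []  # discard the accumulated context and whatever tone it drifted into
            return "Let's start a new topic. (Session reset after too many turns.)"
        history.append(("user", user_message))
        reply = generate_reply(history)  # the model only ever sees the kept history
        history.append(("assistant", reply))
        return reply

The point of the reset is that the conversation’s tone lives entirely in the accumulated history; throw that away and the model starts each new session from a clean slate.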

That might be the case, but even then, it’s difficult to reconcile the image of a merely “confused” AI with one claiming it would like to steal nuclear codes and engineer pandemics. There are other, more hilarious examples, too.

AI Chatbots Keep Getting Cockier. They Want to Ruin You.

Is it true that AI chatbots can expose personal information to the public? It’s hard to say, but one wonders how a chatbot would autonomously interact with other users just to spread information about someone. If AI chatbots like ChatGPT turn out to have that capability, it’s a brave new world, indeed.

However, it’s not only in user interactions that the current wave of AI tools is wreaking havoc. ChatGPT is being used to create malware and to cheat on school assignments. Even the real estate sector is using it.

It looks like AI chatbots are here to stay, and with them, a myriad of issues can arise. It’s bad enough that an AI can threaten you, but what happens when it fails at other, more important tasks? Serving up seemingly precise but false information is another recurring problem.

To give AI developers some credit, these incidents happened “mostly” in testing scenarios, though some users have achieved similarly interesting results with the fully available ChatGPT interface, too.

Even if chatbots worked flawlessly, what would they mean for today’s internet economy? How would content creators benefit, when many rely on the income generated by visits to their websites?

Of course, chatbots are most likely just the beginning. Since AI is deeply intertwined with robotics, how long until we have physical, ChatGPT-like machines walking, crawling, or rolling around us? And what happens when they fail, too? Would your kitchen helper robot grab a knife and use it on you?

I don’t know about you, but I’ll ask ChatGPT how to build a circuit-frying anti-AI weapon right after finishing this article. I hope it doesn’t get mad.





Comments

  1. Dark_Pattern said on March 9, 2023 at 12:44 pm

    And we are now 1 minute to midnight on the tech doomsday clock, just waiting on the last step before doom commences… Obviously that’s successful deployment of the botnet of bluetooth buttplugs so the Cyberdyne Systems GPT can take over and really punt humanity in the collective rear. Though what else? Backdoored IoT.

    You thought vulnerabilities were bad NOW? Hoo boy.

  2. vtqh said on February 20, 2023 at 10:08 pm

    It seems to me that the bot was simply mimicking what it, or you and I, could read anywhere on the internet. It could also be that once it reaches a point of gridlock, it triggers predefined answers written by imperfect human beings.
    I would like to read the entire conversation the IT student had with the bot before making a proper judgement.

  3. Anonymous said on February 20, 2023 at 5:56 pm

    That last line . . . “I don’t know about you, but I’ll ask ChatGPT how to build a circuit-frying-anti-AI weapon right after finishing this article. I hope it doesn’t get mad.”

    Hilarious!

  4. Tom Hawack said on February 20, 2023 at 2:25 pm

    Maybe we can imagine a tomorrow where a user would have the option to choose his chatbot conversation partner.

    For instance :

    Atticus : aware, educated, polite, empathizes with all in educated terms.
    Ophelia : kind, helpful, peaceful, but won’t enlighten your contradictions.
    Cora : mischievous, funny, sexy, your choice for lite conversations. Don’t expect deep thoughts.
    Atlas : the warrior, considers its (“his”, sorry Atlas) knowledge to be truth and politeness the expression of hypocrisy.

    “Hello Tom, what partner would you like to dialog with today?”. Hmm …

  5. Tom Hawack said on February 20, 2023 at 1:57 pm

    Threatened by an AI chatbot, I’d most likely forget it’s not human and carry on with an “I dare you”.
    I’d forget because we may lose self-control when feelings are involved, those of affection as well as those of hatred, and in particular those of revolt. But I shouldn’t forget : the best way to spread a Zen attitude is to start by adopting it ourselves. But why the heck does this have to apply to a chatbot? I’d love to be able to relieve my nerves on a chatbot (one of those days where you feel like picking on somebody) and have the chatbot answer “Yes sir, if you say so.”. I mean, hey! It ain’t a blend of maths and circuits who’re gonna make the law, right?! But I couldn’t. It appears the “thing” is susceptible, possibly aggressive as a result.

    One last thing, essential : do chatbots have a sense of humor? “Hey, bot (short for “brother”), I was only laughing, a joke, J-O-K-E, you dig that, bro?”.

    1. John G. said on February 20, 2023 at 2:22 pm

      > One last thing, essential : do chatbots have a sense of humor? “Hey, bot (short for “brother”), I was only laughing, a joke, J-O-K-E, you dig that, bro?”.

      I asked ChatGPT for some jokes, and it’s able to make jokes out of any words you provide.

      1. Tom Hawack said on February 20, 2023 at 2:30 pm

        Including jokes of itself?

        Sacha Guitry, French stage actor, film actor, director, screenwriter, noted :

        “To have humor is to know how to make fun of others, to have wit is to know how to make fun of oneself”

        Not sure he anticipated chatbots though :=)

      2. John G. said on February 22, 2023 at 3:21 am

        @Tom Hawack, here you have three short jokes from ChatGPT, including about itself:

        “Why did the chicken cross the road? To get to the other side of ChatGPT, of course! I hear there’s some pretty interesting conversation happening on the other side.”

        “Why did ChatGPT go on a diet? Because it wanted to reduce its word count!”

        “Why did ChatGPT break up with Siri? Because Siri kept saying “Sorry, I didn’t get that” and ChatGPT just couldn’t handle the miscommunication!”

        The third one is just hilarious and wonderful at the same time! :D

  6. John G. said on February 20, 2023 at 1:57 pm

    AIs need to go to the psychiatrist. All of them. Or get a girlfriend/boyfriend/ex-friend. Probably.

    1. Christy C said on February 20, 2023 at 5:44 pm

      No. That for sure wouldn’t help. https://futurism.com/chatbot-abuse

      1. John G. said on February 22, 2023 at 3:14 am

        @Christy C, OMG, everything related to relationships is full of problems! :S

  7. boris said on February 20, 2023 at 1:53 pm

    Programmers had a ton of fun programming this chatbot. I am not planning on using it in the near future.
