Watch out for these ChatGPT scams

Eray Eliaçik
Jul 29, 2023
Updated • Jul 27, 2023
Misc

ChatGPT scams are, unfortunately, on the rise. In a world driven by technological advancements, artificial intelligence has emerged as a groundbreaking innovation, transforming how we live, work, and interact. Among the numerous AI breakthroughs, language models like ChatGPT have opened up a realm of possibilities in natural language processing. While these models have been predominantly utilized for positive purposes such as aiding businesses, enhancing customer support, and bolstering creativity, there exists a darker side to their potential.

As with any powerful tool, there will always be those who seek to exploit it for nefarious purposes, and ChatGPT is no exception. The rise of scammers and malicious actors capitalizing on the AI revolution to deceive and manipulate unsuspecting individuals is a concerning phenomenon that demands our attention. In this blog post, we delve into the underbelly of how people are exploiting ChatGPT and other AI language models for scamming, shedding light on the deceptive tactics they employ, the consequences faced by victims, and the ongoing efforts to combat such abuse.

It is crucial to recognize that while AI offers immense promise for positive change, it also presents unique challenges and ethical considerations that must be addressed to ensure a safe and trustworthy digital environment.

Join us as we explore the dark side of ChatGPT and gain insights into the measures being taken to strike a balance between innovation and safeguarding users from malicious AI misuse.

Popular ChatGPT scams

The world of scams and fraud has evolved significantly with the integration of AI, particularly language models like ChatGPT. These are the most common ways it is being abused:

ChatGPT-generated email scam

Emails have long been used to spread malware, extort victims, and steal sensitive information, earning them a bad reputation as a scamming medium. Email scammers are now using ChatGPT in an attempt to trick unsuspecting recipients.

In April 2023, a number of news organizations raised the alarm about an increase in ChatGPT-generated phishing emails. Criminals increasingly use chatbots to craft malicious phishing emails because of their capacity to produce convincing text on demand.

Say a cybercriminal wants to target English speakers but isn't fluent in the language. With ChatGPT's help, they can produce flawless phishing emails free of typos and grammatical mistakes. These expertly crafted messages are more likely to fool their targets because of the air of legitimacy they exude.

In short, fraudsters can save time and effort by using ChatGPT to write phishing emails, which could increase the frequency of phishing attacks.
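
As a defensive illustration, here is a minimal Python sketch (not a production filter) that flags one classic phishing tell that survives even flawless AI-written prose: a link whose visible text shows one domain while the underlying href points to another. The sample email snippet and domains below are made up for the example.

```python
# Minimal sketch: flag links in an HTML email body whose visible text looks like a
# URL/domain but whose actual href points somewhere else - a common phishing sign.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None      # href of the <a> tag we are currently inside, if any
        self._text = []        # visible text collected inside that tag
        self.suspicious = []   # (shown_text, real_href) pairs that do not match

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            shown_host = urlparse(shown if "://" in shown else "http://" + shown).hostname or ""
            real_host = urlparse(self._href).hostname or ""
            # Only compare when the visible text itself looks like a domain.
            if "." in shown_host and shown_host != real_host:
                self.suspicious.append((shown, self._href))
            self._href = None

checker = LinkChecker()
checker.feed('<p>Verify your account at <a href="http://chatgpt-login.example.net">chat.openai.com</a></p>')
print(checker.suspicious)  # [('chat.openai.com', 'http://chatgpt-login.example.net')]
```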

Malicious ChatGPT browser extensions

While browser add-ons are incredibly useful, criminals can also exploit them as Trojan horses to install malware and steal sensitive information. This con works just as well with ChatGPT branding as it does anywhere else.

There are trustworthy ChatGPT-specific add-ons (like Merlin and Improved ChatGPT) in your browser's extension store, but that is not the case for all of them. In March 2023, for instance, a fake ChatGPT extension known as "Chat GPT for Google" spread quickly. As it went viral, this malicious extension stole information from thousands of Facebook users.

The extension's name was chosen deliberately to cause confusion, as it sounds very similar to the legitimate ChatGPT for Google extension. Many people installed the add-on without verifying its authenticity, believing it to be safe. In reality, the add-on was a covert channel for planting backdoors in Facebook accounts and gaining unauthorized admin access.
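
One low-effort check before trusting any extension is to look at the permissions it requests. The sketch below is a minimal example, assuming you have the extension unpacked in a local folder (the folder name here is hypothetical); it simply prints the permission lists from the extension's manifest.json, where broad entries such as cookies or access to all sites deserve extra scrutiny.

```python
# Minimal sketch: read an unpacked browser extension's manifest.json and print what it asks for.
import json
from pathlib import Path

def list_permissions(extension_dir: str) -> None:
    manifest = json.loads(Path(extension_dir, "manifest.json").read_text(encoding="utf-8"))
    print("Name:            ", manifest.get("name"))
    print("Permissions:     ", manifest.get("permissions", []))       # e.g. "cookies", "tabs"
    print("Host permissions:", manifest.get("host_permissions", []))  # e.g. "<all_urls>"

# "./chat-gpt-for-google-unpacked" is a hypothetical folder name used for illustration.
list_permissions("./chat-gpt-for-google-unpacked")
```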

Fake third-party ChatGPT apps

Cybercriminals frequently disguise malicious software as legitimate-looking downloads carrying trusted brand names like ChatGPT. The idea is not new: such programs have long been used to spread malware, steal information, and spy on users, and they are now riding on ChatGPT's popularity to spread themselves.

In February 2023, it was discovered that attackers had created a fake ChatGPT program that could spread malware on Windows and Android devices. Bleeping Computer reported that malicious actors lured users into downloading a supposedly free version of the normally paid ChatGPT Plus. The cybercriminals' true goal was to steal passwords or drop malicious software.

To avoid downloading dangerous software, research a program's history before installing it. If an app's security can't be verified, it doesn't matter how enticing it looks. Download apps only from reliable app stores and always check user reviews first.
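
Beyond reading reviews, a simple technical safeguard is to verify a download's checksum when the developer publishes one. The snippet below is a generic sketch; the file name and expected hash are placeholders, not values published by OpenAI or any real vendor.

```python
# Minimal sketch: compare a downloaded installer's SHA-256 hash against the value
# the software's developer publishes on its official site.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large installers don't have to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0123abcd..."                  # placeholder: hash published by the vendor
actual = sha256_of("ChatGPT-setup.exe")   # placeholder: the file you downloaded
print("OK" if actual == expected else "Hash mismatch - do not run this installer")
```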

Malware generated by ChatGPT

In recent years, concerns that AI could make it easier for criminals to run scams and attacks online have generated a lot of discussion about the intersection of AI and cybercrime.

Since ChatGPT can be used to create malware, these worries are not unwarranted. Malicious code was being written with this widely used tool shortly after its release. In early 2023, a post on a hacking forum referenced Python-based malware that was reportedly written using ChatGPT.

The malware wasn't particularly sophisticated, and nothing as harmful as ransomware has yet been identified as a ChatGPT product. Still, ChatGPT's ability to craft even simplistic malware opens a door for people who want to get into cybercrime but lack significant technical expertise, and that could become a major problem in the not-too-distant future.

Phishing sites

Phishing attacks frequently take the shape of fake websites designed to steal sensitive information, and ChatGPT users are a target too. Say you've found what appears to be the real ChatGPT homepage and you enter your name, email address, and other details to create an account. If the website is malicious, the information you enter can be stolen and misused.

Alternatively, a scammer posing as a ChatGPT employee may email you claiming that your account has to be verified, directing you from the email to a website to complete the supposed verification.

The trick with this ChatGPT scam is that the linked page captures whatever information you provide, including your password. A compromised ChatGPT account gives hackers access to your private prompt history, account details, and other data. Knowing how to spot phishing schemes is crucial for protecting yourself from cybercriminals.
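
A habit that defeats most lookalike login pages is to check the exact hostname in the address bar before typing anything. The sketch below shows the idea in Python with a small allowlist; the domains listed are illustrative of OpenAI's official hosts, so confirm the current official addresses yourself rather than trusting this list.

```python
# Minimal sketch: accept only an exact, known-good hostname before entering credentials.
from urllib.parse import urlparse

OFFICIAL_HOSTS = {"chat.openai.com", "chatgpt.com", "openai.com", "auth.openai.com"}  # illustrative

def looks_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_HOSTS

print(looks_official("https://chat.openai.com/auth/login"))          # True
print(looks_official("https://chat-gpt0penai.com/auth/login"))       # False: lookalike domain
print(looks_official("https://chat.openai.com.evil.example/login"))  # False: subdomain trick
```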

The ChatGPT subscription scam

Unfortunately, ChatGPT is not immune to the scammers that plague other subscription-based platforms. Fake ChatGPT membership offers are becoming an increasingly common method of fraud. You might come across a tempting online deal for a free or heavily discounted subscription to ChatGPT Plus.

The advertisement or link takes you to a page that looks like OpenAI's real homepage but is actually a fake. To claim the "special deal," you must fill out a form with your name, address, and credit card information; as soon as you do, malicious actors have access to your private information.

Fake social media messages

There is a plethora of possible ChatGPT scams on social networking sites as well. Inauthentic accounts may contact you through direct messages or post under official ChatGPT accounts, sending you a link to click in order to enter a sweepstakes, confirm your account information, or update your profile.

Unfortunately, these messages are only a trap designed to steal your information or infect your device with malware. Be sure to verify the sender's identity before acting on any of them.

 


Comments

  1. Seeprime said on September 8, 2023 at 4:12 pm

    Missing from the “story”: Ukraine’s agreement to never use Starlink for military purposes. This is why.

    Ghacks quality is AI driven and very poor these days since AI is really artificial stupidity.

    1. Karl said on September 12, 2023 at 9:10 pm

      “Elon Musk biographer Walter Isaacson forced to ‘clarify’ book’s account of Starlink incident in Ukraine War

      “To clarify on the Starlink issue: the Ukrainians THOUGHT coverage was enabled all the way to Crimea, but it was not. They asked Musk to enable it for their drone sub attack on the Russian fleet. Musk did not enable it, because he thought, probably correctly, that would cause a major war.”
      https://nypost.com/2023/09/11/elon-musk-biographer-walter-isaacson-corrects-detail-about-starlink-in-ukraine/

      1. Karl said on September 14, 2023 at 5:58 pm

        I posted above comment to:
        https://www.ghacks.net/2023/09/08/elon-musk-turned-off-starlink-during-ukranian-offence/

        Not to the following article about Geforce where I currently also can see it published:
        https://www.ghacks.net/2023/08/29/how-to-fix-geforce-experience-error-code-0x0003/

  2. Anonymous said on September 11, 2023 at 10:09 pm

    Well, using Brave, I can see Llama 2 being decent, but it is still not great?
    All these AI stuff seems more like a ‘toy’ than anything special, I mean, it is good for some stuff like translations or asking quick questions but not for asking anything important.

    The problem is Brave made it mostly for summarizing websites and all that, but all these Big tech controlled stuff, won’t summarize articles it doesn’t agree with, so it is also useless in many situations where you just want it to give you a quick summarization, and then it starts throwing you little ‘speeches’ about how it doesn’t agree with it and then it never summarizes anything, but give you all the 30 paragraphs reasons why the article is wrong, like if I am asking it what it thinks.

    SO all this AI is mostly a toy, but Facebook with all the power they have will be able to get so much data from people, it can ‘train’ or better say, write algorithms that will get better with time.

    But It is not intelligence, it is really not intelligence all these AI technology.

  3. Tom Hawack said on September 14, 2023 at 2:11 pm

    Article Title: Tech leaders meet to discuss regulation of AI
    Article URL: [https://www.ghacks.net/2023/09/14/artificial-intelligence-regulation-tech-leaders/]

    The eternal problematic of regulating, here applied to AI. Should regulations (interventionism) have interfered in the course of mankind ever since Adam and Eve where would we be now? Should spirituality, morality, ethics never have interfered where would we be now? I truly have always believed that the only possible consensus between ethics and freedom is that of individuals’ own consciousness.

    Off-topic : Musk’s beard looks like a wound, AI-Human hand-shake is a quite nice pic :)

    1. Karl said on September 14, 2023 at 5:55 pm

      Haha, oh dear, Tom.
      I thought that the comments system issue where comments shows up under a totally different article was fixed. But seeing your comment here, the “error” is clearly still active. Hopefully it is sorted as soon as possible.

      1. Tom Hawack said on September 14, 2023 at 6:40 pm

        Article Title: Tech leaders meet to discuss regulation of AI
        Article URL: [https://www.ghacks.net/2023/09/14/artificial-intelligence-regulation-tech-leaders/]

        Hi Karl :) Well, let’s remain positive and see the good sides : one’s comment appearing within different articles (the one it was written form and for, another unrelated one) brings ubiquity to that comment : say it once and it’s published twice, double your pleasure and double your fun (“with double-mint, double-mint gum” and old ad!). Let’s forget the complications and inherited misunderstandings it leads to. Not sure the fun is worth the complications though. Which is why, with a few others here, I include Article Title & URL with comment, to ease a bit the pain.

        This said, I’m trying to find a logic key which would explain the mic-mac. One thing is sure : comments appearing twice keep the same comment number.

        For instance my comment to which you replied just above is originally :

        [https://www.ghacks.net/2023/09/14/artificial-intelligence-regulation-tech-leaders/#comment-4573676]

        It then got duplicated to :

        [https://www.ghacks.net/2023/08/29/how-to-fix-geforce-experience-error-code-0x0003/#comment-4573676]

        Same comment number, which let’s me imagine comments are defined by their number as before but now dissociated in a way from their full path : that’s where something is broken, as i see it.

        First amused me, then bothered, annoyed (I took some holidays to lower the pressure), then triggered curiosity.
        I’m putting our best detectives on the affair, stay tuned.

      2. Karl said on September 16, 2023 at 8:58 am

        Hehe, yes indeed, staying positive is what we should do. Good comes for those who wait, as the old saying goes. Hopefully true for this as well.

        Interesting that the comments number stays the same, I noted that one thing is added to the duplicated comment in the URL, an error code, the following: “error-code-0x0003”.

        Not useful for us, but hopefully for the developers (if there are any?), that perhaps will be able to sort this comments error out. Or our detectives, I hope they work hard on this as we speak ;).

        Cheers and have a great weekend!

      3. Karl said on September 16, 2023 at 9:18 am

        Whoops, my bad. I just now realized that the error I saw in your example URL (error-code-0x0003) was part of the linked article title and generated by Geforce! Oh dear! Why did I try to make it more confusing than it already is lol!

        Original comment:
        https://www.ghacks.net/2023/09/08/elon-musk-turned-off-starlink-during-ukranian-offence/#comment-4573788

        Duplicate:
        https://www.ghacks.net/2023/09/14/iphone-12-radiation-levels-are-too-high/#comment-4573788

      4. Tom Hawack said on September 16, 2023 at 9:20 am

        Article Title: Tech leaders meet to discuss regulation of AI
        Article URL: [https://www.ghacks.net/2023/09/14/artificial-intelligence-regulation-tech-leaders/]

        @Karl, you write,

        “I noted that one thing is added to the duplicated comment in the URL, an error code, the following: “error-code-0x0003”.”

        I haven’t noticed that up to now but indeed brings an element to those who are actually trying to resolve the issue.
        I do hope that Softonic engineers are working on fixing this issue, which may be more complicated than we can imagine. Anything to do with databases can become a nightmare, especially when the database remains accessed while being repaired, so to say.

        P.S. My comment about remaining positive was, in this context, sarcastic. Your literal interpretation could mean you are, factually, more inclined to positiveness than I am myself : maybe a lesson of life for me :)

        Have a nice, happy, sunny weekend as well :)

      5. 💾 said on September 16, 2023 at 12:35 pm

        Correct: AI is certainly overhyped, it’s also advertised by some shady individuals. It’s can also be misused to write poor quality articles or fake your homework.

        https://wordpress.com/support/post-vs-page/
        https://wordpress.com/support/restore/

        16 September 2023, this website is still experiencing issues with posts erroneously appearing in the wrong threads. There are even duplicates of the exact same post ID within the same page in some places.

      6. 💾 said on September 16, 2023 at 8:41 pm

        Clerical error “[It] can also be misused …” you just can’t get the staff nowadays.

        Obviously [#comment-4573795] was originally posted within [/2023/09/14/artificial-intelligence-regulation-tech-leaders/]. However, it has appeared misplaced within several threads.

        Including the following:
        [/2023/09/15/redmi-note-13-specs-release-date-and-more/]
        [/2023/08/29/how-to-fix-geforce-experience-error-code-0x0003]

  4. Anonymous said on September 14, 2023 at 3:39 pm

    “How much radiation is dangerous?
    Ionizing radiation, such as X-rays and gamma rays, is more energetic and potentially harmful. Exposure to doses greater than 1,000 millisieverts (mSv) in a short period can increase the risk of immediate health effects.
    Above about 100 mSv, the risk of long-term health effects, such as cancer, increases with the dose.”

    This ban is about NON-ionizing radiation limits, because there is too much radio wave power from the iphone. This has nothing to do with the much more dangerous ionizing radiations like X-rays, that are obviously not emitted at all by mobile phones. I invite you to correct your article.

  5. Anonymous said on September 17, 2023 at 5:03 pm

    “Aaro.mil makes history as the first official UFO website”

    I wonder if it’s just smelly crowdsourcing for the spotting of chinese balloons or whatever paranoia they’re trying to instigate, or if they are also intentionally trying to look stupid enough to look for alien spaceships, for whatever reason. Maybe trying to look cute, instead of among the worst butchers of history ?

  6. Anonymous said on September 17, 2023 at 9:12 pm

    “The tech titan’s defense”
    “Whether he provides a clear explanation or justifies his actions”
    “the moral compass”

    You take it for granted that this company should agree being a military communications provider on a war zone, and so directly so that his network would be used to control armed drones charged with explosives rushing to their targets.

    You don’t need to repeat here everything you read in the mainstream press without thinking twice about it. You’re not just pointing interestingly that his company is more involved in the war that one may think at first and that this power is worrying, you’re also declaring your own support for a side in an imperialist killfest, blaming him for not participating enough in the bloodshed.

    Now your article is unclear on how this company could be aware that its network is used for such military actions at a given time, which has implications of its own.

    Reading other sources on that quickly, it seems that the company was: explicitly asked ; to extend its network geographically ; for a military attack ; at a time when there was no war but with the purpose of triggering it, if I understood well. You have to be joking if you’re crying about that not happening at that time. But today you have your war, be happy.
