
Security researchers confirm hackers are using ChatGPT to create malware

Patrick Devaney
Jan 13, 2023

It hasn’t taken long. A little over a month and a half after ChatGPT launched on November 30th last year, cybersecurity researchers are reporting malware and ransomware that has either been written from scratch or augmented with the chatbot’s help.


The famous chatbot can produce text responses to human prompts, including prompts that ask it to write code. This means that, alongside its more traditional literary skills, ChatGPT is also a somewhat convincing coder. Convincing is the right word, however, because the large language model has a problem with accuracy. Drawing on the vast amounts of data used to train it, it can produce convincing text and convincing code that looks the part. But it has no actual conception of whether the text is factually right or the code is correct, which means users can’t know either.

Plenty of users won’t bother checking whatever ChatGPT spits out at them, but unfortunately we can’t rely on cybercriminals being quite so lazy. According to a new report by Check Point Research, its analysts have spotted several posts on hacking forums, where malicious actors discuss all manner of nefarious activities, debating ChatGPT and how best to use it. One of the more striking findings is that many of the forum users discussing ChatGPT in this way are posting from new accounts, which could indicate that the tool is letting people get into cybercrime who previously lacked the skills to do so. We discussed this very possibility a month ago and hope it didn’t give any of you any ideas.

In all seriousness, however, this marks a dangerous escalation and could result in a flood of new cybercrime attacks spreading across the internet. The Check Point report says the malicious code it has discovered so far mostly seems to come from rookies, but it clearly highlights the risk of more sophisticated actors bringing ChatGPT into their workflows to augment the work they are already doing. The report says:

“It’s still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web. However, the cybercriminal community has already shown significant interest and is jumping into this latest trend to generate malicious code.”

If this trend does pick up, there may be hope in the form of a ChatGPT watermark, which is currently being developed to embed machine-readable watermarks into text using cryptography. However, in its current form it still has shortcomings, and it has only ever been discussed in relation to text output, not code.
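OpenAI has not published details of how such a watermark would work, but the statistical approach that has been discussed publicly for language models goes roughly like this: a secret key pseudorandomly marks part of the vocabulary as “green” at each generation step, the model is nudged toward green tokens, and a detector who knows the key simply counts how often the text lands on green tokens. The sketch below is purely illustrative and assumes a toy word-level vocabulary and seeding scheme; it is not OpenAI’s implementation.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Derive a pseudorandom "green" subset of the vocabulary from the previous
    # token. A real scheme would mix a secret key into the seed; omitted here.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detection side: count how often each token falls inside the green list
    # seeded by its predecessor. Watermarked text is biased toward green
    # tokens, so this fraction sits well above the ~0.5 expected by chance.
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

Whether anything like this ships, and whether it could ever cover generated code rather than prose, remains an open question.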





Comments

  1. Tom Hawack said on January 13, 2023 at 1:08 pm
    Reply

    AI is getting closer & closer to humans: the best and the worst. The advantage of AI is that you can judge it when judging a human isn’t supposed to be the thing to do. Will AI ever judge humans? It’s done in court already I think, in Japan is it? But morally, will AI ever “say”: “You are evil, my friend”? That’s when it’d be more pertinent than ever to answer “Who are you to judge me”, to which the AI could answer “I’m no one, my friend”.

    1. Anon said on January 13, 2023 at 1:21 pm
      Reply

      AI is just a bunch of “If” statements. It will never have the potential of a human being and will always rely on taking data from humans it can find to come up with anything it does. The idea that AI will ever become sentient is preposterous.

      1. Mikhoul said on January 13, 2023 at 7:06 pm
        Reply

        Exactly!
