Security researchers confirm hackers are using ChatGPT to create malware
It hasn’t taken long. Following its launch on November 30th last year, a little over a month and a half ago, reports are already coming in from cybersecurity researchers who are seeing malware and ransomware that has been either created from scratch or augmented by ChatGPT.
The famous chatbot is able to create text responses to human prompts, even if those prompts ask it to write code. This means that, among its more traditional literary skills, ChatGPT is also something of a convincing coder. Convincing is the operative word, however, as the large language model has a problem with accuracy. Based on what it has learned from the vast amounts of data used to train it, it can produce convincing text and convincing code that look the part. But it has no actual conception of whether its output is correct, either factually in prose or functionally in code, which means users can’t know either without checking for themselves.
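To illustrate the point, here is a hypothetical example (not taken from any report, and not real ChatGPT output) of the kind of “convincing” code a model can produce: it reads plausibly at a glance, yet contains a bug that only testing would catch.

```python
# Hypothetical illustration: plausible-looking code with a subtle bug,
# the kind a language model can generate with complete confidence.

def is_leap_year(year: int) -> bool:
    # Reads reasonably, but skips the century rule:
    # 1900 would wrongly be reported as a leap year.
    return year % 4 == 0

# The correct rule, for comparison:
def is_leap_year_correct(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(1900), is_leap_year_correct(1900))  # True False
```

Nothing about the buggy version looks suspicious until you actually run it against a known edge case, which is exactly the trap for users who take generated code at face value.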
Now, there are plenty of users who won’t bother checking whatever ChatGPT spits out at them, but unfortunately we can’t rely on cybercriminals being quite so lazy. According to a new report by Check Point Research, its analysts have spotted several posts discussing ChatGPT and how best to use it on hacking forums, where malicious actors discuss all manner of nefarious activities. One of the more shocking revelations is that many of the forum users discussing ChatGPT in this manner are doing so from new accounts, which could indicate that the tool is enabling newcomers to get into cybercrime who previously lacked the ability to do so. We discussed this very possibility a month ago and hope it didn’t give any of you any ideas.
In all seriousness, however, this does mark a dangerous escalation and could result in a flood of new cybercrime attacks spreading across the internet. The Check Point report does say that the malicious code it has discovered so far mostly seems to come from rookies, but it clearly highlights the risk of more sophisticated actors bringing the tool into their workflows to augment the work they are already doing. The report says:
“It’s still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web. However, the cybercriminal community has already shown significant interest and is jumping into this latest trend to generate malicious code.”
If this trend does pick up, there may be hope in the form of a ChatGPT watermark, which is currently in development and would use cryptography to embed a machine-readable watermark in generated text. However, in its current form the scheme still has shortcomings, and it has only ever been discussed in relation to text output, not code.
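To give a rough sense of how such a scheme could work: the publicly described idea is to bias token selection using a pseudorandom function keyed with a secret only the provider holds, so that anyone with the key can later test whether a piece of text scores suspiciously high. The details of the real implementation haven’t been published, so the sketch below, including every name, parameter, and the scoring rule, is an assumption for illustration only.

```python
import hmac
import hashlib

# Hypothetical sketch of keyed watermarking; the real design is unpublished,
# so the key, function names, and threshold here are all assumptions.
SECRET_KEY = b"provider-side-secret"

def prf_score(key: bytes, context: tuple, token: str) -> float:
    """Keyed pseudorandom score in [0, 1) for a (context, token) pair."""
    msg = repr((context, token)).encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def pick_token(candidates: list, context: tuple) -> str:
    # Among tokens the model already considers roughly equally likely,
    # prefer the one with the highest keyed score. This nudges output
    # toward high-scoring tokens without obviously degrading quality.
    return max(candidates, key=lambda t: prf_score(SECRET_KEY, context, t))

def looks_watermarked(tokens: list, key: bytes = SECRET_KEY,
                      threshold: float = 0.7) -> bool:
    # Unwatermarked text should average around 0.5; watermarked text
    # should average noticeably higher.
    scores = [prf_score(key, tuple(tokens[:i]), t)
              for i, t in enumerate(tokens)]
    return sum(scores) / len(scores) > threshold
```

Even in this toy form, the weaknesses are apparent: paraphrasing or reformatting the output changes the tokens and washes out the signal, which is part of why the scheme’s applicability to code, where an attacker can trivially rename variables, is doubtful.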
AI is getting closer and closer to humans: the best and the worst. The advantage of AI is that you can judge it when judging a human isn’t supposed to be the thing to do. Will AI ever judge humans? It’s done in court already I think, in Japan is it? But morally, will AI ever “say”: “You are evil, my friend”? That’s when it would be more pertinent than ever to answer “Who are you to judge me?”, to which the AI could answer “I’m no one, my friend”.
AI is just a bunch of “if” statements. It will never have the potential of a human being and will always rely on whatever human-made data it can find to come up with anything it does. The idea that AI will ever become sentient is preposterous.
Exactly!