Did ChatGPT just turn us all into hackers?
We have examined much of the hype surrounding ChatGPT, looking at some of the interesting use cases people have found for it and highlighting some of the main risks and challenges attached to new AI-based technologies. We have also looked at the main risks it poses to its users, as highlighted by Sam Altman, CEO of OpenAI, the company behind the new and exciting tool. Today, however, we are looking at the risks and challenges that ChatGPT poses to everybody, even those who aren’t using it.
The malware assistant
Although you can’t rely completely on the factual accuracy of ChatGPT’s text output, and its code often looks plausible without being quite right, the tool nevertheless appears to be making it much easier to create malicious code, and easier still to create phishing emails.
A report by VentureBeat has examined this exact phenomenon, and even cites some researchers describing this worrying development as the democratization of cybercrime. The phrase refers to the fact that ChatGPT lowers the barrier to entry for cybercrime, making it possible for anybody, even those without coding experience, to quickly and easily write the code for a piece of malware. One of the researchers, Matt Psencik, Director of the Endpoint Security Specialist team at Tanium, highlighted examples of the tool already being used in this way:
“A couple examples I’ve already seen are asking the bot to create convincing phishing emails or assist in reverse engineering code to find zero-day exploits that could be used maliciously instead of reporting them to a vendor.”
The worry here is that this could lead to a huge uptick in cyber-attacks, making life much more difficult for the cybersecurity teams trying to keep our devices and our digital and online identities safe. The sad truth is that, right now, ChatGPT looks to have a lot more to offer malicious actors than it has to offer the cybersecurity community.
This marks just another example of how new and exciting technologies are often double-edged swords that need to be handled with care. Promise and potential come tempered with risks and challenges, and when we rush headlong to implement technologies like AI, there is a real danger that things will play out very differently than expected.
That makes someone a scammer, not a hacker. :D
When it comes to phishing emails, you are right. But the article also mentioned that it is “a lot easier to create malicious code”, which could indeed be a step up towards hacking.