ChatGPT is used by cybercriminals to write better phishing emails
ChatGPT, the language model optimized for dialogue and conversation, has seen a lot of coverage in the past couple of months. Most coverage looks at the benefits or advantages of using ChatGPT, for instance, to improve search results or answers, help with coding tasks, provide recommendations, or serve as a translation tool.
Some researchers are looking in another direction. They are interested in finding out how ChatGPT can potentially be abused by cybercriminals. Last month, Check Point Research published a report in which the company highlighted that malicious actors were using ChatGPT to write or improve malware.
Chester Wisniewski, principal research scientist at Sophos, said recently in an interview with TechTarget that he was not concerned about what the technology behind ChatGPT could do, but about the social side of abuse: cybercriminals could use ChatGPT to create phishing emails that look like they were composed by a native speaker.
One of the shortcomings of phishing, even today, is that many phishing emails include spelling and grammar mistakes. While the overall quality of phishing emails has gone up significantly over time, many still contain indicators that help computer users distinguish legitimate emails from illegitimate ones.
Wisniewski's example is the use of British English in phishing emails sent to the United States. British English differs from American English; some words are spelled differently, and American users are often on their guard when they notice these spellings in emails. Similarly, British English speakers would notice American English in phishing emails.
ChatGPT use in malicious emails
ChatGPT, and other language models with similar capabilities, may be used to construct emails that match the language of a certain region or country. It does not have to go as far as asking ChatGPT to copy the style of a famous author; instructing it to write a formal message in American English that informs users about something is sufficient. The resulting email sounds like it was written by a human, and all that is left to do is to plant the malicious bits in the email. These can be links to websites, but also attachments or requests to call a specific phone number.
Wisniewski believes that humans need help in detecting whether an email or chat message was written by a human or a bot. He suggests that the answer could be friendly AI that analyzes content and provides users with estimates regarding its authenticity. Researchers are already working on AI models that help determine whether content has been written by another AI.
These would then need to be integrated into security solutions, e.g., antivirus programs, and display notifications to users when the analysis suggests that content has been generated by an artificial intelligence and not a human.
The problem with this approach is that there are also legitimate uses of ChatGPT. Organizations and users may use ChatGPT to improve text, e.g., to write better ad copy or to help with certain paragraphs. Such texts are not created to scam users, and helpful AI may have difficulty distinguishing between the two use cases.
Phishing continues to be a threat, and the rise of ChatGPT and other language models is adding a new tool to the arsenal of cybercriminals. Most Internet users need to be aware of this and focus their attention on other aspects of emails. While the grammar and spelling may be excellent, attackers still need to get users to open email attachments, click on links, or perform another action.
Now You: have you tried ChatGPT?
How many more articles about GPT are you going to write, seriously?
Glad to see your RSS feed has started to work again after over two months of NOT updating.
Hope it stays that way.
Repetitive but slightly varied subjects aside, no problems here. The problem must be the service or the reader you use.
I need to know why my latest post, about Africa being one of the worst worldwide origins of phishing emails, has been deleted. Several African countries have among the highest numbers of this type of attack worldwide in terms of affected people, even more than Russia and Germany, which rank first and second in terms of phishing email senders relative to population, although the total number of affected people there is lower in comparison.
“According to the KnowBe4 African Report 2019, which surveyed over 800 respondents across South Africa, Kenya, Nigeria, Ghana, Egypt, Morocco, Mauritius, and Botswana, phishing was one of the top cyber threats faced by the African region: 28.14% of respondents reported that they had previously clicked on a phishing email, 27.71% had previously fallen for a scam, and 19% had forwarded a spam or hoax email. In 2020 alone, Kaspersky detected about 2 million phishing attempts in South Africa, Kenya, Egypt, Nigeria, Rwanda, and Ethiopia.”
“South Africa has the second most phishing attacks in Africa with a record number of 4,578,216 attacks – a 144% growth when compared to the stats from the first quarter of 2022. According to stats from Surfshark, South Africa is ranked sixth among the world’s most affected countries in terms of cybercrime, with an estimated 52 victims per 1 million internet users. In 2021, there were an average of 97 victims per hour, while back in 2001 only 6 South Africans per hour fell victim to cybercrime.”
“Nigeria was documented as the third African country with the most phishing attacks, recording 1,046,136 attacks – a 174% increase when compared to the first quarter. Although Nigeria has the lowest number of attacks amongst the fold, the country seems to have the highest number of scammers present in Africa. A 2020 report from Agari’s Cyber Intelligence Division (ACID) stressed that a majority (60%) of BEC actors globally were located in Africa, across 11 countries in the region, with “83% of African attackers, as well as 50% of global BEC actors, hailing from Nigeria”.”
Now please delete this post again; European censorship needs new heroes.
Tomatot, my wife and I are both teachers. At every school and university around the world, decisions are right now being made just about every day as to how teachers should deal with the huge issues thrown up by ChatGPT, and by its successors, whose output will be more accurate and even more indistinguishable from human writing.
Every day there are more alarming details coming out, plus workarounds that may at best be temporary. There are no satisfactory solutions at all yet for teaching institutions.
If you are not interested, just skip the article. But please, ghacks, continue to bring us accurate and reliable accounts of developments as they occur.