Brace for impact: Private AI model leaked online

Onur Demirkol
Mar 9, 2023

Meta's non-public AI language model, LLaMA, has been leaked online by a 4chan user, and the company has yet to take action.

The company began fielding access requests for its new language model about a week ago, and now the model is all over the internet via a downloadable torrent link. The leak first surfaced on 4chan, where a user posted the link and set the spread in motion. The model is now freely accessible, AI communities are already discussing it, and some people are even trying out the cutting-edge technology.

Meta decided not to launch LLaMA as a public chatbot, reserving mass-market use for a different product. LLaMA was instead built for the AI community, with access granted on request to help researchers. The company said it intended to "further democratize access" to AI and encourage research into its problems. The leak, however, is a real concern for everyone on the internet, because a powerful language model in the wrong hands could fuel malicious activities such as personalized spam and phishing attempts. LLaMA reportedly performs far better on benchmarks than other open source models: a report by Ars Technica noted that LLaMA-13B outperformed OpenAI's GPT-3 on most benchmarks despite being roughly 10x smaller.

People in the industry criticized Meta for open sourcing the language model, including Jeffrey Ladish, who tweeted: "Well, Meta's 65 billion parameter language model just got leaked to the public internet, that was fast. Get ready for loads of personalized spam and phishing attempts. Open sourcing these models was a terrible idea."

"To maintain integrity and prevent misuse, we are releasing our model under a noncommercial license focused on research use cases. Access to the model will be granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world," Meta said last week. It didn't last long...

LLaMA is reportedly competitive with LaMDA, the model behind Google's Bard. According to a blog post by Arvind Narayanan and Sayash Kapoor of Princeton University, a wave of malicious use may be coming, although there are no documented cases of such misuse yet. Critics argue that distributing the model freely could cause a serious increase in cybercrime, and they blame Meta for letting it happen without taking enough precautions.

The powerful language model is quite different from publicly available chatbots such as OpenAI's ChatGPT and Microsoft's Bing Chat. Running it is not as simple as chatting with a public bot: it requires expertise, the right hardware, and time. That is where the problem starts, because plenty of would-be bad actors have all three. The Verge also noted that, unlike the chatbots above, LLaMA has not been fine-tuned; fine-tuning trains a model to focus its abilities on a specific task.

Meta didn't concentrate on fine-tuning or user experience while building the language model, because the company had a different goal in mind. Many malicious uses of the model are possible, and the company still hasn't made any announcement on the matter. The full torrent is a 219GB file, and a regular PC cannot run the largest model, but the download also includes the smallest variant, with 7 billion parameters. Try to be a little more careful, especially against phishing attempts.
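To illustrate the hardware barrier, here is a rough back-of-the-envelope sketch (my own illustration, not from the article) of how much memory the weights alone would need, assuming 16-bit floating-point storage at 2 bytes per parameter and ignoring activation and inference overhead:

```python
# Rough memory needed just to hold each LLaMA variant's weights in
# 16-bit floating point (2 bytes per parameter). Sizes are the four
# variants Meta is reported to have trained: 7B, 13B, 33B, and 65B.

BYTES_PER_PARAM_FP16 = 2

def weight_memory_gb(n_params_billions: float) -> float:
    """Approximate GB of RAM/VRAM needed to store the weights alone."""
    return n_params_billions * 1e9 * BYTES_PER_PARAM_FP16 / 1e9

for size in (7, 13, 33, 65):
    print(f"LLaMA-{size}B: ~{weight_memory_gb(size):.0f} GB in fp16")
```

By this estimate the 7B model needs about 14 GB just for its weights, while the 65B model needs around 130 GB, which explains why only the smallest variant is even in reach of a well-equipped consumer PC.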

