Will Socrates and AI meet the same fate?

Emre Çitak
Oct 30, 2023
Misc

Artificial intelligence (AI) text-to-image models have become increasingly popular in recent years, with the ability to generate realistic and creative images from simple text prompts.

However, a new tool developed by researchers at the University of Chicago could pose a serious threat to this technology.

The tool, called Nightshade, allows users to "poison" AI text-to-image models by making imperceptible changes to images. These changes are too small to be noticed by the human eye, but when the altered images are scraped into a model's training data, they can cause the model to generate incorrect or even nonsensical images.

The researchers behind Nightshade say that the tool could be used by artists and other creators to protect their work from being used without their permission in AI text-to-image models. It could also be used by researchers to study the vulnerabilities of AI text-to-image models and to develop new defenses against adversarial attacks.

AI technologies are no longer the baby robots we know

How does Nightshade work?

Nightshade works by generating small perturbations to images that are specifically designed to confuse AI text-to-image models. These perturbations are so small that they are imperceptible to the human eye, but they can cause AI models trained on the poisoned images to make large mistakes when generating new images.

For example, the researchers demonstrated the attack against DALL-E 2, a popular AI text-to-image model. After poisoned cat images entered the training data, a prompt for a cat produced an image of a dog; after poisoned car images were ingested, a prompt for a car produced a cow.
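Nightshade's actual optimization is far more sophisticated, but the core idea, nudging pixels along the gradient direction that most confuses a model while keeping each change small, can be sketched with a toy linear classifier. Everything below (the classifier, the labels, the `poison` helper) is an illustrative assumption for this sketch, not the researchers' code:

```python
import random

random.seed(0)
DIM = 64  # a tiny 8x8 "image", flattened into a list of pixels

# Toy linear classifier with two labels: 0 = "cat", 1 = "dog".
# A stand-in for a real vision model, for illustration only.
W = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(2)]
x = [random.random() for _ in range(DIM)]  # pixel values in [0, 1]

def score(img, label):
    return sum(w * p for w, p in zip(W[label], img))

def predict(img):
    return 0 if score(img, 0) >= score(img, 1) else 1

def poison(img, eps=0.02, max_steps=50):
    """Nudge each pixel by at most eps per step toward the wrong class
    (an FGSM-style perturbation) until the predicted label flips."""
    orig = predict(img)
    wrong = 1 - orig
    # Gradient of (wrong-class score minus correct-class score) w.r.t.
    # the pixels; for a linear model it is constant.
    grad = [W[wrong][i] - W[orig][i] for i in range(DIM)]
    out = list(img)
    for _ in range(max_steps):
        out = [min(1.0, max(0.0, p + eps * (1 if g > 0 else -1)))
               for p, g in zip(out, grad)]
        if predict(out) != orig:
            break
    return out

orig_label = predict(x)
x_poisoned = poison(x)
biggest_change = max(abs(a - b) for a, b in zip(x, x_poisoned))
print("label flipped:", predict(x_poisoned) != orig_label)
print("largest per-pixel change:", biggest_change)
```

Against a real diffusion model the attack targets the training pipeline rather than a single prediction, but the principle is the same: small, structured pixel changes that move the image across a model's decision boundary without moving it perceptibly for a human.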

A drawback of the revolt

Back in December 2022, artists on ArtStation were protesting against AI-generated images being allowed on the platform. The site's most popular section, "Explore," had been featuring computer-generated images, sparking outrage among human artists who felt threatened by AI technology.

Many have taken to spamming their portfolios with the message "No to AI-Generated Images" in solidarity with the movement started by costume designer Imogen Chayes and cartoonist Nicholas Kole.

Artists argue that AI poses a serious threat to their sector

After that, especially as Hollywood productions began using AI technologies, the question "Is AI starting to take people's jobs?" came onto the agenda, and a war was declared against AI in many sectors.

Nightshade can be considered a kind of "biological weapon" deployed against text-to-image generative AI models.

Now you: Is war the way to go, or should AI companies be more transparent about the data they use, and should countries start drawing up AI regulations?

Comments

  1. Anonymous said on October 31, 2023 at 12:16 am

This has been a longstanding issue with image classifiers, such as those used in autonomous driving, airport security, and other cases; subtle sabotage of road signs, for example, can cause severe misinterpretation by the autonomous system.

The only news here is that someone made it public that they have found a way to misdirect DALL-E specifically, in a crude attempt to put in some faux DRM to thwart the rampant misappropriation of copyrighted or otherwise legally protected material, since "imperceptible" watermarks are turning out to be a lost battle against unscrupulous grabtastic assholes who will strip metadata and have watermark-defeating filters to provide a figleaf of deniability, and who care little about the "subtle" loss of quality from defeating watermarks. They care primarily that their training data stash + synthesis ditto can survive an audit without getting too expensive.

TLDR: this is not a serious threat and it does not truly protect any creator from the lawless practices of these companies. It is a temporary inconvenience at best; they already harvest your material and can retrain at any time with better countermeasures. At this point they are not set up for serious adversarial pressure, just massive data grabs and market share. They are basically running an advanced meme-image builder whose novelty will wear off soon, as people get to grips with the limitations of their own creativity and the manipulations of this oversold toy.
