Adobe is training its AI on user data
Data has underpinned the modern internet age for decades. Internet giants have long used user data to fund their operations, profiting from ever more focused and tailored advertising. Now, however, user data seems to be prized by tech corporations for another controversial purpose: training AI. Adobe is the latest company to draw ire from its users, as its content analysis policy has been met with backlash on social media.
Adobe users have taken to social media to voice concerns that the company is using the content they have saved on its servers to train its AI model, which the company calls Sensei. The model has been one of the Adobe suite's key innovations of late, capable of creative and impressive outputs. Rather predictably, however, it now seems that its skill and its ability to impress Adobe users come from the work of those very users, whose data has been training it.
The issue seems to have arisen when Adobe added a content analysis term to its data collection permissions, putting the onus on users to opt out. Hidden away in the terms and conditions, the company explains that it may analyze content using machine learning to develop and improve its products and services. So while Adobe claims that users maintain control over their privacy preferences and settings, in practice that control is not easy to exert.
The move has clearly annoyed some users, with many listing ways to lock down privacy settings across Adobe's suite of Creative Cloud and Document Cloud apps in a bid to help others avoid having their content used in this way.
This all raises the question of who owns what when it comes to AI, an issue that has sparked considerable debate among artists and creatives. If an AI has been trained on your work, then your work has surely contributed to the creative output it produces. In that sense, shouldn't you have some element of ownership over it? Or should the giant tech corporation that used your data to train its model reap all the rewards?
It is an interesting debate for sure, but it is slightly disheartening that, as we move into the era of seemingly revolutionary technologies, we are still seeing the same pattern: user data not really belonging to anybody and remaining fair game for tech giants to take and use at will.
Ditched Adobe years ago.
Somewhere around 2019 I realized that every keystroke was being sent back to their servers.
Affinity is a great replacement.
No surprise. Adobe is a terrible company. I wish Macromedia would come back.
People really need to understand that anything and everything that goes to the cloud will be inspected and analyzed by the best (and worst) AI algorithms that humanity can produce. And occasionally, also inspected and analyzed by humans themselves.
There is no privacy when it comes to the cloud. Zero. And to think, a substantial chunk of the population pays monthly for the privilege of being subjected to the above!
Think of the cloud as a public computer terminal where you save all your files on the C drive. The data is up for grabs by anyone: sysadmins, governments, hackers, everyone.