ChatGPT under fire from FTC over data leaks and inaccuracy
The U.S. Federal Trade Commission has opened an investigation into OpenAI, the firm that developed ChatGPT, to see if it violated consumer protection laws by using its chatbot to disseminate misleading material and scrape public data.
In a 20-page letter, the government asked OpenAI for specific details on its AI technology, products, clients, privacy policies, and data security measures.
The action against San Francisco-based OpenAI represents the largest regulatory threat yet to the startup that launched the generative artificial intelligence craze, captivating consumers and companies while raising doubts about its potential dangers.
FTC investigation targets ChatGPT's spread of false information
An FTC spokeswoman declined to comment on the investigation, which was first reported on Thursday by the Washington Post.
The FTC has also asked OpenAI to make public the data it used to train the large language models that underpin services like ChatGPT, but OpenAI has so far refused. The American comedian Sarah Silverman is among the writers suing OpenAI, alleging that ChatGPT's LLM was trained on data that included their works.
The FTC has asked OpenAI to disclose whether it obtained the data directly from the internet (via "scraping") or by purchasing it from third parties. It also requests the identities of the websites from which the data was taken, along with details of any steps taken to ensure that personal information was not included in the training data.
Poor governance inside AI firms could be a "disaster" both for customers and for the companies themselves, exposing them to investigations and fines, according to Enza Iannopollo, principal analyst at research firm Forrester.
“As long as large language models (LLMs) remain opaque and rely largely on scraped data for training, the risks of privacy abuses and harm to individuals will continue to grow,” she said.