CNET Halts Publishing AI-Generated Stories Following Disclosure Controversy
CNET is a media company founded in 1994 and based in San Francisco, California. Its website, CNET.com, covers a wide range of technology topics, and the company also runs a YouTube channel featuring tutorials and product reviews, as well as a technology-focused podcast network. Recently, I discovered that CNET has paused publication of its AI-generated stories after readers learned they were being produced with an artificial intelligence (AI) tool. CNET only formally acknowledged the use of AI after readers noticed a tiny disclosure; its editor-in-chief claimed the use of AI was not secret, just quiet.
Readers noticed that the original versions of the stories contained errors, such as confusing the abbreviations APR (annual percentage rate) and APY (annual percentage yield), along with incorrect calculations in some figures. CNET is apparently among several websites that have been publishing AI-written content; others, such as Creditcards.com and Bankrate, have also paused their AI stories.
The unnamed AI tool was built by Red Ventures. Editors can choose the domains and domain-level sections from which the tool draws data to generate stories, and they can combine AI-generated text with their own reporting and writing. No information was given about the dataset used to train the AI or about safeguards against plagiarism; these questions were dismissed with a promise that more information would be made available at a later stage.
Company leadership went on to distinguish the unnamed AI tool from other automated technology that Red Ventures uses, such as tools that insert current figures into refinance-rate and mortgage-rate stories. Those tools have been in use for longer than the company had disclosed. CNET now plans to start disclosing when its stories are produced with AI.
According to The Verge, the staff were largely in the dark about how the AI tools were being used. Media sites love publishing content on banking and finance because it draws heavy traffic from search engines, which can be converted into affiliate-link revenue. Optimizing content for search engines is standard practice; the problem arises when a bot is used to identify and produce these stories, which undermines ethical editorial practice. CNET prioritized making money over keeping its news timely and relevant.
In response to these accusations, CNET published a post explaining that the AI was intended solely to test whether it could assist its busy staff of editors and reporters by covering topics from a wider perspective. The justification was that the technology freed up time and energy to focus on deeper reporting and analysis.