CNET Halts Publishing AI-Generated Stories Following Disclosure Controversy
CNET, founded in 1994 and based in San Francisco, California, is a media company that has covered technology for more than two decades. Besides its website, CNET.com, which covers a wide range of technology topics and more, it runs a YouTube channel with tutorials and product reviews, as well as a technology-focused podcast network. Recently, CNET paused publication of its AI-generated stories after it emerged that they were being produced with an artificial intelligence (AI) tool. CNET only formally acknowledged the use of AI after readers noticed a tiny disclosure; its editor-in-chief claimed the use of AI was not secret, just quiet.
Readers noticed that the original versions of the stories contained errors, such as conflating the abbreviated terms APR and APY, along with incorrect calculations in some figures. CNET is apparently one of several websites that have been writing and publishing with AI; Creditcards.com and Bankrate, for example, have also paused their AI stories.
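To see why conflating APR and APY produces wrong figures, here is a minimal sketch of the standard relationship between the two (the function name and example rate are mine, not taken from CNET's articles): APY is the effective annual yield implied by a nominal APR once intra-year compounding is taken into account, so quoting one number when the other is meant understates or overstates what a saver actually earns.

```python
def apy_from_apr(apr: float, periods_per_year: int = 12) -> float:
    """Effective annual percentage yield (APY) implied by a nominal
    annual percentage rate (APR) compounded n times per year."""
    return (1 + apr / periods_per_year) ** periods_per_year - 1

# Hypothetical example: a 3% APR compounded monthly.
apr = 0.03
apy = apy_from_apr(apr)
print(f"APR {apr:.2%} compounded monthly -> APY {apy:.4%}")
```

For a 3% APR compounded monthly, the APY works out slightly higher (roughly 3.04%), which is exactly the kind of small but real discrepancy an automated story can get wrong if the two terms are treated as interchangeable.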
The unnamed AI tool was built by Red Ventures. Editors can choose the domains and domain-level sections from which it draws data to generate stories, and they can combine the AI-generated text with their own reporting and writing. No information was provided about the data set used to train the AI, nor about plagiarism concerns; these questions were deflected with a promise that more information would be made available at a later stage.
Company leadership went on to distinguish the unnamed AI tool from other automated technology Red Ventures uses, such as tools that insert numbers into refinance-rate and mortgage-rate stories. Those tools have been in use for longer than the company had disclosed. CNET now plans to start disclosing which of its stories use AI.
According to The Verge, the staff were clueless about what was going on with the AI tools. Media sites love publishing content on banking and finance because it draws heavy search-engine traffic, which can be converted into affiliate-link profits. Optimizing content for search engines is standard practice; the problem comes when a bot is used to identify and create these stories, which defies ethical editorial practice. CNET prioritized making money over staying timely and relevant with its news.
In response to all these accusations, CNET published a post explaining that the AI was solely intended to test whether it could assist its busy staff of editors and reporters by covering topics from a wider perspective. The justification is that the technology freed up time and energy to focus on deeper reporting and analysis.
Starting to think “Shaun” is AI-generated content. This whole article is just a plagiarized, shortened version of the linked Verge article.
Most internet articles are copied from some main article. Sad but true.
How do you think you can get millions of hits on a Google search term?
Big companies be like:
“Instead of paying my employees unfairly low wages, I can just replace them all with AI algorithms which I don’t have to even pay (at all), and nobody will notice!”
I guess the public had best be thankful that in this particular instance it is only article-writing, and not something more critical like writing software, which, yes, is exactly what some of the big software companies are indeed exploring right now (software written by AI, so that nobody needs to be paid).
Maybe ghacks uses AI to generate their articles as well. It’s my only explanation why it feels like every second article is about AI the last few weeks.
Did you get AI to suggest how to write the links for most of this article? I get that impression, as the author seems to have a fervent following of AI, and often publishes articles with link text that is distracting and has no relevance to the associated text whatsoever.
Link text should usually make sense out of context. A link’s primary purpose is to communicate to users what they’ll find on the other side of a click.
“[…] The problem comes when a bot is used to identify and create these stories, and this totally defies the ethical editorial practice. … ”
Perhaps, although some of the biased AI articles you have recently been writing seem to advertise for people who do similar things, e.g. getting AI to generate marketing content. Shouldn’t you be leading by better example?
“gHacks Halts Publishing AI-Generated Stories Following Disclosure Controversy” ;)
If the CNET controversy started because some readers noticed, we are already on our way there…
As realistic-SOUNDING as ChatGPT may be, many questions I have posed to it based on demographics, computer topics, or information on US states were answered incorrectly. It is also about 2 years behind on current events. While there are a lot of people who will be fooled by it, the same people are already falling victim to phishing attacks, internet hoaxes, and dubious “news” on social media. AI for the medium-term is going to be just one more source of noise.
People forget that it’s a language model, not a knowledge model. This ‘AI’ lacks any ability to confirm whether its output makes real-world sense. Its only capability is to sound like a human, so it fakes sense, but it actually has no clue whether what it says makes sense. It is not able to correct its output through real-world experience, i.e. it can’t learn by itself beyond the scope of the data it was fed.
The problem is hyped people like the author, who tend to anthropomorphise this box of matrix multiplications.
“AI” has been writing newspaper content for at least a decade, especially in the sports category.