Using AI to generate content? The story of CNET's downfall will be an eye-opener

Wikipedia placed the publication in the Generally Unreliable list




CNET's AI content generation downfall

When Futurism first broke the news that CNET, a major web publication, was using AI-based tools to generate entire articles, many of which contained plagiarism and grammatical errors, everyone knew this would have implications!

As it turns out, Wikipedia has now downgraded the website from Generally reliable (pre-2020) to Generally unreliable at present. This, of course, is a consequence of publishing AI-generated content without the necessary checks in place.

The CNET and AI story

Back in 2022, when CNET, operated by Red Ventures, reportedly first quietly published AI-generated articles, the company didn't add a clearly visible disclaimer. Instead, the disclosure was tucked away in a dropdown menu. This was highlighted by Gael Breton on X (formerly Twitter).

At first, Google rewarded such articles, and they enjoyed top spots in search results. Since then, things have changed!

Google, for its part, is not entirely against the use of AI to generate content. Its primary objective is to deliver results that best answer the searched query. So, as long as the content is of high value, Google doesn't consider its origin.

However, in its Search Central documentation, Google states that AI content created with the aim of manipulating search rankings violates its existing spam policy:

If you’re primarily making content to attract search engine visits, that’s not aligned with what our systems seek to reward. If you use automation, including AI-generation, to produce content for the primary purpose of manipulating search rankings, that’s a violation of our spam policies.

Note that if you rely on artificial intelligence, it's imperative to add a disclaimer stating that the article was generated using AI and describing the extent of its involvement. This is the approach Google recommends:

Many types of content may have a “How” component to them. That can include automated, AI-generated, and AI-assisted content. Sharing details about the processes involved can help readers and visitors better understand any unique and useful role automation may have served.

Coming to the present day, CNET's experiment with AI has, in our opinion, been a failure. After Wikipedia's move, other platforms and general readers may stop trusting the publication as well!

Also, contrary to common perception, Wikipedia remains a trusted source of information. While it does allow anyone to make edits, the community running the platform requires citations and sources for any changes made. So, its action against CNET was in the right spirit.

AI-generated content has long troubled both publishers who believe in original content and readers who want first-hand, experience-based information. Given how things are going, we may see Google and other search engines introduce more stringent checks!

Another concern is the use of published content to train AI models. Microsoft and OpenAI are being sued for this very reason!

What do you think about the use of AI in content generation? Share your thoughts in the comment section.
