Dozens of websites are using AI chatbots to copy and repurpose articles from top publishers, according to a report from the news-rating group NewsGuard, offering a glimpse into how artificial intelligence tools risk undermining media companies and muddying the online news industry.
The 37 websites, which Bloomberg also reviewed, posted stories with text, photos and quotes identical to those in articles previously published by the New York Times, Reuters and CNN, according to the report.
The examples NewsGuard found – a mix of online content farms publishing breaking news, lifestyle content and more, including sites with names such as DailyHeadliner.com and TalkGlitz.com – didn’t credit or refer to the original authors or publications.
News aggregators and content farms like these have long existed, seeking to generate traffic via search engines.
NewsGuard’s report examined how such sites are using AI to rewrite news from other outlets to fill their pages.
Tensions have escalated in recent months between the media industry and tech companies over concerns that a new crop of powerful AI tools trained on large swaths of online data – including copyrighted works – could churn out content that undercuts the livelihoods of authors, artists and journalists.
Authors have taken legal action against multiple AI companies, alleging copyright infringement, and concerns about the use of AI-generated content in TV and movies have become a major issue in the Hollywood actors’ and writers’ strikes.
The New York Times is also said to be weighing legal action against ChatGPT-creator OpenAI over how its reporting is incorporated in training data, according to NPR.
The websites cited in the NewsGuard report didn’t disclose whether they used AI chatbots such as ChatGPT or Alphabet’s Google Bard, which can generate human-sounding text in response to a short written prompt. OpenAI specifically prohibits using its AI models for “plagiarism.”
Google prohibits the use of its generative AI for generating and distributing “content intended to misinform, misrepresent or mislead,” including by presenting AI-generated content as though it was made by a person, or as original, “in order to deceive.”
OpenAI and Google didn’t immediately respond to requests for comment from Bloomberg News.
While the websites don’t spell out the role AI plays in populating stories, NewsGuard found articles on each site with the same telltale sign: an automated error message. NewsGuard said it spotted 17 articles posted in the past six months on GlobalVillageSpace.com, for example, that contained AI error messages in the body of the stories.
One post from the website used the same images and quotes included in a May piece in the New York Times about professional football player Darren Waller’s talent as a musician.
The last two lines of that post read, “As an AI language model, I cannot guarantee the accuracy of this article as it was not written by me. However, I have tried my best to rewrite the article to make it Google-friendly.”
The post was removed from GlobalVillageSpace.com after NewsGuard contacted the website, but it remains visible via a screenshot taken by the Internet Archive.
NewsGuard said it reached out to the New York Times and all other publishers whose content it believed had been reposted by the group of 37 websites it surveyed, as well as to the websites that reposted the content.
New York Times Co. spokesperson Charlie Stadtlander told the news-rating group that the website wasn’t authorized to use the article.
“I think we’ll continue to see more and more of this until the detection tools get better, until news outlets start to realize it’s a problem and until other intermediary sources start to crack down on it,” said Jack Brewster, a NewsGuard enterprise editor who co-authored the report.
NewsGuard previously published a report finding that dozens of news websites – 49 in total – generated by AI chatbots were proliferating online.
Many of the websites began publishing this year as AI tools gained widespread adoption.
For online content farms, using AI tools to rewrite numerous stories from other websites could be a way to boost revenue.
Fifteen of the websites NewsGuard surveyed displayed automated ads from well-known companies, including on articles containing content NewsGuard suspected had been rewritten using AI.
However, this practice may also test the limits of what’s considered an acceptable aggregation of a news article.
In response to questions in May about whether the AI-generated websites violated its advertising policies, Google spokesperson Michael Aciman said that the company doesn’t allow ads to run alongside harmful content or spam, or material that has been copied from other sites.