NPR was one of 22 public service media (PSM) organizations across 18 countries participating in the study, led by the BBC and the European Broadcasting Union (EBU)
At NPR, we recognize our responsibility to understand AI's impact on journalism and to advocate for best practices that ensure our reporting is represented accurately. To that end, NPR participated in a global research study led by the BBC and the European Broadcasting Union (EBU) on news integrity in AI assistants. It was one of the largest evaluations of its kind to date, spanning 22 public service media organizations in 18 countries and 14 languages.
The study's results, released today by the BBC and the EBU, found that AI assistants routinely misrepresent news content regardless of the language, territory, or AI platform tested. An accompanying toolkit outlines the problems that need to be solved to address the study's findings.
Why did NPR participate?
As a public media organization, NPR is committed to delivering trusted, accurate journalism to our audiences, even as news consumption habits change. AI assistants are already replacing search engines for many users. According to the Reuters Institute's Digital News Report 2025, 7% of total online news consumers use AI assistants to get their news, rising to 15% of under-25s. This study provided us with a unique opportunity to collaborate with a well-respected set of journalism organizations to analyze the impact of AI summarization and representation of news content.
How did NPR participate?
Fourteen members of NPR's editorial staff volunteered to serve as reviewers of the AI assistants' answers. As part of this study, we temporarily stopped blocking relevant bots from accessing our content for approximately two weeks to collect the necessary responses for our analysis. Content blocking was then re-enabled.
What did the study find?
The study identified multiple systemic issues across four leading AI tools. Based on data from 18 countries and 14 languages, 45% of all AI answers had at least one significant issue, and 31% of responses showed serious sourcing problems: missing, misleading, or incorrect attributions. The full study can be found here.
How will NPR use this information?
The results help us consider what safeguards and audience education may be necessary, and they can inform our internal strategies and training for AI adoption. These findings also reinforce the importance of our existing principles and standards, which require that all final work products be reviewed, fact-checked, and edited by humans, and never rely on AI for accuracy. NPR also contributed to the News Integrity in AI Assistants Toolkit, intended as a resource for technology companies, media organizations, the research community, and the general public.
Copyright 2025 NPR