News media is 'becoming part of AI systems': Notes from the JournalismAI Festival 2025

Nieman Lab · Andrew Deck

London — Last month, fact-checkers, newsroom leaders, product managers, and AI strategists gathered in the Southwark neighborhood of South London for the JournalismAI Festival 2025. The event brought together journalists from several continents to share their most innovative AI use cases, as well as deeper insights into how these technologies are reshaping newsrooms globally.

The two-day conference was hosted by Google News Initiative and the eponymous JournalismAI, a project of Polis, the journalism think tank at the London School of Economics and Political Science (LSE). Launched in 2019, JournalismAI was untangling knotty questions about the usefulness and ethics of AI adoption in journalism long before the launch of ChatGPT. After years as an online festival, this year marked JournalismAI’s first in-person conference.

With some of the initial sheen of generative AI technologies rubbed off, newsrooms in attendance appeared to have a thorough grasp of the specific promises and limitations of these technologies in their editorial work. More uncertain was generative AI’s impact on news audiences, as concerns and anxieties about its downstream effects surfaced on stage.

“Referrals are down in many of the markets that we are working in Brazil, in South Africa, in Indonesia. We are hearing from publishers — large publishers — that their traffic is down 50 to 60% in the past year,” said Irene Jay Liu, the director of AI, emerging tech, and regulation at the International Fund for Public Media (IFPM), which provides grants to news organizations serving the Global Majority.

Liu, who for years led Google’s News Lab across the Asia-Pacific region, pointed to search engines that are “taking journalistic content and summarizing it” as one of the main contributors to the traffic drop. “This is critical, because [the news] business model was already teetering,” she said in a conference wrap-up panel.

In the same panel, Ezra Eeman, the strategy and innovation director at Dutch public broadcaster NPO, questioned if newsroom adoption was the most pressing framework for conversations about journalism and AI. “We’re looking at how we add AI to our existing organizations, to optimize our existing flows, to add intelligence to it,” said Eeman. “The bigger play that’s happening, of course, is that media is being added to AI — becoming part of AI systems.”

The bulk of this year’s JournalismAI Festival offered lessons on the former topic: reporting and editing use cases, AI-assisted article production and distribution, and ways to steer organizational adoption and change. Representatives from the 35 newsroom grantees of JournalismAI’s 2024 Innovation Challenge — a program, funded by Google News Initiative, that distributes grants to news organizations to support AI adoption and experimentation — were woven throughout this year’s program and participated in these conversations.

As the conference kicked off, JournalismAI announced the next edition of its Innovation Challenge. The call for proposals takes on these looming questions about AI’s impact on news audiences. Unlike the inaugural challenge, this round will award grants to projects focused on “audience intelligence and revenue growth.” In other words, questions of sustainability will drive this next grant cycle.

The new challenge seems to implore newsrooms to ask how AI technologies can be used to build not only better or faster newsrooms, but also stronger businesses.

No more Western media navel gazing

One of the strengths of this year’s JournalismAI Festival was its internationalism. Many grantees, including newsrooms across Latin America, Africa, and Asia, flew to London for the conference and presented on the most cutting-edge AI use cases in their respective newsrooms.

“There’s a lot of innovation that’s happening, there’s a lot of hustling. A lot of the cool things that are being done — whether due to language, distance, air flight costs — we’re not seeing them in the global conversation,” said Liu, the AI director at IFPM. She encouraged North American and European newsrooms to look outside their own professional bubbles for the most innovative experiments in AI and journalism.

At the Indian digital news site Scroll, Sannuta Raghu has been leading an AI innovation team for years. One of the publication’s recent efforts is in article personalization. Readers can slide a bar at the top of the article to access several versions of an explainer story, at varying word counts, based on whether they want more or less context.

“We’re really thinking about how we can not create content silos, which is the typical group for personalization. We instead focus on form — how a particular news piece is delivered,” said Raghu, in a session on creating AI laboratories in newsrooms.

José Jasán Nieves Cárdenas, the editor-in-chief of El Toque, shared the latest update on his newsroom’s tool, which analyzes social media posts to output real-time currency exchange rates for the Cuban black market. Small businesses across Havana rely on the rates to set their daily prices. The newsroom, which largely operates in exile, is now rolling out a premium subscription version to create a new revenue stream and is also using the same AI technology to track grocery prices.

In Tunisia, the digital outlet Nawaat has been publishing independent accountability reporting since 2004. Just 10 days before the Festival, Nawaat received an order from Tunisian state authorities to suspend all of its activities for the month of November, according to its co-founder and publisher, Sami Ben Gharbia. In the face of increasing censorship, Ben Gharbia says they are leaning on AI tools to try to make Nawaat’s archive as accessible as possible, including its past reporting on the Tunisian revolution in 2011.

“We need a tool that will always make Nawaat a living memory of the entire recent history of Tunisia,” said Ben Gharbia, during a presentation on the outlet’s new website, Nawaat Chronicles. “[The site] actually summarizes the entire archive — 22 years of our content that is published in French, in English, and in Arabic.”

Similar to other interactive news chatbots, Nawaat Chronicles uses its story archive to generate a written summary on a given topic, place, or person, with links to the cited articles. This tool, though, can also generate a chronological timeline of events. Its “time machine” feature will even spotlight the biggest themes and stories published during a given month in Nawaat’s history.

There is no magic detection tool

The importance of knowledge sharing across regions felt especially urgent during a panel about AI detection tools and other methods for rooting out deepfakes on social media.

Celine Samson, head of online verification at VERA Files, said deepfake ads selling products and scamming social media users have spiked across the Philippines. Nieman Lab has reported on how similar scam ads — which regularly feature deepfakes of broadcast journalists, as well as other public figures and celebrities — have also touched down in India, South Africa, the U.S., and the U.K.

In the face of these deepfakes, newsrooms may be tempted to turn to the many for-profit AI detection tools that have flooded the market. But Samson and other speakers cautioned against an overreliance on these tools.

“I have instructed my team to never rely just on one tool to build your fact check, because we know that tools can be unreliable,” said Samson.

For Stephanie Burnett, a digital verification editor at Reuters who leads a global team of fact-checkers, auditing these tools has become part of the job. “My inbox is flooded with ‘we’ve got the magic tool’ and ‘this is the one,’” she said, explaining that there is no one-stop shop for AI detection. “What we’re trying to establish is our own Swiss army knife — so one that’s really good at detecting audio deepfakes, one that’s really good at visual deepfakes.”

Multiple speakers spoke highly of InVID, a video verification platform and plugin developed by Agence France-Presse (AFP) and a consortium of other European publications. InVID both integrates AI detection products and centralizes access to other analysis tools needed in the debunking workflow, like keyframe enhancers and reverse image search engines. Unlike some tools built by AI startups, InVID was developed in direct consultation with journalists, and some analysts said its reps are more accessible for troubleshooting.

In reviewing tools for your own newsroom, panelists said it’s important to remember that efficacy isn’t static, especially as new models launch. “Every time OpenAI updates its platform, or Google updates its platform, [these tools] have to play catch-up, so they’re always one step behind,” said Burnett.

Use AI to cover your blind spots

Throughout the conference talks, use cases for investigative journalism shone through. These tools sift through tremendous amounts of information to identify story leads and, ultimately, automate tasks that couldn’t be done at scale manually due to resource constraints.

Nowhere was AI’s potential for investigative reporting more apparent than during CalMatters’s presentation on its custom-built Digital Democracy website.

In 2024, the state-focused nonprofit publication helped launch a revamped version of the site, which brings together over 200 data points from government databases all over California, including the voting records of each of the state’s 120 legislators. “None of it is secret. It’s all government information, but it’s impossible to find. It’s in many different places. It’s in obscure databases. It’s not easy to get,” said Neil Chase, the CEO of CalMatters, in a talk about the project.

Most of this work is done by bots and scripts that scrape data from these databases, but some is still done the old-fashioned way. “We do have college students come in every six months. We feed them pizza and pay them $20 an hour to read these forms and turn it into data. [Some of] the stuff is just not machine readable,” Chase added.

The AI system flags notable votes, like when a specific legislator votes against the interests of a major donor. The end product is not a finished article, but a tip sheet — a short write-up that can be used by CalMatters staffers and other reporters to do the shoe-leather reporting still needed to file a story.

According to Chase, Digital Democracy now contains “every word spoken in the state government, every dollar donated to these politicians, every bill introduced, and every vote cast.” Reporters used that intel to publish an article and an Emmy-winning news segment on how California legislators regularly didn’t cast ballots at all, in an attempt to avoid the potential fallout of going on record on certain issues.

“They are now voting more often because of these stories we did,” said Chase. In September, CalMatters stepped outside of California and launched a similar tool for Hawaii’s state legislature in partnership with Honolulu Civil Beat.

In Spain, a fact-checking organization called Newtral has built out a custom AI tool to feed its reporters tips. One of the most challenging social media platforms for Newtral’s newsroom to monitor is the messaging app Telegram, which, among other challenges, doesn’t allow for keyword searches across the platform.

“Finding stories and communities that spread disinformation on Telegram is more of an artisan job,” said Marta Martínez Mora, a machine learning engineer at Newtral. That’s why Martínez Mora’s team decided to build an AI tool that could monitor every post on select suspicious channels and alert fact-checkers when a dangerous narrative was beginning to go viral.

The resulting tool, FactFlow, began by monitoring a list of known Telegram channels that spread disinformation. Today, it monitors roughly 7,000 Telegram channels. “We gathered messages from all those channels,” said Martínez Mora. “With that, what we actually did was train a model to understand disinformation patterns, to point out some contents that maybe were toxic or recurring dangerous narratives.”

FactFlow currently only monitors text-based messages, but as a next step the newsroom is hoping to build out similar functionality for image and video-based messages. To date, FactFlow has reviewed more than 10 million messages on Telegram. Now less time is spent in the newsroom simply monitoring channels, and more time on actually fact-checking posts.

“For some tasks, like identifying suspicious content and the [potential] falsehoods inside that content, before it took 45 minutes or even hours to do that. With FactFlow, it has taken one minute or seconds,” said Martínez Mora.

Photos of JournalismAI Festival 2025 at etc.venues Prospero House in London used courtesy of JournalismAI.