'We’re not going to do a chatbot anytime soon': Notes on the RISJ’s AI and the Future of News symposium
On March 17, the Reuters Institute for the Study of Journalism (RISJ) gathered reporters, academics, and technologists at the University of Oxford for a one-day symposium on AI and the future of news.
Conversations about AI and journalism have been roiled in recent weeks by controversies over AI-generated misquotes, Slack channel leaks, and J-school curriculum critiques. The RISJ symposium took a step back from those debates to focus on the bigger picture: how this technology can be used ethically and effectively in journalism. Nieman Lab attended virtually. Here are a few of the most interesting tidbits we heard over the course of the day.
AI means computer vision, not just LLMs
A panel on AI adoption in investigative journalism reminded audience members that AI doesn’t just mean large language models (LLMs), but also technologies like computer vision.
Ryan McNeill, the enterprise editor for geospatial investigations at Reuters, said the first time he saw satellite imagery used in a news story was in the early 2000s. With advancements in machine learning, the geospatial investigations of old pale in comparison to the work he’s doing with his team today.
“If you’ve ever been to Google Earth Engine…you can process the entire Landsat archive going back to the [19]80s or ’70s in minutes or seconds,” said McNeill. “We can go through huge amounts of data even faster than we could’ve imagined.”
At Reuters, McNeill’s team has been using satellite imagery analysis to cover global conflicts and human rights abuses. That includes the Sudanese civil war, during which Reuters manually assembled a database of satellite imagery of cemeteries across the Darfur region. The imagery relied on remote sensing, in which satellites capture the reflection or emission of electromagnetic energy, at a level of granularity fine enough to detect small mounds and other topographical changes on the earth’s surface. As famine swept across Sudan in 2024, the Reuters team used the imagery to investigate the proliferation of grave sites in the region and begin to quantify the tremendous loss of life.
With this manually compiled dataset, McNeill says his team could train a model to identify cemetery sites and count new graves elsewhere. The same methodology can be used to trace other changes in landscapes, like vegetation loss, destruction of buildings, growth of urban development, and even impact craters. With computer vision models, reporters can analyze not just hundreds of images but tens of thousands.
“It gives you the opportunity to create empirical data where it doesn’t exist because of conflict or anything else,” he said.
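The core idea behind this kind of change detection can be illustrated with a toy example. Reuters' actual pipeline uses trained computer vision models on real satellite rasters; the per-pixel differencing below is only a minimal sketch of the underlying concept, and the function name and threshold are my own placeholders.

```python
# A minimal sketch of satellite change detection, assuming two co-registered
# grayscale rasters (brightness values 0-255) of the same area from different
# dates. Pixels whose brightness shifts sharply between acquisitions are
# flagged as candidate ground changes (e.g. a new mound or structure).

def detect_changes(before, after, threshold=40):
    """Return (row, col) coordinates of pixels that changed by more than threshold."""
    changed = []
    for r, (row_a, row_b) in enumerate(zip(before, after)):
        for c, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > threshold:
                changed.append((r, c))
    return changed

# Two tiny 3x3 "images": one pixel brightens sharply between dates.
before = [[10, 10, 10],
          [10, 10, 10],
          [10, 10, 10]]
after  = [[10, 10, 10],
          [10, 90, 10],
          [10, 10, 10]]

print(detect_changes(before, after))  # → [(1, 1)]
```

A trained model replaces the fixed brightness threshold with learned features, which is what lets it distinguish, say, a fresh grave mound from ordinary seasonal variation.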
The great chatbot debate
Newsrooms worldwide have been experimenting in recent years with reader-facing chatbots. These range from conversational archives and election-focused explainer bots to more general chatbots that pull from a specific desk’s beat reporting.
Chris Moran, the head of editorial innovation at The Guardian, expressed skepticism about this path forward for AI in journalism.
“Just because you point an LLM that you don’t own…at your archive, does that mean what it spits out is Guardian journalism?” Moran asked the audience.
He pointed to the fact that most outputs of an LLM-powered chatbot are unique or novel, surfacing questions of accountability when compared to the static, precise, and verified text of an article page. “That is a very interesting distinction between AI journalism and journalism. You could argue that they both have their upside.”
For now, Moran said, “there would be a very high bar for entry for The Guardian.”
Instead of delivering chatbots to readers, his team is building chatbots for reporters. An internal tool, Ask the Guardian, queries the publisher’s own API to fetch past stories on a given topic and summarize them. According to Moran, it is most helpful for archival searches, particularly for parsing stories published before the era of SEO optimization. All summaries include citations and URLs.
“It’s for people looking for old interviews with Peter Mandelson, or quotes from an individual person,” he said, explaining it is intentionally narrow and is meant to serve a basic research function.
That’s not to say that The Guardian isn’t experimenting at all with reader-facing AI tools. Just this week, Moran’s team rolled out a new A/B test for LLM-generated topic pages on The Guardian’s website. Curating and maintaining topic pages, or tag pages, can be an endless task for homepage editors and audience professionals. This new automated offering uses an LLM to extract three top “storylines” on a given topic and generate a short title for each of them. The tool then curates relevant articles under each storyline, with a clear disclaimer marking where AI-generated text appears.

The hope is that this use case can break up the wall of reverse-chronological articles on a topic page like China, to make the archive more digestible and, hopefully, improve click-through rates.
“We’ve built this exactly in opposition to the fact that we’re not going to do a chatbot anytime soon,” Moran said.
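The storyline-extraction step Moran described can be sketched roughly as follows. Everything here is a hypothetical stand-in, not The Guardian's actual implementation: the `llm` callable is injected so the pipeline can be shown (and tested) with a stub in place of a real model.

```python
# A hedged sketch of an LLM-driven topic page: ask a model for a few top
# "storylines" for a topic, then group recent articles under the storyline
# whose title shares the most words with each headline. All names are
# illustrative placeholders.

def build_topic_page(topic, articles, llm, n_storylines=3):
    headlines = "\n".join(a["headline"] for a in articles)
    prompt = (f"List the {n_storylines} main storylines in these {topic} "
              f"headlines, one per line:\n{headlines}")
    storylines = llm(prompt).splitlines()[:n_storylines]
    # Curate: attach each article to the storyline sharing the most words.
    page = {s: [] for s in storylines}
    for a in articles:
        words = set(a["headline"].lower().split())
        best = max(storylines, key=lambda s: len(words & set(s.lower().split())))
        page[best].append(a["headline"])
    return page

# Usage with a stubbed "LLM" that returns canned storylines.
fake_llm = lambda prompt: "trade tariffs\nclimate policy\ntech exports"
articles = [{"headline": "New trade tariffs announced"},
            {"headline": "Climate policy shift debated"}]
print(build_topic_page("China", articles, fake_llm))
```

In a production version the grouping itself would also be model-driven rather than word overlap, and the AI-generated titles would carry the disclaimer Moran mentioned.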
Using AI to fact-check livestreams
Brazilian fact-checking organization Aos Fatos has been finding its footing as a wave of AI-generated disinformation floods the information ecosystem.
Tai Nalon, the founder of Aos Fatos and a current fellow at RISJ, shared some numbers. Aos Fatos’ fact-checks of AI-generated disinformation increased by 70% in 2025 over the previous year. Out of 619 fact-checked false claims last year, 99 were of synthetic media of some kind. That represents about 16% of all the organization’s fact-checks. These were mostly image-driven, though Nalon said audio deepfakes have also been on the rise.
Through its monitoring of social media posts, the organization found that last year AI-generated false content reached over 32.6 million views across TikTok, Threads, X, and Kwai, a popular short-form video app in Brazil. It also tracked 2.1 million interactions, such as likes and shares, with AI-generated disinformation on Facebook and Instagram.
“This growth is directly linked to advancement in generative AI tools, especially hyperrealistic visual generation,” Nalon said. Anecdotally, she said, the organization’s fact checks of AI-generated content in 2024 were mostly focused on scams, but the content has increasingly become more political.
As Brazil looks ahead to its election in October, Aos Fatos is developing some internal AI tools to aid in its fact-checking work. Busca Fatos is a tool for fact-checking live events like debates, major speeches, and interviews in real time. An LLM-powered tool transcribes the stream and links it to specific fact-checks, explainers, and other relevant sources to help a human fact-checker verify statements and accelerate the editorial process.
Busca Fatos is currently best integrated with YouTube livestreams, Nalon said. “YouTube [has been] the main source of information for over a decade in Brazil, so that is our emphasis right now,” she said, adding that down the line Aos Fatos hopes to improve its compatibility with other livestreaming platforms, like Instagram. The organization also plans to make the tool available to other newsrooms.
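The matching step at the heart of a tool like Busca Fatos can be sketched simply: compare each transcript segment from the livestream against a database of published fact-checks and surface candidates for a human checker. Aos Fatos' actual implementation is LLM-powered and not public; the keyword-overlap retrieval and all data below are placeholder assumptions.

```python
# A minimal sketch of real-time fact-check retrieval: transcript segments
# are matched to prior fact-checks by shared vocabulary, and matches are
# handed to a human fact-checker for verification.

def match_fact_checks(segment, fact_checks, min_overlap=2):
    """Return URLs of fact-checks sharing at least min_overlap words with the segment."""
    seg_words = set(segment.lower().split())
    hits = []
    for fc in fact_checks:
        overlap = seg_words & set(fc["claim"].lower().split())
        if len(overlap) >= min_overlap:
            hits.append(fc["url"])
    return hits

# Placeholder fact-check database and transcript segment.
fact_checks = [
    {"claim": "vaccines cause autism in children", "url": "https://example.org/fc1"},
    {"claim": "the election was rigged by machines", "url": "https://example.org/fc2"},
]
segment = "the candidate claimed vaccines cause autism"
print(match_fact_checks(segment, fact_checks))  # → ['https://example.org/fc1']
```

A production system would use semantic matching (embeddings or an LLM) rather than literal word overlap, which is what lets it catch reworded versions of a debunked claim.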
You can watch the full day’s events on the RISJ YouTube channel.