AI tools for journalists, and how to avoid ‘brain death’
For journalists in Britain, artificial intelligence is a double-edged sword.
On the one hand, AI (in the form of Google’s AI Overviews) is destroying traffic and forcing publishers to shift business models, while deceptive AI practices such as ‘fake experts’ are making the job of journalism harder than it has ever been.
But at the same time, many journalists lean heavily on AI in their daily work, with some harnessing exotic AI tools for their reporting, British journalists told Press Gazette.
Research by the Thomson Reuters Foundation shows that 53.4% of journalists express fears about AI and its impact on ethics and journalism – but at the same time, 81% use it daily.
Harriet Meyer, founder of AI for Media, which trains journalists and PRs how to use AI without compromising editorial standards, said some AI models are less prone to hallucination.
She said: “There are some specific tools which work particularly well for journalists, notably Google’s NotebookLM, which is a so-called ‘grounded’ AI model that pulls cited information from your source material. It now includes Gemini’s incredibly powerful ‘deep research’ tool too, enabling journalists to prompt for the most relevant story sources.”
NotebookLM
NotebookLM is available as part of paid-for plans on Google Accounts. It can turn one type of data into another at the touch of a button, Meyer says.
“Journalists can then upload and critically interrogate these, find the gaps, and turn info into data tables and infographics. It can also transcribe YouTube videos and interview recordings, making it super powerful.”
Claude Projects
More focused AI searches are also hugely useful, says Meyer.
She said: “The big generative large language models – ChatGPT, Claude, Copilot and Gemini – come with warnings about hallucinations and bias. But some tools within these, such as Claude Projects, can lock down specific instructions and focus the model on your uploaded data. Their memory capabilities are also amazingly powerful, enabling vast amounts of information to be uploaded and analysed in minutes. Human journalists simply don’t have the capability to spot the patterns and analyse so much information at once.”
Trint and Otter
Multiple journalists said they relied on auto-transcription services such as Trint and Otter to deal with phone, video and in-person interviews.
However, it should be noted that some newsrooms ban the use of Otter because it has been found to use transcriptions to train its AI tool and hold recordings indefinitely.
Simon Bainbridge, former editorial director of the British Journal of Photography, said: “Otter has made journalism just about sustainable, and I no longer have to spend hours or even days transcribing. (I don’t do quick interviews, and could never resist transcribing every word.) But, as you know, it’s far from perfect.”
Trint and other transcription services appear to offer better levels of security than Otter.
Bainbridge also experimented with custom GPTs – customised versions of ChatGPT built for specific tasks, in this case to clean up transcripts.
He said: “I fed them with my previous work and various titles’ style guides. It didn’t work out as well as I hoped. No matter how much I insisted on a verbatim transcript, minus repetitions, and ums and ahs, and correcting obvious errors, I found it always wants to ‘improve’ the text.”
Bainbridge says that he has ongoing issues with the style of writing ‘imposed’ on the text by AI.
He said: “I work mostly for arts publications, and I find it wants to impose a phraseology that I am seeing more frequently, but which has been emerging for years, which is a kind of compressed academic style that, when you break it down, often doesn’t actually say anything, but sounds clever. As someone who used to be an editor and who values readability and connection with the reader, I have always hated this pseudo-academic language, which seems to be AI’s default.”
Claude Cowork
The ‘agentic’ capabilities of Claude Cowork put it far above standard large language models such as ChatGPT, says tech journalist Kane Fulton.
‘Agentic’ AI refers to AI that can complete tasks by itself, rather than simply answering queries in a chat window.
Cowork can, for example, organise files, read emails and send them on your behalf.
Fulton, a journalist who has written for tech titles such as T3 and TechRadar, said: “When I use Google Deep Research or Perplexity for researching a topic, it feels like an assistant taking your brief, going away for 5-10 minutes to conduct the research, then coming back with a massive pile of papers that you have to sift through to find the most relevant information.
“Performing the same task in Cowork feels like having a researcher sit next to you who finds the information in half the time and then asks you if it’s on the right track before performing additional searches if required. It even asks you questions about what it’s found, with multiple-choice answers on what it does next. Loving it so far – just wish it had more generous limits on the ‘basic’ Pro plan.”
Perplexity
AI-enhanced search tools such as Perplexity are useful for research (and finding royalty-free images), says Pete Warren, an investigative technology journalist who has worked for titles including the Guardian and Sunday Times.
Various publishers are currently suing Perplexity for taking their content without consent.
Warren said: “I use Perplexity because it seems appropriate for a journalist, plus ChatGPT and Anthropic’s Claude. I use them for searches that are more nuanced than Google or Microsoft. I use a number of them together to find information and people.
“I always check sources and keep going over information. I don’t use them to do any writing because that way leads to brain death. If you don’t keep writing yourself, you lose the skills. It’s very easy to let the AI take over, and we have done a lot of research into that with psychologists.
“It’s a tendency that people don’t notice, but it’s like a virus: if you keep using spell check, your spelling gets lazy. I occasionally use AI to generate images because it’s quite good at getting something you want. I also use it to find royalty-free images.
“Researchers found that Army personnel in the US would not override the missile systems because they wanted to dodge responsibility for their actions. People abdicate responsibility very easily. AI erodes confidence and experience.
“I interviewed Edvard Moser, the Nobel Prize-winning neuroscientist, about this, and he said sat nav is a great example. We navigate by remembering landmarks and waypoints. We do that using reinforcement. During Covid, apparently, a lot of people lost mental route maps. AI is like sat nav. It has all of the data.”