Journalists may see AI as a threat to the industry, but they’re using it anyway
A few larger surveys of how journalists use artificial intelligence have been conducted before, but some mostly surveyed early adopters and others did not distinguish between current use and planned future use.
So we decided to survey a representative sample of journalists in the U.K., asking about their own and their newsrooms' actual use of AI and how they perceive and approach it. The results appear in our recent report, published by the Reuters Institute for the Study of Journalism at the University of Oxford.
Overall, the results show that most journalists (56%) use AI professionally at least weekly, including 27% who use it daily. Only 16% said they had never worked with AI on a journalistic task.
The three most frequent uses are all forms of language processing: transcription, translation, and grammar checking. These tasks may top the list because the accuracy problems associated with AI output are of less concern in these contexts than they would be for tasks such as fact-checking. Nevertheless, our findings clearly show that journalists are also using AI for substantive journalistic tasks. More than a fifth use AI at least monthly for “story research,” and 16% for “idea generation” and “generating parts of text articles.” At the other end of the scale, AI is rarely used for still image or video generation.
Who is using AI more frequently?
Male journalists reported somewhat higher levels of AI use than their female colleagues, and younger journalists use AI more frequently than older ones. Our findings also show that AI use increases with management responsibility. One possible explanation is that those with more management responsibility face fewer restrictions on their use of AI than those with less. AI use was also linked to some of the reporting beats journalists worked on: we found business journalists use AI significantly more frequently than those reporting on lifestyle topics.
In the survey we asked journalists to tell us which media formats they produced in. We found that photojournalism is associated with less frequent use of AI. By contrast, being involved in the production of “graphics, cartoons, illustrations, or animation” is associated with more frequent AI use. We also found that the more of these media formats journalists produced in, the more frequent their AI use. They may be turning to AI to try to reduce the pressure of producing journalism in multiple formats. Or AI may be enabling journalists to produce in more formats.
AI and job satisfaction
It has often been suggested that the use of AI in journalism will relieve journalists from low-level tasks, freeing up time for them to work on more complex and creative tasks. Our survey results are not aligned with such suggestions. We found that more frequent AI users are more likely to believe they work on low-level tasks too much. One explanation could be that AI use comes with new, AI-specific, low-level tasks, such as cleaning data and checking AI output. Another explanation could be that journalists who feel they work on low-level tasks too frequently are using AI more often to try to lighten this aspect of their workload.
Our survey also shows that more frequent AI users are not more satisfied with the amount of time they spend working on complex and creative tasks such as in-depth interviews and investigations. Indeed, those who are most satisfied are journalists who do not use AI at all.
Opportunity or threat?
We found that a clear majority of journalists (62%) think that AI represents a large or very large threat to journalism, whereas only a small minority (15%) say it’s a large or very large opportunity. Despite younger journalists being more likely to use AI, they don’t see it as less of a threat or more of an opportunity.
However, management responsibility does make a difference. Those in the more senior roles are more positive toward AI, although they still think it’s more of a threat than an opportunity. Journalists with higher levels of AI knowledge are also more likely to see AI as an opportunity.
But the largest differences are to do with frequency of AI use. Those who use AI daily are one of the few groups without an overwhelmingly pessimistic view of AI's potential impact on journalism. This highlights the importance of technology use in shaping attitudes.
Ethical concerns
Part of the perceived threat to journalism posed by AI lies in the potential ethical concerns it raises. To dig deeper into this issue, we asked journalists how concerned they were about a range of different potential ethical consequences. Overall levels of concern are very high. For example, more than half say that they are extremely concerned about negative impacts on public trust in journalism, on accuracy, and on originality.
Most groups of journalists share these ethical concerns. However, there are some differences. Concern tends to be higher among those with higher levels of AI knowledge, but lower among those who use AI daily.
Present and future newsroom integration
Forty percent of the journalists we surveyed reported that AI has not been integrated into their main newsroom’s processes at all. Where it had, integration was mostly limited. Just 11% of journalists said AI integration was “moderate,” with very few describing it as extensive or full.
We see more integration in news outlets that are part of conglomerates than at independently owned outlets. Independents may be more flexible, allowing their journalists to adopt AI in an ad hoc way. However, conglomerates are more likely to be able to roll out AI companywide, due to having dedicated AI staff and more resources.
Although U.K. newsrooms have so far achieved only limited AI integration, if any, journalists do expect this to change. Overwhelmingly, they think AI integration will increase at their main news outlet. This is especially the case among journalists whose main outlet is part of a conglomerate rather than independent.
Integration of AI into newsrooms comes with various practical and organizational issues that news outlets must consider. So how are news organizations approaching issues such as AI guidelines, tool selection, and training?
Around 40% of U.K. journalists reported that their main news outlet has established guidelines for most of the issues we asked about, including human oversight, data privacy, and transparency. However, fewer said that there were guidelines on AI bias.
Around one third (32%) of U.K. journalists report that their news organization provides AI training. Journalists working for conglomerates are more likely to say their news organization provides AI training (50%) than those working for independent outlets (14%), likely reflecting conglomerates' greater resources.
In the survey, we made a distinction between AI tools developed in-house and by third parties. Given the skills and resources required to develop AI, it’s not surprising that 57% of journalists say that their main outlet only uses third-party tools. Independent outlets are more likely to only use third-party tools than conglomerates.
Overall, our data shows a gap between independent outlets and conglomerates, a gap that journalists expect will increase. Unless action is taken, less well-resourced news outlets may continue to integrate AI less, provide their staff with less AI training, and be more dependent on third-party AI tools.
You can read the full report here.
Neil Thurman is a professor of communication in the Department of Media and Communication at LMU Munich and a senior honorary research fellow in the Department of Journalism at City St. George’s, University of London. Sina Thäsler-Kordonouri is a teaching and research associate in the Department of Media and Communication at LMU Munich. Richard Fletcher is director of research and deputy director of the Reuters Institute for the Study of Journalism.