'Like nailing Jell-O to a wall': Why unions are struggling to protect journalists’ rights in the age of AI
ProPublica journalists walked off the job for 24 hours after more than two years of negotiations failed to yield a union contract, one that would have included terms on AI use and a ban on AI-related layoffs.
Meanwhile, in Italy, the country’s main journalists’ union called for two strike days over publishers refusing to accept basic rules on the use of artificial intelligence. And at The New York Times, according to Axios, editorial union leaders told the newspaper’s management its AI standards are too vague and inadequate, creating editorial problems and trust issues.
As AI is becoming a defining issue for labor unions, I spoke with four journalism union representatives from the United States, the Philippines, and Greece to find out how their organizations are protecting their members from any potential labor changes that AI might bring.
The unions versus AI
No union I spoke to reported that any of its members had been replaced by AI. But one of their central concerns has always been ensuring that human staff are protected as these technologies become widespread. Collective bargaining agreements help enact these protections. Some agreements explicitly state that AI cannot be used to displace a staff member, as in the News Media Guild’s contract, while others mandate higher severance pay for AI-related layoffs, as in the PEN Guild’s.
However, AI use at work raises many complex issues beyond layoffs, said Tony Winton, chief administrative officer of the News Media Guild, which represents workers at newsrooms including the Associated Press and The Guardian in the United States. Unions have the right to bargain not just over whether jobs remain, but also over working conditions and how AI changes the way people do their jobs.
“The more difficult issue is which uses are allowed, short of something that actually changes the size of the workforce, and there are a lot of very thorny issues here,” he said. “We have an active working group of members who want to expand this conversation with the AP. The contract language we have is good. But as more and more uses are being found for the technology, we need to have a conversation.”
The specific uses of AI in a newsroom, and how those uses affect journalists’ work beyond layoffs, came up as a grievance that every union representative I spoke to has raised with management. While unions care about core issues like jobs, pay, and working conditions, Winton said, AI also raises serious concerns, about journalistic accuracy, for example, that managers need to address.
“AI has struggled with a lot of fabrication problems,” he said. “So, for a person with a byline and a public identity, AI is a real concern. You don’t want to incorporate inaccurate work into your reporting that affects not just journalism quality, but also the reputation of the person whose name is attached to the story itself.”
Ariel Wittenberg is a public health reporter and the unit chair of the PEN Guild, which represents workers at Politico and E&E News. Like Winton’s, her union has not yet seen layoffs due to AI, but her concerns extend to how AI is used and how it can affect journalists’ work and journalism ethics.
She described two recent incidents at Politico, where managers were required by contract to warn the union in advance and negotiate before using AI in ways that meaningfully affect employees’ job duties. Politico ignored this clause and deployed two AI initiatives without telling them: one used AI to generate written coverage of the Democratic National Convention and the other one was a deal with Capital AI to automatically produce reports.
“We think they violated the contract, which says that any AI use has to be done in accordance with Politico’s standards of journalism ethics and with human oversight,” said Wittenberg. “If something is coming back with inaccuracies, if it’s not following our stylebook in other ways, and no corrections policy is applied, that is not up to Politico’s ethics.”
“An existential threat”
Establishing protections on AI-related issues hasn’t been easy for journalists in other parts of the world. A newsroom manager who is also a director of the National Union of Journalists of the Philippines (NUJP) spoke to me on condition of anonymity about how difficult it is to establish protections for workers on these issues.
He said most newsrooms have only general provisions on using AI responsibly and ethically. But nowhere is it stated that AI will not be used to replace journalists. “This is an existential threat,” he told me. “My hope is that at some point [managers] will realize it and then we will have to adjust our policies on it.”
The Philippines’ national union does advocacy work; bargaining power rests with unions at individual employers. There is no authoritative count, but such company-level unions in the Philippines are few, unevenly distributed, and far less institutionalized than their European counterparts. The newsroom of the manager I spoke to, for example, doesn’t have a union at all.
“[The] most the NUJP can do is to issue statements to create noise, to try to advance the conversation, and to call attention to certain issues,” he said. “The most you can do is to recommend. We have to set these policies in stone and encourage media owners to craft a policy that would protect their workers from the threat of AI.”
Journalists in other countries face a similar challenge. Greek journalist Sotiris Triantafyllou, president of the Panhellenic Federation of Journalists’ Union, describes AI adoption in his home country as not quite as expansive as in Northern Europe. This has allowed his union to be ahead of the curve domestically. In 2025, for example, they launched a code of ethics now adopted by the five unions of the federation.
“Now we are in discussions with managers and media owners. I don’t know what will happen in the future. But for now they agree with us, and I think they are in a mood to protect journalists,” Triantafyllou said.
“Like nailing Jell-O to a wall”
What all the union representatives I spoke to want is a bottom-line commitment to human-led journalism, a guarantee that AI will not take over skilled labor. There is broad support for using AI for “housekeeping” tasks like transcription, translation, and summarizing large datasets. But pushback arises when managers deploy tools that automate creative and journalistic work.
“We try to protect the central role journalists play because we believe that AI cannot replace them,” Triantafyllou said.
Unions often have to play whack-a-mole to address all the potential effects AI can have on workers. The initial question was perhaps “Will AI come for my job?” But now a myriad of other questions arise: If an employer sells journalistic content to an AI company, should the employees who produced that content be compensated? Is the use of AI optional, or will employees be replaced if they don’t adopt it? Will there be universal training for employees on these tools?
“It’s a moving target. It’s like nailing Jell-O to the wall, because you think you’ve got something done, and then the technology changes again,” Winton said. “When you are hired to do a job, you are hired to write a story for a publication, not to be part of this blob of AI that goes on forever. There’s a lot of interesting things that people are thinking through.”
Some news organizations are now trying to increase their output with the help of AI and AI-assisted reporters, such as U.K. local news publisher Mediahuis. Recently, Fortune editor Nick Lichtenberg came under scrutiny after a profile detailed how he used AI to crank out more than 600 stories. On use cases like these, the journalists I spoke to believe in having a seat at the table: as AI writing becomes an unavoidable reality of journalism, journalists should have a say in how AI is used in their newsrooms, rather than leaving those decisions solely to executives eager to adopt the technology.
The newsroom manager and union director from the Philippines believes that while AI writing in journalism is seen as deeply unsettling because it threatens human creativity, authenticity, and editorial craft, its spread is still inevitable as economic pressure will push newsrooms to adopt it.
“It’s sad and tragic in a lot of ways, and many of us are mourning the kind of journalism we are used to, but the reality is ChatGPT, Gemini, and others are already capable of replicating the way humans speak and write, and they’ve been able to do so for quite some time now,” he said.
Despite these ongoing challenges, unions seem more important than ever. The representatives I spoke to highlighted a number of victories, from proactive negotiation with management in Triantafyllou’s case in Greece to securing binding arbitration in the United States.
“AI is something that is already impacting our industry, and union contracts are one way that journalists can have a say in how AI is deployed, rather than leaving those decisions up to news executives or corporations,” said Wittenberg.
A difficult balance
Few industries show financial distress as clearly as the news industry: repeated waves of job cuts, declining engagement, and precarious business models. In light of these existential challenges, AI has been presented as both a problem and an opportunity for growth.
No newsroom wants to be left behind, and some now echo Silicon Valley in the language of their adoption, pursuing rapid experimentation, “content scaling,” and “liquid content.”
Wittenberg has found AI to be a useful tool in handling large data sets or doing menial tasks like transcriptions. But she thinks some newsrooms have lost sight of why audiences come to them: because they want accurate and factual news.
“In the rush to innovate, news organizations think they are competing with tech companies,” she said. “The reality is that we are still news organizations and that means that we have an obligation to our ethics and to give our readers accurate factual news and to be held accountable when we make mistakes.”
My source in the Philippines admitted that protecting media workers from AI’s potential harms will be difficult because the news industry largely regulates itself and media owners are not naturally incentivized to put strong protections in place.
“They are looking at how they can make news more efficient, how they can save more money, how many employees they can let go because AI can do the work that they’re doing,” he said.
In his view, despite having limited power, journalists and their unions should still push to protect their own rights and the industry as a whole as many concessions will happen due to public pressure and broader public opinion.
“There’s always been that kind of divide between those who own the news media and those who are the news media,” he said. “As journalists, we have to be prepared because this is going to be an uphill battle.”
Gretel Kahn is a journalist at the Reuters Institute for the Study of Journalism, where this story was originally published.