Why these researchers say AI could be mortal threat to democracy
Whatever benefits artificial intelligence might offer, it has become increasingly clear that the technology also carries significant harms, from rising carbon emissions to its alleged role in encouraging suicides.
In a new paper, a pair of legal researchers warn of an additional, profound and potentially far-reaching AI hazard. They argue that the technology is set to undermine democracy in America by damaging and potentially destroying the institutions that undergird it, including the rule of law and journalism.
“AI is anathema to the well-being of our critical institutions,” wrote Boston University law professors Woodrow Hartzog and Jessica Silbey in the draft of their paper entitled “How AI Destroys Institutions.” Due to that fact, they wrote, “absent rules mitigating AI’s cancerous spread, the only roads left lead to social dissolution.”
In their paper, which they said will be published in the UC Law Journal later this year, Hartzog and Silbey focus on institutions, which they define not as individual organizations but as the particular “field(s) of human action” in which such organizations operate, along with the values and norms of those fields. Under that definition, a hospital such as Zuckerberg General Hospital isn’t an institution, but the field of medicine or health care is.
The pair also take a broad view of AI. Their paper looks at the combined effect of not just generative AI systems such as OpenAI’s ChatGPT chatbot, but facial-recognition systems and similar predictive technologies, as well as automated decision systems, such as those sometimes used to set bail or to review job candidates.
Hartzog and Silbey argue that AI weakens institutions in three ways. First, it hinders people within institutions from developing the knowledge, skills and expertise needed to maintain or reinvigorate them. Second, AI systems used within institutions reduce or eliminate the role of humans in decision-making and deliberation, eroding the legitimacy of those decisions in the eyes of the people they affect and weakening institutions’ ability to respond to changing circumstances.
Third, the technology isolates people, reducing their ability to learn from, debate with, understand and reach consensus with others who hold different views or knowledge. Institutions can’t function without that consensus or the mutual respect that comes from interpersonal connections, the pair wrote.
Much of the social and human progress seen in the 19th and 20th centuries was built on the development of crucial institutions such as higher education and the legal system, Silbey told The Examiner.
Thanks to AI, she said, “I think we’re seeing [those institutions] erode right in front of our eyes.”
Jessica Silbey and Woodrow Hartzog’s paper examines uses of artificial intelligence that go beyond generative AI systems such as Anthropic’s Claude chatbot.
Kelsey McClellan © 2025 The New York Times Company
How the public and governments respond to the potential threats of AI, including to institutions, will be of great importance to San Francisco. The City has become ground zero for the AI industry, home to the two best-funded private companies in the sector — OpenAI and Anthropic, the latter of which reportedly signed a 13-year lease Friday to occupy a 27-story downtown office tower — and numerous smaller ones.
Thanks to those companies and the researchers and developers they employ, San Francisco has garnered an outsized share of venture-capital investment in recent years. And the industry’s surge has helped spark a revival in The City’s downtown and a rebound in its depressed office real-estate market.
But The City’s citizens also stand to be harmed by the technology, particularly if it undermines democratic governance.
Hartzog and Silbey argue that AI is doing just that: by hindering expertise, replacing human deliberation and isolating people, the technology is damaging institutions as disparate as medicine, the family, religion and finance.
But in their paper, they focus specifically on how AI is damaging the rule of law, higher education and journalism.
There have been numerous cases in which lawyers have filed court documents containing fake citations invented by AI systems. But in examining how AI is harming the rule of law, Silbey and Hartzog chose not to dwell on that issue. Such practices can damage the careers and reputations of the lawyers who relied on AI for their research, and those of their firms, but the pair focused instead on what they see as the higher-level threat AI poses to the institution as a whole, Silbey said.
As she and Hartzog lay out in their paper, that threat comes largely from the offloading of decisions to automated systems: how high bail should be set, how long criminal sentences should run, how benefits are calculated, or whom the IRS should target for audits. That practice is becoming increasingly widespread, the pair wrote, because of the sense that such systems are free from human bias and can make determinations dispassionately and accurately.
But such systems are essentially black boxes, they argue. It’s unclear exactly how they make their decisions or how they weigh particular factors. That undercuts the legitimacy of those decisions, Silbey and Hartzog argue. It also makes them unpredictable, they say — and that in turn violates the notion of equal justice under the law, because there’s no way to know whether the systems will apply the law in the same way in similar situations.
“Algorithmic invasions of our legal institutions subvert the reason we believe in and follow the rule of law,” they wrote in their paper.
Students during a class taught by Benjamin Breyer, a professor in Barnard’s first-year writing program who developed an AI chatbot that helps students craft thesis statements, in Manhattan on Oct. 29, 2025.
Hiroko Masuike © 2025 The New York Times Company
By contrast, AI is harming higher education in multiple ways, Silbey and Hartzog say. The technology undermines the development of knowledge and expertise, the foundations of that institution, by encouraging people to offload tasks that require deep thought.
Because of the way they are designed, AI systems produce homogenized content and suppress or discourage exceptional thoughts and insights, Silbey and Hartzog argue. Large language models such as the one underlying ChatGPT generate sentences and paragraphs by essentially predicting the most likely next words, based on the vast amounts of text they’ve been trained on.
Such technologies also prioritize fields of inquiry that can be easily quantified, thus neglecting or even discouraging areas such as the humanities that are less amenable to that type of study.