Trump Administration Official Says Quiet Part Out Loud on AI-in-Government Plans
Last week, ProPublica reported that the United States Department of Transportation is planning to use Google Gemini, a large language model, to draft federal transportation regulations. Writing a federal regulation is frequently a long and intensive process. Agency officials apparently believe they can outsource “80% to 90%” of that work, usually done by legal and policy experts, to artificial intelligence, “revolutioniz[ing] the way we draft rulemakings.” As the agency’s general counsel put it, “it shouldn’t take you more than 20 minutes to get a draft rule out of Gemini.” DOT plans to be the “point of the spear” of a broader federal effort to use LLMs to speed rulemaking. That is consistent with reporting last summer that the erstwhile US DOGE Service hoped to use AI to facilitate the rescission of half of all federal regulations in a matter of months.
Clearly, the Trump administration is all-in on regulation-by-AI. Others are more skeptical: although AI has the potential to greatly aid the work of federal regulators, agencies that over-rely on LLMs to do their work for them expose themselves to legal and policy risk. What’s most remarkable about the DOT’s plans is that agency leaders seem to have little interest in mitigating or avoiding those risks. Instead, they apparently welcome them as an acceptable price to pay for speed and volume. “We don’t need a perfect rule on XYZ. We don’t even need a very good rule on XYZ,” the agency’s general counsel reportedly boasted. “We want good enough. We’re flooding the zone.” The administration, at least behind closed doors, seems to have dropped the pretense that good governance is the goal.