The Trump administration is considering using artificial intelligence to draft federal transportation regulations, according to U.S. Department of Transportation records and interviews with six agency staff members.
The plan was presented to DOT staff last month during a demonstration of AI’s “potential to revolutionize the way we write rules,” agency lawyer Daniel Cohen wrote to colleagues. The demonstration, Cohen wrote, would showcase “exciting new AI tools available to DOT rule writers to help us do our jobs better and faster.”
Discussions about the plan continued among agency leaders last week, according to meeting notes reviewed by ProPublica. Gregory Zerzan, the agency’s general counsel, said at the meeting that President Donald Trump was “very excited about this initiative.” Zerzan seemed to suggest that DOT was at the forefront of a broader federal effort, calling the department “the tip of the spear” and “the first agency fully empowered to use AI to write rules.”
Zerzan seemed primarily interested in the quantity of regulations AI could produce, not their quality. “We don’t need a perfect rule on XYZ. We don’t even need a really good rule on XYZ,” he said, according to meeting notes. “We want good enough.” Zerzan added: “We are flooding the zone.”
These developments have alarmed some at the DOT. The agency’s rules touch virtually every facet of transportation safety, including regulations that keep planes in the sky, prevent gas pipelines from exploding and prevent freight trains carrying toxic chemicals from skidding off the tracks. Why, some staffers wondered, would the federal government outsource the writing of such crucial standards to an emerging technology known for making mistakes?
The answer from the plan’s boosters is simple: speed. Drafting and revising complex federal regulations can take months or even years. But with the DOT’s version of Google Gemini, employees could generate a proposed rule in minutes or even seconds, recalled two DOT staff members who attended the December demonstration. Besides, most of what appears in the preambles of DOT regulatory documents is just “word salad,” one staffer recalled the presenter saying, and Google Gemini can make word salad.
Zerzan reiterated his ambition to accelerate rulemaking through AI at last week’s meeting. The goal, he said, is to dramatically shorten the time it takes to develop transportation regulations, so that a rule could go from idea to a final version ready for review by the Office of Information and Regulatory Affairs in just 30 days. That should be possible, he said, because “it shouldn’t take you more than 20 minutes to get a draft of a rule out of Gemini.”
The DOT plan, which has not been previously reported, represents a new front in the Trump administration’s push to integrate artificial intelligence into the work of the federal government. This administration is not the first to use AI; federal agencies have gradually incorporated the technology into their work for years, particularly to translate documents, analyze data and categorize public comments, among other uses. But the current administration has been especially enthusiastic about the technology. Trump issued several executive orders in support of AI last year. In April, Office of Management and Budget Director Russell Vought issued a memo calling for accelerating its use across the federal government. Three months later, the administration released an “AI Action Plan” that contained a similar directive. None of those documents, however, explicitly called for using AI to write regulations, as the DOT is now considering doing.
These plans are already in motion. The department used AI to draft a previously unpublished Federal Aviation Administration rule, according to a DOT staffer briefed on the matter.
Skeptics argue that so-called large language models such as Gemini and ChatGPT should not be entrusted with complicated and consequential governance responsibilities, since these models are error-prone and incapable of human reasoning. But supporters see AI as a way to automate mindless tasks and extract efficiencies from a slow-to-evolve federal bureaucracy.
Such optimism was on display in a windowless conference room in Northern Virginia earlier this month, where federal technology officials gathered for an AI summit and discussed adopting an “AI culture” within government and “upskilling” the federal workforce to use the technology. Among them was Justin Ubert, division chief for cybersecurity and operations at the DOT’s Federal Transit Administration, who spoke on a panel about the Department of Transportation’s plans for “rapid adoption” of artificial intelligence. Many people view humans as a “chokepoint” that slows down AI, he noted. But eventually, Ubert predicted, humans will recede into a simple oversight role, monitoring “AI interactions.” Ubert declined to speak on the record to ProPublica.
A similarly optimistic attitude about the potential of AI permeated the December presentation at the DOT, which was attended by more than 100 DOT employees, including division heads, senior lawyers and regulatory office officials. Brimming with enthusiasm, the presenter told the audience that Gemini could handle 80% to 90% of the work of drafting regulations, with DOT personnel doing the rest, one attendee recalled.
To illustrate, the presenter asked the audience to suggest a topic on which the DOT might need to write a Notice of Proposed Rulemaking, a public filing that outlines an agency’s plans to introduce a new regulation or amend an existing one. He then entered keywords for the topic into Gemini, which produced a document resembling a notice of proposed rulemaking. But the document appeared to be missing the actual regulatory text that would go into the Code of Federal Regulations, one staffer recalled.
The presenter showed little concern that regulatory documents produced by AI might contain so-called hallucinations – erroneous text that is frequently generated by large language models like Gemini – according to three people present. Either way, that’s where DOT staff would come in, he said. “It seemed like his vision for the future of rulemaking at DOT was that our job would be to proofread this machine product,” one employee said. “He was very excited.” (Participants couldn’t clearly remember the main presenter’s name, but three said they thought it was Brian Brotsos, the agency’s acting director of AI. Brotsos declined to comment, referring questions to the DOT press office.)
A DOT spokesperson did not respond to a request for comment; Cohen and Zerzan also did not respond to messages seeking comment. A Google spokesperson did not provide comment.
The December presentation left some DOT staffers deeply skeptical. Rulemaking is complex work, they said, requiring expertise in the subject at hand as well as in existing laws, regulations and case law. Errors or oversights in DOT regulations could lead to lawsuits and even to injuries and deaths in the transportation system. Some rule writers have decades of experience. But the presenter seemed to brush all of that aside, attendees said. “This seems totally irresponsible,” said one, who, like the others, requested anonymity because they were not authorized to speak publicly on the matter.
Mike Horton, the DOT’s former acting director of artificial intelligence, criticized the plan to use Gemini to write regulations, likening it to “having a high school intern do the rulemaking.” (He said the plan was not in the works when he left the agency in August.) Noting the life-and-death stakes of transportation safety regulations, Horton said agency leaders “want to move fast and break things, but moving fast and breaking things means people are going to get hurt.”
Academics and researchers who track the use of AI in government have expressed mixed opinions about the DOT plan. If agency rule writers use the technology as a sort of research assistant with lots of oversight and transparency, it could be helpful and save time. But if they cede too much responsibility to AI, this could lead to deficiencies in critical regulations and run counter to the requirement that federal rules be based on reasoned decision-making.
“Just because these tools can produce a lot of words doesn’t mean those words constitute a high-quality government decision,” said Bridget Dooling, a professor at Ohio State University who studies administrative law. “It’s so tempting to try to figure out how to use these tools, and I think it would make sense to try. But I think it should be done with a lot of skepticism.”
Ben Winters, director of AI and privacy at the Consumer Federation of America, said the plan was particularly problematic given the exodus of subject-matter experts from government following the administration’s federal workforce reductions last year. The DOT has recorded a net loss of nearly 4,000 of its 57,000 employees since Trump’s return to the White House, including more than 100 lawyers, federal data shows.
Elon Musk’s Department of Government Efficiency was a leading proponent of AI adoption in government. In July, The Washington Post reported on a leaked DOGE presentation that called for using AI to eliminate half of all federal regulations, in part by having AI draft regulatory documents. “Writing is automated,” the presentation reads; DOGE’s AI program “automatically drafts all submission documents that lawyers can modify.” DOGE and Musk did not respond to requests for comment.
The White House did not respond to a question about whether the administration was also considering using AI in rulemaking at other agencies. Four senior administration technology officials said they were unaware of any such plan. As for the DOT’s “tip of the spear” claim, two of those officials expressed skepticism. “There’s a lot of posturing like, ‘We want to look like a leader in federal adoption of AI,’” one said. “I think it’s really about marketing.”