Government by AI? Trump Administration Plans to Write Regulations Using Artificial Intelligence


The administrative state, long characterized by the measured cadence of deliberation, public notice, and multi-year regulatory refinement, is reportedly undergoing a radical acceleration under the second Trump Administration. At the nexus of this velocity shift lies a controversial initiative: the direct utilization of generative Artificial Intelligence for drafting federal regulations. As revealed in recent investigative reporting, this pivot is not a mere technological upgrade but a fundamental policy decision, placing the perceived imperative of technological supremacy above established mechanisms of administrative scrutiny. The Department of Transportation (DOT) stands as the vanguard of this experiment, signaling a profound trade-off in the governance of the nation’s infrastructure and safety standards.

The Doctrine of Velocity: AI Drafting in the Executive Branch

The core of this new administrative approach is captured by a mandate for speed that appears to supersede the pursuit of perfection. The Trump Administration, having made “unquestioned and unchallenged global technological dominance” a central tenet of its policy since January 2025, is now embedding AI directly into the rule-making process. This is explicitly being piloted within the Department of Transportation, the agency responsible for overseeing the safety of commercial aircraft, automobiles, and hazardous materials transport.

The DOT Pilot Program and Google Gemini

According to records reviewed by investigative journalists, the plan to integrate AI—specifically utilizing Google’s Gemini model—into the drafting of federal regulations was presented to DOT staffers in late 2025. The stated objective is to drastically compress the regulatory lifecycle. Where the drafting and refinement of complex transportation rules once consumed months, the new model suggests an output capability measured in minutes.

The push for this velocity is reportedly sanctioned at the highest levels. Gregory Zerzan, the DOT’s general counsel, has been quoted in meeting transcripts as articulating a stark departure from traditional regulatory quality metrics. His reported declaration emphasizes quantity and speed over granular accuracy: “We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ. We want good enough,” followed by the aggressive assertion, “We’re flooding the zone.”

This signals a pragmatic, if startling, acceptance of generative AI’s primary acknowledged flaw (hallucinations) in favor of rapid deployment. The goal is to produce a large volume of regulatory text quickly, with human personnel expected to provide oversight and refine the drafts. The department has reportedly already used AI to compose an unpublished rule for the Federal Aviation Administration (FAA), marking a transition from using AI for administrative tasks like data categorization to using it for substantive rule generation. Agency leadership has positioned DOT as the “point of the spear” for this broader federal initiative, intended to be the “first agency that is fully enabled to use AI to draft rules.”

The Policy Context: AI Supremacy and Deregulatory Momentum

The DOT’s AI rule-drafting experiment is not an isolated technological foray; it is a direct product of a deliberate, administration-wide policy pivot initiated early in 2025, centered on achieving global AI leadership.

Revoking Guardrails for Speed

Among the Administration’s first actions after taking office in January 2025 was the swift rescission of prior executive orders focused on AI safety. On January 23, 2025, the President signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” explicitly revoking the predecessor administration’s EO on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The stated purpose of EO 14179 was to clear the path for decisive action to retain global leadership, viewing previous policies as “barriers” to innovation.

This rollback extended to specific safety mandates. The previous administration’s requirements for developers of high-risk AI to report safety testing data to federal authorities prior to deployment were nullified. This move, aligned with a platform arguing that federal oversight stifled innovation, signaled a clear preference for an environment where the technology could advance unimpeded by pre-deployment scrutiny. Furthermore, the administration has taken steps to reorient federal AI oversight bodies, such as rebranding the AI Safety Institute to the Center for AI Standards and Innovation (CAISI), suggesting a functional shift away from risk assessment and toward enabling development.

The Deregulatory Campaign

The focus on speed through AI drafting is amplified by a parallel, aggressive campaign to eliminate existing regulatory burdens across government. In July 2025, the “Winning the AI Race: America’s AI Action Plan” called for slashing regulations across federal procurement, R&D, and infrastructure to hasten US deployment. This included directing the Office of Science and Technology Policy (OSTP) and the Office of Management and Budget (OMB) to lead a major effort to identify and eliminate rules hindering AI development.

Specific mechanisms were put in place to ensure the pace of deregulation met expectations. A directive from OMB in late 2025 urged agencies to speed up timelines, including bypassing the traditional notice-and-comment periods for repeals under a “good cause” exception to the Administrative Procedure Act (APA). Bypassing this public participation step, which can take a year or more, is a direct mechanism to prioritize speed over the traditional scrutiny offered by public engagement. This overall philosophy was reinforced by other EOs in 2025 aimed at lowering review thresholds and requiring agencies to repeal ten existing regulations for every one new regulation issued. The Administration’s stated aim is to reduce “unnecessary, burdensome, and costly federal regulations” and mandate that the cost of all new regulations be “significantly less than zero” for fiscal year 2025.

The Legal and Societal Challenge: Preemption and Risk

This administrative acceleration, particularly in the realm of AI rule generation, necessitates a confrontation with existing legal frameworks and societal expectations regarding safety and due process.

The Preemption of State AI Law

A key component of securing a unified, fast-moving AI ecosystem involves neutralizing regulatory fragmentation at the state level. In December 2025, an Executive Order titled “Ensuring a National Policy Framework For Artificial Intelligence” sought to significantly restrict states from independently regulating AI in ways deemed “onerous and excessive” or in conflict with federal priorities.

The rationale presented is that a “patchwork of 50 different regulatory regimes” hinders compliance, especially for startups, and slows down the US effort to win the global AI race against adversaries. The EO specifically targeted state laws that might mandate the embedding of “ideological bias” into AI models, citing the example of a new Colorado law banning “algorithmic discrimination” as a potential source of stifling, inconsistent compliance requirements. This push for federal preemption is designed to create regulatory clarity and uniformity, allowing AI development and the corresponding AI-drafted regulations to proceed at a national velocity, free from localized pockets of stricter scrutiny.

The Question of Consequence in Governance

The use of generative AI like Gemini, which is notorious for errors, to write rules governing complex areas like aviation safety and hazardous materials transport raises immediate and profound questions about accountability and public trust. Regulating safety standards for commercial aircraft—a core DOT function—is a task that requires subject-matter expertise and meticulous statutory knowledge. Relying on a model that might “hallucinate” introduces the potential for errors that could lead to litigation, accidents, or worse.

Skeptics argue that the perceived benefits of speed, namely the ability to “flood the zone” and quickly implement the administration’s agenda, are being weighed against the government’s fundamental duty to protect the public interest through carefully constructed, legally sound regulations. The DOT’s stated intention to compress timelines amounts to treating the pace of policymaking as an end in itself, a choice that may force a reckoning over how much risk is acceptable when a machine’s output is applied to lawmaking.

The Broader Trend: From Digitization to Automation of Law

This initiative represents an evolution from prior federal uses of AI. For years, agencies employed AI for document translation, data analysis, and categorizing public comments received during the regulatory process. The 2025 pivot marks a transition from using AI as an assistant to using it as a co-author of law.

This development occurs within an administration that has made technological acceleration a hallmark. The Office of Science and Technology Policy (OSTP) itself published a report in early 2026 highlighting the administration’s achievements, including the AI Action Plan and broader efforts to foster emerging technologies. The administration’s vision, as articulated by the President’s Council of Economic Advisors in January 2026, suggests this policy acceleration is intended to capitalize on a potentially transformative economic shift comparable to the Industrial Revolution, aiming to create a “Great Divergence” in growth favoring the US.

Federal agencies have generally been moving toward deeper AI adoption, with other departments, including DOT, looking at “agentic AI” capabilities that can take actions beyond simple text generation, such as analyzing weather data to generate alerts or reviewing grant applications for compliance. However, the direct application of generative AI to primary rule-writing—the act of creating the foundational text of regulation—is the most visible and consequential step yet toward an automated governance model.

The Promise and Peril of AI-Driven Governance

The administration sees significant promise in this shift. AI can potentially handle 80% to 90% of the manual work in drafting rules, turning a multi-month effort into a 20-minute task, thus aligning with the stated goal of modernizing operations and lightening the load on civil servants. The implied benefit is the rapid implementation of deregulation goals, ensuring that administrative policy keeps pace with the fast-evolving technological landscape that the administration seeks to dominate.

Conversely, critics and internal staffers point to the core mechanism of administrative law: public participation, expert consensus, and rigorous legal vetting. The move to bypass these steps for repeal actions and to adopt a “good enough” standard for new AI-generated drafts suggests a systemic de-prioritization of these checks. While the administration seeks to enhance AI infrastructure through accelerated permitting for data centers, the regulatory output itself is being generated at machine speed, with the human role reduced to error-checking a machine’s initial pass.

Conclusion: The New Speed of Government

The 2025 policy pivot, epitomized by the Department of Transportation’s experiment, represents a definitive governmental decision to prioritize accelerating national technological progress, and the perceived necessity of winning the global AI race, over the traditional, slower mechanisms of administrative deliberation and meticulous regulatory refinement. The entire architecture, from revoking prior safety mandates to pushing for state preemption to drafting rules with generative AI under a “good enough” ethos, reflects a commitment to sheer velocity. That commitment forces a fundamental societal reckoning regarding the acceptable level of risk in governance when the speed of policy creation is treated as the preeminent national security objective, creating a landscape where the machine dictates the pace of law, with all the promise and peril that entails.