US President Donald Trump has taken a decisive step by signing an executive order that aims to prevent states from imposing their own regulations on artificial intelligence (AI). During a press conference in the Oval Office, Trump emphasized the need for “one central source of approval” to streamline the oversight of AI technology.
The administration, as noted by White House AI adviser David Sacks, plans to use the order to challenge state regulations it views as excessively burdensome. Importantly, the order will not target state AI rules concerning children's safety, an acknowledgment that some oversight is needed in that domain.
This executive order is perceived as a significant victory for leading technology companies, which have long advocated for a standardized national framework for AI legislation. Industry leaders argue that a patchwork of state laws could stifle innovation and place the United States at a disadvantage in the global race for AI supremacy, particularly against countries like China.
Despite the absence of a cohesive national policy on AI, more than 1,000 AI bills have been introduced across the states. In 2023 alone, 38 states, including California, a hub for major tech firms, enacted around 100 new AI-related regulations, according to the National Conference of State Legislatures. These rules vary significantly: one California law requires platforms to notify users when they are interacting with a chatbot, a safeguard aimed at minors. The state has also adopted a rule requiring large AI developers to formulate plans for addressing potential catastrophic risks posed by their models.
Meanwhile, other states have taken unique approaches. North Dakota enacted legislation to prevent the use of AI-powered robots for harassment, while Arkansas implemented restrictions on AI content to protect intellectual property rights. In Oregon, a new law prohibits non-human entities, including AI systems, from using licensed medical titles, such as “registered nurse.”
Opponents of Trump’s executive order argue that state regulations are essential given the current vacuum of federal guidelines. Advocacy groups have voiced concern that stripping states of the authority to set their own AI safeguards infringes on their right to protect residents. Julie Scelfo of Mothers Against Media Addiction articulated this sentiment, stressing the importance of state-level guardrails in the absence of substantive federal oversight.
California Governor Gavin Newsom, a vocal critic of Trump, condemned the order as a move designed to serve the interests of the president’s tech allies. He accused Trump of running a self-serving operation in the White House that prioritizes corporate advantage over public safety and regulation.
While major AI firms such as OpenAI, Google, Meta, and Anthropic did not provide immediate responses regarding the executive order, the tech industry’s lobby group, NetChoice, expressed support. “We look forward to working with the White House and Congress to set nationwide standards and a clear rulebook for innovators,” stated Patrick Hedger, the organization’s director of policy.
Michael Goodyear, an associate professor at New York Law School, echoed the industry’s concerns about the complications a fragmented set of state regulations could create. He argued that a single federal law would be preferable to conflicting state statutes, provided that the federal law is comprehensive and effective. The executive order marks a pivotal moment in the ongoing discussion around AI regulation, stirring debate over the balance between innovation and public safety.


