AI Laws in the US: 2025 Overview
- Jarvy Sanchez
- Aug 28
- 10 min read
On January 23, 2025, President Trump signed the Executive Order “Removing Barriers to American Leadership in Artificial Intelligence”, repealing President Biden’s earlier order on safe and trustworthy AI. At the same time, states are passing their own rules, resulting in a mix of federal actions and state laws that companies must follow when building or deploying AI.
Instead of a single, consistent framework, businesses now face overlapping rules. Teams shipping AI products need to track both federal policy and state requirements to avoid compliance risks.
Let’s take a look at the latest AI laws at both the federal and state levels and see what they actually mean when you’re building or deploying AI systems.
Note: This article is for informational purposes only and does not constitute legal advice. Companies should consult qualified legal counsel to understand specific compliance requirements.
Federal Executive Actions and Legislation
The federal government's approach to AI regulation underwent significant changes in 2025, moving from restrictive oversight to promoting innovation and competitiveness.
Executive Order 14179 - Removing Barriers to AI Leadership

Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," signed on January 23, 2025, aims to sustain and enhance America's global AI dominance by promoting human flourishing, economic competitiveness, and national security. The order represents a shift toward reducing regulatory barriers that might hinder AI development.
The administration's AI Action Plan identifies over 90 federal policy actions across three pillars: Accelerating Innovation, Building American AI Infrastructure, and Leading in International Diplomacy and Security. These actions focus on maintaining U.S. leadership in AI development rather than imposing restrictive regulations.
Repeal of Executive Order 14110 (Biden’s AI Safety Order)
On January 20, 2025, within hours of taking the oath of office, President Trump issued an executive order revoking Executive Order 14110, the prior administration's order on the safe, secure, and trustworthy development of artificial intelligence. This repeal eliminated many of the safety reporting requirements and oversight mechanisms established under the previous administration.
The administration directed agencies to identify any actions taken pursuant to Executive Order 14110 that are or may be inconsistent with, or present obstacles to, the policy set forth in the new order. This review process aims to remove regulatory barriers while maintaining necessary security measures.
Legislative Proposals: Deepfakes and AI Transparency
Federal legislative proposals continue to emerge, though a lack of consensus within and between the major parties makes comprehensive federal AI legislation unlikely in the near term. Current proposals focus on specific use cases rather than comprehensive regulation.
1. DEEPFAKES Accountability Act (H.R. 5586, 2023): Would require clear labels and embedded provenance in altered or AI-generated media (audio, video, or mixed). Platforms would need tools to detect and label deepfakes. Penalties include up to 5 years' imprisonment for malicious use (fraud, harassment, election interference) and civil fines up to $150,000 per violation. It also creates a DHS task force on detection tech, requires disclosure tools in deepfake creation software, and allows victims to sue for damages.
Status: The bill was introduced in September 2023 and referred to multiple House committees, but has not advanced further. It would take effect one year after enactment if passed.
2. CREATE AI Act (H.R. 2385, 2025): Proposes a National Artificial Intelligence Research Resource (NAIRR) to give researchers, nonprofits, and small businesses access to computing resources, datasets, training tools, and testbeds. NSF would manage the program, with oversight from the White House OSTP and input from advisory groups across sectors. Access would be free or low-cost, with priority for projects on safety, ethics, privacy, and trustworthiness. The bill aims to prevent AI research from being locked inside large tech firms by broadening access to infrastructure.
Status: Introduced March 26, 2025, with 17 cosponsors, referred to House Science Committee. The bill builds on recommendations from a 2023 task force report.
Federal Regulatory Framework
Currently, there is no comprehensive federal legislation in the US that regulates the development of AI or explicitly restricts its use. Businesses instead have to interpret how existing laws may apply to their systems.
Absence of Comprehensive Federal AI Law
Unlike the European Union’s structured approach under the EU AI Act, the United States has not enacted a nationwide AI law. Instead, regulation comes from a patchwork of federal agencies and state-level initiatives, each covering different aspects of AI.
The lack of a unified federal framework leaves companies managing compliance across overlapping rules, which increases planning complexity and resource demands.
Existing Federal Agencies Applying Authority
Federal agencies continue asserting jurisdiction over AI through existing statutory authorities:
Federal Trade Commission (FTC): Applies consumer protection laws to AI applications, particularly around deceptive practices and unfair business practices. The FTC has issued guidance on AI and algorithms, emphasizing truthfulness in AI marketing claims.
Department of Justice (DOJ): Enforces antitrust laws and civil rights violations related to AI systems, particularly in areas affecting competition and discrimination.
Equal Employment Opportunity Commission (EEOC): Addresses AI bias in hiring, promotion, and employment decisions under existing civil rights laws.
Consumer Financial Protection Bureau (CFPB): Oversees AI use in financial services, focusing on fair lending practices and consumer protection.
Copyright Office Guidance on AI Training (Fair Use Debate)
The U.S. Copyright Office has published two recent reports clarifying how copyright applies to AI.
From its January 2025 Part 2 Report:
Human authorship is required. AI outputs without meaningful human creative input are not protected.
Prompts are not enough. Even detailed or iterative prompting does not qualify as authorship.
Human contributions may qualify. Editing, modifying, or arranging AI outputs with creative judgment can support copyright claims.
Case-by-case assessment. The extent of human involvement determines copyrightability.
No new rights for AI outputs. The Office advises against creating copyright for machine-generated material, citing risks to human creators.
From its May 2025 Part 3 Report on AI Training:
Training may involve infringement. Copying protected works to build datasets or retaining expressive elements during training can implicate reproduction or derivative rights.
Fair use remains unsettled. Courts will decide based on context—purpose, transformation, amount used, and market impact. Outputs that substitute for human works weigh against fair use.
Licensing proposals face hurdles. Collective or compulsory licensing has been suggested but raises feasibility and legal concerns.
No legislative changes recommended. Existing fair use principles remain the framework for now.
Implications: Developers must weigh fair use carefully when training and deploying models, while creators continue pushing for licensing or opt-out mechanisms.
State-Level AI Governance
States are moving aggressively on AI. Nearly every state legislature has introduced AI-related bills, though most are pending or stalled, with only a handful enacted.
| Policy Area | Example Bills | Status | Main Point |
| --- | --- | --- | --- |
| Housing & Rent | | Pending | Limits AI rent pricing; bans collusion. |
| Consumer Protection | | Pending | Stops AI price discrimination and retail collusion. |
| Gov. Use of AI | | Some Enacted | Creates AI offices and oversight for government use. |
| Education | | Mostly Pending | Restricts AI in classrooms; mandates safety studies. |
| Health Care | | Mostly Failed | Requires human review in insurance and medical AI. |
| Labor & Jobs | IL S 2203 | Pending | Regulates AI hiring; anti-discrimination. |
| Criminal Justice | | Pending | Bans AI deepfakes, CSAM, and criminal use. |
| Elections | | Failed | Regulates AI in political ads and elections. |
| Data Privacy | | Mixed | Disclosure, transparency, provenance rules. |
| Cybersecurity | | Pending | Covers AI in infrastructure and model security. |
Early State Laws (2024)
Four states (California, Colorado, Texas, and Utah) have enacted forms of AI governance laws, with dozens of other states considering similar legislation.
1. Tennessee’s ELVIS Act - Protecting Voice and Likeness
Tennessee's Ensuring Likeness Voice and Image Security Act (ELVIS Act), passed on March 21, 2024, and effective July 1, 2024, prohibits using AI to mimic an artist's voice without permission. The law specifically targets AI voice cloning technology used to replicate performers' voices without consent.
The ELVIS Act establishes both civil and criminal penalties for unauthorized AI voice replication. It covers not just commercial use but any use that could harm the original performer's reputation or economic interests.
2. Utah’s Artificial Intelligence Policy Act
Utah was among the first states to enact broad consumer protection and transparency laws regulating the use of AI tools by private organizations. Its law requires businesses using AI in consumer-facing applications to provide clear disclosures about AI use.
The act includes liability protections for businesses that follow disclosure requirements and implement reasonable AI governance practices. It also established an AI policy office to provide guidance and monitor compliance.
2025 State Legislation and Trends
State legislatures have become the most active bodies shaping AI regulation. In 2024, more than 700 AI-related bills were introduced or debated nationwide, covering areas such as deepfakes, chatbot outputs, consumer protection, and data use. The activity has carried into 2025, with many states advancing from study committees and hearings to passing enforceable statutes.
1. AI Chatbot Transparency: Disclosure Requirements
Bills such as Illinois SB 1792 and Utah SB 226 would require generative AI owners, licensees, or operators to display notices to users that disclose the use of generative AI or warn users that AI-generated outputs may be inaccurate, inappropriate, or harmful. These disclosure requirements aim to ensure users understand when they're interacting with AI systems.
Typical disclosure requirements include:
Clear identification when users interact with AI chatbots.
Warnings about potential inaccuracies in AI-generated content.
Information about data collection and use by AI systems.
Contact information for human customer service alternatives.
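As a rough illustration of how a team might satisfy disclosure rules like these, here is a minimal Python sketch. The notice wording, contact address, and the first-turn/short-tag design are assumptions for the example, not language from any statute:

```python
from dataclasses import dataclass

# Illustrative notice text; actual wording depends on the applicable state law.
AI_DISCLOSURE = (
    "Notice: You are chatting with an automated AI assistant. "
    "Responses may contain inaccuracies. "
    "For a human representative, contact support@example.com."
)

@dataclass
class ChatReply:
    text: str
    disclosure: str

def reply_with_disclosure(model_output: str, first_turn: bool) -> ChatReply:
    """Attach an AI-use disclosure to every chatbot response.

    Shows the full notice on the first turn; later turns carry a short tag
    so the AI origin of the message stays visible throughout the session.
    """
    disclosure = AI_DISCLOSURE if first_turn else "[AI-generated response]"
    return ChatReply(text=model_output, disclosure=disclosure)
```

The key design point is that the disclosure travels with the reply object rather than being left to the UI layer, so every rendering surface receives it.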
2. Estimated Scope: States Introducing and Enacting AI Bills
During the 2024 legislative session, U.S. states, along with Puerto Rico, the U.S. Virgin Islands, and Washington, D.C., introduced multiple AI-related bills, with some jurisdictions enacting legislation or adopting resolutions. In 2025, activity expanded further, with all 50 states, Puerto Rico, the Virgin Islands, and D.C. introducing bills addressing artificial intelligence.

Federal vs. State Regulatory Dynamics
Differences between federal and state AI regulations create uncertainty for organizations trying to ensure compliance, since rules can vary across jurisdictions.
1. 10-Year Moratorium on State AI Laws
In 2025, there were proposals to temporarily pause state-level AI legislation while federal standards are developed. The idea is to avoid conflicting rules that complicate compliance when deploying AI across multiple states, giving a clear window for building consistent national guidelines. One such measure, a 10-year moratorium on enforcing state AI laws attached to a 2025 federal budget bill, was ultimately stripped out by the Senate.
2. Political Debate: Innovation vs. State Rights
The Trump administration has signaled a light-touch regulatory approach to AI, and states are moving to fill the resulting regulatory and legislative void. This creates tension between federal innovation priorities and state consumer protection goals.
Conservative states often favor lighter regulation to promote business development, while progressive states implement stricter consumer protection and civil rights measures. This division reflects broader debates about federal versus state authority in emerging technology regulation.
3. Tech Industry Push for Uniform Federal Regulation
Major tech companies increasingly lobby for uniform federal AI regulation to avoid complying with multiple conflicting state laws. Developers and deployers of AI systems operate within a growing patchwork of state and local laws, which complicates compliance.
The industry argues that inconsistent state regulations stifle innovation and create competitive disadvantages for U.S. companies compared to international competitors operating under single regulatory frameworks.
AI Compliance Focus Areas
Current AI legislation addresses specific problem areas rather than taking comprehensive approaches to AI governance.
1. Safety, Security, and Ethical Use
AI safety regulations focus on high-risk applications including military AI systems, biosecurity applications, and critical infrastructure. Current approaches emphasize risk assessment and mitigation rather than blanket restrictions.
Security concerns center on protecting AI systems from adversarial attacks, ensuring data security in AI training and deployment, and preventing AI systems from being used for malicious purposes.
2. Transparency and Consumer Protection
Transparency requirements typically include:
Disclosure when consumers interact with AI systems.
Explanation of AI decision-making in high-stakes situations.
Clear information about data collection and use.
Access to human alternatives when AI systems make consequential decisions.
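One common way to operationalize the explanation and human-alternative requirements is an audit record written for every consequential decision. The sketch below is a hypothetical schema; field names and the JSON format are assumptions for illustration, not requirements from any particular statute:

```python
import json
import datetime

def log_ai_decision(decision_id, model_version, inputs_summary, outcome,
                    explanation, human_review_available):
    """Build a JSON audit record for a consequential AI decision.

    Captures what the system considered, what it decided, the plain-language
    reason given to the consumer, and whether a human alternative exists.
    """
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs_summary": inputs_summary,   # data categories the system considered
        "outcome": outcome,
        "explanation": explanation,         # reason shown to the affected consumer
        "human_review_available": human_review_available,
    }
    return json.dumps(record)
```

Keeping records like this serves double duty: it supports consumer-facing explanations today and provides the documentation trail that impact-assessment laws increasingly expect.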
3. Discrimination, Fairness, and Civil Rights
AI bias prevention focuses on employment, lending, housing, and criminal justice applications. Regulations typically require:
Bias testing and impact assessments for high-risk AI applications.
Documentation of AI training data and model development processes.
Regular monitoring of AI system outputs for discriminatory patterns.
Remediation procedures when bias is detected.
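Monitoring for discriminatory patterns often starts with simple selection-rate comparisons across groups. The sketch below computes an adverse impact ratio; the 0.8 threshold echoes the EEOC's "four-fifths" rule of thumb, but this is an illustrative screening heuristic, not a legal test:

```python
def selection_rates(outcomes):
    """outcomes: iterable of (group, selected: bool) decisions from an AI system."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    Values below roughly 0.8 (the four-fifths rule of thumb) are a common
    trigger for deeper bias review; they are evidence to investigate,
    not proof of discrimination.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())
```

For example, if group A is selected 40% of the time and group B 20%, the ratio is 0.5, well under the 0.8 trigger, so the system's outputs would warrant a closer bias review.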
4. Artistic Rights and Copyright in AI Training
The growing movement by artists and creators to protect intellectual property drives legislation like Tennessee's ELVIS Act. Key issues include:
Unauthorized use of copyrighted content in AI training.
AI-generated content that mimics existing creative works.
Compensation for creators whose work trains AI systems.
Attribution requirements for AI-assisted creative works.
Looking Ahead: What to Watch in AI Regulation
The regulatory environment will continue evolving rapidly as both federal and state governments respond to AI developments.
1. Potential for Bipartisan Federal Action
Limited bipartisan cooperation exists around specific AI applications like deepfakes, child safety, and national security. However, broader AI regulation remains politically divided between innovation-focused and safety-focused approaches.
Areas with potential bipartisan support include AI in government services, cybersecurity applications, and international competitiveness measures.
2. State Measures as Innovation Testing Grounds
States continue serving as laboratories for AI regulation approaches. Successful state programs often influence federal policy development and other states' legislation.
Watch for states experimenting with regulatory sandboxes, AI audit requirements, and algorithmic accountability measures. These pilots provide data on effective AI governance without stifling innovation.
3. Global Trends and U.S. Competitiveness
The EU AI Act sets regulatory standards that influence AI practices internationally. China’s Global AI Governance Initiative (GAIGI), started in 2023, focuses on state oversight, international cooperation, and the Global South. U.S. policy aims to support private sector AI development while ensuring safety and security.
Final Takeaways on AI Regulation
AI regulation continues to change, but the core principles remain clear: transparency, accountability, and adaptability. Follow updates from agencies (FTC, Copyright Office) and state legislatures. For businesses, proactive compliance (documenting AI use cases, auditing for bias, and monitoring legal changes) reduces risk and builds trust.
Consult legal and compliance experts to ensure both development and deployment meet current requirements.
Frequently Asked Questions
What is the current US federal AI law?
As of 2025, there is no comprehensive federal AI law. Regulation is developing through executive orders, agency actions, and proposed legislation.
Which states have AI laws in 2025?
Four states, California, Colorado, Texas, and Utah, have enacted forms of AI governance laws. Tennessee also enacted the ELVIS Act protecting voice and likeness rights. Many other states have introduced AI legislation in 2025.
What does Executive Order 14179 do?
Executive Order 14179 aims to sustain and enhance America's global AI dominance by promoting human flourishing, economic competitiveness, and national security. It removes regulatory barriers to AI development and innovation.
What is the ELVIS Act?
Tennessee's Ensuring Likeness Voice and Image Security Act prohibits using AI to mimic an artist's voice without permission. It protects performers from unauthorized AI voice cloning and provides both civil and criminal remedies.
Are AI-generated works protected by copyright?
Current Copyright Office guidance generally excludes AI-generated works from copyright protection unless they contain sufficient human authorship. The fair use doctrine's application to AI training data remains an open legal question with ongoing litigation testing these boundaries.




