July 17, 2025

The Current State of AI Disclosure Laws

Disclosure laws are rapidly evolving—learn how AI regulation, compliance, and legislation are shaping the future of responsible automation in 2025.

As AI conversations grow indistinguishable from human ones, states across the U.S. are passing AI legislation requiring businesses to disclose when consumers are talking to a machine. This article breaks down the latest AI legislation, revealing key trends and what they mean for companies using generative AI.

Enacted State AI Legislation

The Utah Artificial Intelligence Policy Act: Utah became the first state to pass comprehensive AI consumer protection legislation when Governor Spencer Cox signed the Utah Artificial Intelligence Policy Act (UAIPA) on March 13, 2024. This AI legislation took effect on May 1, 2024.

The UAIPA applies to any business or individual using generative AI to interact with Utah consumers. It defines generative AI as “an artificial system that (a) is trained on data; (b) interacts with a person using text, audio, or visual communication; and (c) generates non-scripted outputs similar to those created by a human, with limited or no human oversight.”

The law imposes two key disclosure requirements, which vary by business type:

  • Businesses in regulated occupations (those requiring licensure or certification) must prominently disclose at the beginning of any communication that the consumer is interacting with generative AI.
  • All other businesses covered by Utah consumer protection laws must clearly and conspicuously disclose the use of generative AI if the consumer directly asks.

The California Bot Disclosure Law: While Utah’s UAIPA was the first comprehensive AI disclosure law, California laid early groundwork with its 2019 Bot Disclosure Law. This law requires companies to disclose when Internet-based bots are used to knowingly deceive individuals for commercial gain or to influence voting in elections. Though not originally designed for generative AI, it likely applies to AI chatbots. However, it only governs bots deployed online—not those used in other media.

The California AI Transparency Act: Passed on September 19, 2024, and effective January 1, 2026, the California AI Transparency Act (SB 942) targets "covered providers," meaning those whose generative AI systems have over 1 million monthly users in California. The law requires these providers to offer:

  • A free, publicly available tool to detect AI-generated content
  • Options for both hidden and visible disclosures on AI-generated outputs

Violations are subject to a $5,000 fine per incident.

The Colorado AI Act: Signed by Governor Jared Polis on May 17, 2024, the Colorado AI Act (SB 24-205) takes effect on February 1, 2026. Although its primary focus is preventing algorithmic discrimination in “consequential decisions” like healthcare and employment, the law also includes AI disclosure considerations.

Pending AI Legislation

Several states have introduced AI legislation bills in 2025 that would require notification when consumers interact with AI systems:

Alabama (House Bill 516): Would make it a deceptive practice to engage in commercial transactions through chatbots or AI agents that could mislead consumers into believing they're communicating with humans without providing clear notification.

Hawaii (House Bill 639): Would classify as unfair or deceptive the use of AI chatbots capable of mimicking human behavior without first disclosing this to consumers in a clear and conspicuous manner. Notably, the bill includes exemptions for small businesses that unknowingly utilize AI chatbots.

Illinois (House Bill 3021): Would declare it unlawful to engage in commercial transactions where consumers communicate with AI systems that could be mistaken for humans without clear notification, regardless of whether consumers are actually misled.

Maine (House Paper 1154): Would categorize as an unfair trade practice the use of AI chatbots in commercial transactions that could mislead consumers without proper notification.

Massachusetts (Senate Bill 243): Would designate as unfair and deceptive any commercial transaction where consumers interact with AI that might mislead them into believing they're engaging with humans, unless consumers receive clear notification.


AI Legislation At The Federal Level

The AI Disclosure Act of 2023 (H.R. 3831) was introduced on June 5, 2023. This bill would require generative artificial intelligence to include on any output a disclaimer stating: "this output has been generated by artificial intelligence." This AI legislation would grant enforcement authority to the Federal Trade Commission (FTC), treating violations as unfair or deceptive practices. However, the legislation has never made it out of the committee stage, and legislative trackers now list H.R. 3831 as dead.

Opposition to State Regulation: A recent development threatens all existing and pending state legislation governing AI. On May 13, 2025, House Republicans introduced a budget reconciliation bill that would prohibit states from enforcing "any law or regulation" concerning automated computing technologies for ten years following the bill's enactment. If passed, this would effectively end existing state-level AI laws and prevent new ones from taking effect.

Critics, including advocacy groups like Americans for Responsible Innovation, warn this could lead to "catastrophic consequences" for the public, while supporters (including major tech companies) argue for federal oversight rather than a fragmented system of state regulations.

Implications for Businesses

As states continue to lead the way in regulating AI transparency, businesses must navigate an increasingly complex regulatory landscape. The trend toward requiring disclosure when using generative AI for consumer interactions shows no signs of slowing, despite opposition from some quarters. Companies that proactively implement transparent AI disclosure practices will be better positioned to comply with both existing and future regulations while maintaining consumer trust in an increasingly AI-driven economy.

To meet the challenge presented by the growing patchwork of state AI legislation and disclosure laws, businesses should:

1.  Develop AI governance programs that include disclosure mechanisms adaptable to various state requirements.

2.  Consider notifying consumers when they're interacting with AI, regardless of whether disclosure is required.

3.  Monitor legislative developments as more states introduce and pass AI disclosure requirements.

4. Prepare for potential federal legislation that could eventually preempt state laws and create uniform national standards.
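As an illustration of the first step above, a governance program might maintain a per-state rule table that products consult at runtime to decide which disclosure behavior to apply. The states, flags, and default below are simplified assumptions drawn from the laws discussed in this article; they are a design sketch, not legal advice or a complete survey.

```python
# Illustrative per-state disclosure rule table for an AI governance program.
# Flags are simplified summaries of the laws discussed above (assumption),
# not a complete or authoritative statement of any statute.

STATE_DISCLOSURE_RULES = {
    "UT": {  # Utah AI Policy Act (effective May 1, 2024)
        "disclose_upfront_if_regulated": True,
        "disclose_on_request": True,
    },
    "CA": {  # Bot Disclosure Law (2019) and SB 942 (effective Jan 1, 2026)
        "bot_disclosure_online": True,
        "detection_tool_if_covered_provider": True,
    },
    "CO": {  # Colorado AI Act (effective Feb 1, 2026)
        "consequential_decision_notices": True,
    },
}

def applicable_rules(state: str) -> dict:
    """Return simplified disclosure obligations for a consumer's state.

    Defaulting to a recommendation of voluntary disclosure is the
    conservative choice while pending bills in other states advance.
    """
    return STATE_DISCLOSURE_RULES.get(
        state, {"voluntary_disclosure_recommended": True}
    )
```

Centralizing the rules in one table makes it straightforward to update the program as new state laws pass, rather than hard-coding disclosure behavior into each product.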


FAQs

What are AI disclosure laws and why are they being introduced?

AI disclosure laws require businesses to notify consumers when they’re interacting with AI systems—covering chatbots, voice agents, and generative content. These laws address rising concerns over deceptive AI use and aim to maintain transparency in automated communications. Legislators introduced these mandates to safeguard user rights and preserve trust as AI interactions become more sophisticated.

Which U.S. states have enacted AI disclosure laws as of 2025?

As of July 2025, states like California (SB-942), Utah (SB-149), Illinois, Colorado, and Kentucky have enacted laws requiring AI-generated content to be labeled or users notified. Additionally, Tennessee’s ELVIS Act addresses voice deepfakes. Over 15 other states have proposed similar legislation focused on AI transparency and deepfake disclosure.

How do AI disclosure laws differ between U.S. states and the EU AI Act?

U.S. AI disclosure laws are state-specific and label-driven, focusing on consumer notification. In contrast, the EU AI Act (effective Aug 1, 2024) implements a risk-based framework: high-risk systems require transparency, technical documentation, and watermarking of generative AI outputs. U.S. states often reinforce consumer protection but lack the EU's unified governance approach.

What business operations are covered under current AI disclosure regulations?

Disclosure obligations now apply to any AI-generated content used in customer service bots, automated voice agents, marketing campaigns, Telehealth, deepfake content, and even election communications in some states. Businesses must clearly identify AI-generated content and may need to provide detection tools (as mandated in California) or embed visible labels identifying “AI-generated content.”

Are there federal AI disclosure requirements in the U.S.?

Currently, no federal law mandates AI disclosure, but a proposed national moratorium on state AI laws and the Generative AI Copyright Disclosure Act (H.R. 7913) are under consideration. The AI Disclosure Act of 2023 proposed labeling requirements for AI-generated responses but has yet to pass. This vacuum leaves compliance fragmented and driven by aggressive state-level legislation.

How quickly is the regulatory landscape evolving for AI disclosure?

AI lawmaking is accelerating—more than 550 AI-related bills emerged across 45+ U.S. states in 2024–2025. The EU AI Act is rolling out in stages through August 2027. U.S. deepfake, accuracy, and labeling laws are accelerating, signaling that all AI firms must proactively integrate evolving transparency measures.

How should companies prepare to comply with AI disclosure regulations?

Businesses must implement AI governance programs, including system inventories, user disclosures (“I’m an AI”), audit logs, staff training, and legal monitoring tools like the IAPP tracker. EU compliance requires technical documentation, watermarking standards, and risk assessments. A proactive rollout ensures adaptability and minimizes future rework.

What are the risks of non-compliance with AI disclosure laws?

Violating disclosure laws can result in federal and state enforcement fines, consumer lawsuits, and reputational damage. Under the EU AI Act, non-compliance can lead to fines of up to 7% of global turnover or €35M. Additionally, deepfake misuse or deceptive advertising claims can escalate liability further—making proactive compliance essential for business longevity.