AFTF's Comment on State Laws Having Adverse Effects
Alliance for the Future Urges Federal Action to Prevent a Patchwork of State AI Laws
Alliance for the Future has submitted formal comments to the Department of Justice and the National Economic Council in response to their request for information on state laws with out-of-state economic impacts.
Our filing highlights two California statutes, SB 942 and SB 1001, as examples of how a single state can, in practice, regulate interstate AI commerce. These laws risk creating de facto national standards that raise costs, slow innovation, and extend California’s rules far beyond its borders.
We argue that this patchwork approach undermines America’s ability to lead in AI. Federal action is necessary to establish clear national standards that consistently protect consumers and provide innovators with certainty.
AFTF will continue to advocate for a uniform national framework that ensures AI strengthens the U.S. economy and maintains our global leadership.
Comment on Request for Information: State Laws Having Significant Adverse Effects on the National Economy or Interstate Economic Activity
Docket: DOJ–OLP–2025–0169
Submitted by: Alliance for the Future
Date: September 15, 2025
Introduction
Alliance for the Future appreciates the opportunity to provide comments responding to the joint request from the Department of Justice and the National Economic Council. The notice seeks examples of state laws that impose costs across state lines and hinder interstate commerce and national economic activity. A number of such laws have been passed to date, and even more such bills are currently pending in legislatures throughout the United States. We write to offer just two California statutes, SB 942 and SB 1001, which illustrate how a single state can, in practice, regulate interstate commerce in artificial intelligence. California’s market size and concentration of AI developers mean that its statutes do not operate in isolation but rather shape national practices. These laws demonstrate why a federal framework is necessary to create consistent rules that allow innovation to flourish while protecting consumers.
California SB 942, the California AI Transparency Act
SB 942 requires providers of generative AI systems that produce images, video, or audio to build and maintain a free public detection tool. The tool must allow users to determine whether content was created or altered by the provider’s system, and it must reveal provenance information about the content. The law also requires durable provenance disclosures for AI-generated content. It is scheduled to take effect on January 1, 2026.
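To make the engineering obligation concrete, the following is a minimal sketch, in Python, of what a durable provenance disclosure and the matching detection check might look like. Every name in it (the manifest fields, the signing scheme, the "ExampleGen" system) is an assumption invented for illustration; this is not the statute's specification or any provider's actual implementation.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key; a real provider would use managed key
# infrastructure, not a hard-coded secret.
SIGNING_KEY = b"illustration-only-key"


def build_provenance_manifest(content: bytes, system_name: str) -> dict:
    """Attach an illustrative, tamper-evident provenance record to a
    piece of AI-generated content. Field names are assumptions made
    for this sketch, not terms drawn from the statute."""
    manifest = {
        "generator": system_name,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    # Signing makes the disclosure "durable" in the sense that edits to
    # either the content or the record become detectable.
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Sketch of the check a free public detection tool might run:
    does the content hash match, and is the record's signature valid?"""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    if unsigned.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest.get("signature", ""), expected)


if __name__ == "__main__":
    generated = b"...bytes of a generated image..."
    record = build_provenance_manifest(generated, "ExampleGen v1")
    print(verify_provenance(generated, record))        # True
    print(verify_provenance(b"cropped copy", record))  # False
```

Even this toy version breaks the moment content is resized, recompressed, or screenshotted, because the hash no longer matches; building disclosures that survive ordinary transformations is a far harder problem than the sketch suggests.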
Although styled as a California statute, SB 942 has a reach well beyond California’s borders. The law applies to any provider whose system is accessed in the state. Because California is home to millions of users and nearly all major AI developers, compliance will not be limited to firms operating solely within California. In practice, most providers will implement these requirements systemwide rather than attempt to build and maintain separate versions of products for different jurisdictions. The compliance obligations will therefore be borne by providers and users nationwide, whether or not they are located in California.
The economic impacts of SB 942 travel beyond the state’s borders. Providers will be required to design, test, and maintain provenance features and public detection tools across their entire product lines. These obligations require significant investments in compute resources, storage, and moderation systems, and it is not clear that such systems can even be made practical. The burden will slow the pace at which new products are released, dragging down the innovation cycles that are vital to competitiveness. Firms with little or no presence in California will nevertheless be required to devote time and resources to compliance with California’s mandates. This is precisely the type of scenario in which a state law imposes costs and constraints on interstate commerce in ways that Congress and federal agencies are better positioned to address.
California SB 1001, the Bot Disclosure Law
SB 1001 prohibits the use of a bot to communicate with a person in California when the intent is to mislead the person about the bot’s artificial identity in order to sell goods or influence a vote. The law requires the user of the bot to disclose clearly that the interaction is with a bot. Although perhaps well-intentioned, the requirement could have a chilling effect on a variety of communications systems nationwide.
Like SB 942, SB 1001 sweeps far beyond California’s borders. Its trigger is communication with a person in California, and customer support tools, marketing campaigns, and civic outreach platforms used across the country inevitably reach California residents. Rather than build complex filters and logic to distinguish California users from everyone else, most organizations adopt one disclosure policy that applies across their entire user base. California’s rule thus becomes the effective standard for users everywhere.
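The sketch below, a hypothetical Python fragment with invented names, shows the choice in miniature: either maintain geolocation-dependent branching for one state's rule, or apply the disclosure to everyone.

```python
from dataclasses import dataclass

BOT_DISCLOSURE = "You are chatting with an automated assistant, not a person."


@dataclass
class Session:
    user_id: str
    # In practice inferred from IP geolocation, which is routinely wrong
    # or unavailable (VPNs, travel, corporate proxies). That fragility is
    # what makes per-state branching risky.
    inferred_state: str | None


def greet_geofenced(session: Session) -> str:
    """Option 1: disclose only to users believed to be in California.
    Requires geolocation plumbing, and any unknown location must be
    treated as California to avoid violations."""
    if session.inferred_state in ("CA", None):
        return f"{BOT_DISCLOSURE} How can I help?"
    return "How can I help?"


def greet_uniform(session: Session) -> str:
    """Option 2: one disclosure policy for every user. Simpler to build
    and audit, so most firms choose it, and California's rule becomes
    the de facto national standard."""
    return f"{BOT_DISCLOSURE} How can I help?"
```

Note that the geofenced path must treat unknown locations as California to stay safe, so even firms that invest in filtering end up disclosing to a large share of out-of-state users anyway.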
The compliance burdens are real. Firms must redesign communication systems, update training protocols for staff, and reconfigure outreach strategies to comply with California’s disclosure rules. The result is that organizations in other states, even those with no direct ties to California, are forced to adjust their national operations to accommodate a single state’s requirements. Once again, this is a situation where the absence of a federal baseline allows one state to shape interstate commerce in ways that ripple across the country.
Why Federal Action Is Needed
California is a large and influential market. Its regulations are often adopted nationally, not because Congress or federal agencies have made a deliberate policy choice, but because firms cannot practically maintain separate compliance regimes for every market. When that dynamic is applied to AI, the stakes are even higher. Artificial intelligence is a general-purpose technology used in medicine, agriculture, education, national defense, and a host of other areas. It is not sustainable to have fifty different state standards for AI provenance, disclosure, and bot communication.
The examples of SB 942 and SB 1001 underscore the risks. They show how a single state law can dictate product design, operational policies, and compliance costs far outside its borders. They demonstrate why the federal government must step in to provide clear national standards that preempt conflicting or duplicative state mandates. Federal leadership would allow companies to innovate with certainty, would protect consumers consistently, and would prevent the inefficiencies of a patchwork regulatory landscape.
Requested Federal Actions
The Department of Justice and the National Economic Council should recommend that federal agencies take up this challenge in a coordinated manner.
The Federal Trade Commission, working with the National Institute of Standards and Technology and the Department of Commerce, should establish a national baseline for AI provenance and disclosure. Such a baseline should preempt inconsistent state obligations so that providers can operate under one clear set of rules.
The FTC should also issue guidance making clear that compliance with the federal baseline is sufficient and that providers do not need to create separate policies for individual states.
The Department of Justice should prioritize enforcement in cases where state rules compel nationwide product redesigns or duplicate existing federal schemes.
The Office of Management and Budget should encourage federal agencies to align procurement contracts with the federal baseline to prevent vendors from being forced to build state-specific versions for federal work.
Conclusion
California’s SB 942 and SB 1001 show how a single state can, through the size of its market and the reach of its companies, impose costs and compliance burdens that travel well beyond its borders. Federal action is necessary to restore clarity and balance. Establishing a national baseline would lower duplicative costs, ensure consistent safeguards, and preserve America’s ability to lead in the development and deployment of artificial intelligence.
Alliance for the Future appreciates the opportunity to provide these comments and stands ready to provide additional information as the Department of Justice and the National Economic Council consider next steps.