Response to an NSF/OSTP Request for Information on the Proposed National Artificial Intelligence Action Plan
Alliance for the Future
March 14, 2025
This is our response to the National Science Foundation/White House Office of Science and Technology Policy RFI of February 6, 2025, on the development of AI policy (2025-02305; 90 FR 9088).
1. The US Must Accelerate Development of Artificial Intelligence
Permissionless AI Development is Key to the United States’ Economy and Security
The world has seen many major technological developments that have transformed society, such as the inventions of agriculture, writing, metallurgy, printing, steam engines, electricity, the telephone, automobiles, aircraft, computers, and the internet. In each case, society was far different after than before, and almost entirely for the better.
Artificial Intelligence is only the latest in a long line of powerful new technologies, and as with all the previous ones, AI is likely to have pervasive effects across our society. As with previous revolutionary technologies, mastery of AI will become one of the foundations of economic growth and national power for the United States.
Also as with many previous technological revolutions, fear-mongers have wasted little time in trying to convince our society that the risks associated with the new technology are too great, that the general public should be afraid of the effects of the new development, and that the government must intervene immediately to prevent a disaster.
Although we agree that AI will change our society in profound and likely unforeseen ways, just as the telegraph and modern medicine and precision machining have, we also believe that, as with previous technological revolutions, the upsides dramatically outweigh potential downsides.
It is also true that all previous technological revolutions have had less than savory uses in crime and warfare. Automobiles are sometimes used as getaway cars or for smuggling drugs. Metallurgy makes possible everything from railroads to cooking knives to city water systems, but it also makes possible swords and guns. However, the overwhelming arc of history has been the improvement of human welfare through technological development, even if downsides do appear.
The long experience of our species has also been that some will always seek to claim that this time, things are different, and this time, we must prevent a new technology from being freely developed, or from being developed at all.
History has taught us, repeatedly, that societies that fall behind on technological development or which attempt to prevent it are inevitably overwhelmed by those that explore and embrace new frontiers.
For example, Japanese society largely renounced Western technology and contact with the West beginning in the early seventeenth century. This isolation ended in 1853, when Matthew Perry’s expedition sailed into Edo Bay, with the Japanese powerless to prevent it because they lacked the military technology to do so. Japan discovered that even if a nation does not wish to interact with outside technologies, outsiders equipped with those technologies will not necessarily give it the choice.
The geopolitical survival of the United States as a great power with a high standard of living and substantial physical and economic safety depends on the US remaining at the cutting edge of technological development.
The long history of technological development also demonstrates that free experimentation and innovation are required for a technology to be adopted. Even in relatively recent times, the adoption of rigid, fear-driven regulation in the nuclear industry resulted in a near halt in nuclear power plant construction in the United States lasting decades. Heavy government control and regulation of industries as diverse as telecommunications, airlines, and energy production stifled innovation and growth before substantial decontrol in the 1970s and 1980s liberated those sectors to grow and become engines of prosperity.
One of the reasons US technology companies have come to dominate the globe is that the US employs a vastly lighter touch on regulation than even most other developed nations. If the US is to sustain and enhance our dominance in the field of AI to permit our continued economic competitiveness and national security, we must sustain robust AI research and development. Without it, we risk US prosperity, geopolitical leadership, and even our safety.
US AI policy must have at its core the explicit aim of encouraging our industries to freely develop and adopt Artificial Intelligence in an atmosphere of permissionless innovation. Google, Microsoft, Amazon and other major US technology companies grew and prospered in recent decades because they did not require licenses or the permission of bureaucrats to innovate. Artificial Intelligence will grow best and fastest in the United States in a similar environment. The risks associated with premature regulation vastly outweigh any risks associated with permitting free development of the technology.
AI Will be Critical to the Future of the US Military
Recent wars in Ukraine and the Middle East have demonstrated that the face of the modern battlefield has changed. Drones, both remote controlled and autonomous, have been responsible for the overwhelming majority of casualties in the Ukrainian conflict, with a rapid acceleration of development in both offensive and defensive capabilities on both sides.
In the Middle East, missile, artillery, and drone defense systems have become a major factor in recent conflicts, with large scale attacks being successfully thwarted by such systems.
In the near future, operating both offensive and defensive systems at a high degree of capability will almost certainly require the development of advanced AI systems possessing substantial autonomy, especially in circumstances where an opponent may overwhelm human decision-makers with their own small, autonomous systems.
Another pattern has emerged in recent conflicts. Traditional but expensive weapons systems are being overwhelmed by cheap systems that are easy to mass manufacture. Although one can easily shoot down $10,000 drones with missiles costing millions of dollars, such an asymmetry in attacker and defender costs cannot be sustained over the long term.
Companies both in the United States and abroad are also beginning to develop powerful autonomous robots, both humanoid and non-humanoid. It is difficult to imagine that such systems will not see use in warfare. Future conflicts may involve mass coordinated assaults by both ground based and airborne autonomous systems.
It is not an exaggeration to say that the United States has maintained its safety over the course of the last century partially as a result of overwhelming military supremacy, and this supremacy has had its root in both the productive capacities of the US and its lead in technological development. This trend will only continue as we move further into coming decades. US security will thus depend on maintaining an edge in the development of AI systems.
The US must therefore both maintain a robust AI and robotics sector to supply its military, and adapt for defense in a future where AI and autonomous drones and robots are at the center of military power. The US Department of Defense must begin planning now to permit it to maintain operational dominance in such a future.
AI Will Be Required to Maintain and Enhance US Industrial Competitiveness
We live in a world of eight billion people. The United States has a population of 340 million, only 4.25% of the total. China alone has over four times the population of the US and eight times the number of engineers.
The US must maintain its ability to compete in this world, not only for economic reasons, but also because industrial productivity is key to maintaining the military deterrence required to indefinitely sustain safety.
AI and robotics will be a key factor that allows the US to accelerate industrial development and to achieve far higher levels of manufacturing capacity even without a much larger population. This will permit the US to maintain its dominant positions in global trade and defense production.
US policy must continue to allow permissionless development of AI technologies if it is to maintain a competitive industrial capacity.
2. Artificial Intelligence Policy Recommendations
The US Must Encourage Open Source AI Development
Open source software has been a major driver of US technological development in recent decades, with the bulk of currently deployed computer systems, networking, and telecommunications equipment depending, to a greater or lesser extent, on at least some open source software.
Open source AI models have already begun to have a similar industrial impact, with numerous open models having been released to date, and a wide community of users across both business and the research community. Open source AI — where model weights, code, and training mechanisms are publicly available — fosters collaboration and accelerates innovation. By openly sharing AI models, researchers and commercial users can study, improve, and repurpose them, leading to faster scientific and engineering progress. Open source AI also lowers entry barriers for entrepreneurs; open source models let developers build upon and adapt prior work, which broadens AI’s availability to small companies, researchers, nonprofits, and individuals.
Open source AI models are already in wide use by commercial organizations in the US. There are many reasons for this: some organizations require stronger privacy or security guarantees than commercial AI model providers can offer; some require special customization and training to make models suitable for local requirements; some require models that can run at low latency locally on mobile or isolated devices; and some have cost requirements that cannot easily be met by other offerings.
US companies previously dominated in open source AI releases, but foreign AI models have gained considerable attention in the last year, with companies in countries like China and France releasing powerful AI models to the world. Some have even argued that China is now the predominant provider of open source models, with Alibaba’s Qwen models and the DeepSeek models being among the most famous. Many companies worldwide will depend on the availability of open source models for their work, and if Chinese developed models dominate over US ones, the predominant models in use may end up coming from China.
This might create a variety of problems both expected and unexpected. In addition to conveying an obvious industrial advantage to Chinese firms, AI models are to a great extent carriers of the culture in which they originate because, unlike less sophisticated forms of software, AIs are used as sources of information and convey opinions. Chinese open source models, not unexpectedly, respond to questions about topics the Chinese state considers sensitive with the sorts of answers that the CCP considers acceptable. Dominance by such models may also mean dominance of Chinese narratives.
Ensuring American leadership in AI will require that the US also remain competitive in the provision of open source AI models. Attempts to restrict the development, release, or use of open source AI models in the United States could have a variety of deleterious effects on our nation’s interests, ranging from crippling US industry to providing other countries with dominance in the global AI marketplace and in the equally important marketplace of ideas. It would be dangerous for the US government to restrict or interfere with the development and use of open source models while other countries suffer no such impediments.
Beyond ensuring continued permissionless innovation in open source AI, it is likely important that the US government take an active role in encouraging the development and release of future open source models. Traditional US government funders of critical research such as DARPA should be encouraged to advance research and development in this area.
Most “AI safety” Claims Lack Real Evidence and Should Not Drive Regulation
Many of the speculative extreme “AI safety” scenarios (such as AI becoming an uncontrollable existential threat) are not backed by evidence, but have been seized upon both by those with ideological motives to restrict AI development and by companies seeking to engage in regulatory capture to restrict competition for their own products. (A continuous stream of papers from a tiny group of AI “safety researchers” has been published, but these largely consist of rigged experiments intended to produce the desired results backing up the claims made by those same researchers.)
Despite this lack of evidence, the previous administration’s heavy-handed safety regulations (prompted by these unfounded worst-case fears) imposed burdens that disproportionately hurt U.S. innovators. AI is currently at an early stage of development. It is still not widely deployed, and real world experience with such systems remains minimal. We as yet can only speculate, often likely inaccurately, about potential risks. Broad AI rules and compliance regimes made in the absence of real world experience with actual problems would slow down R&D without concomitant benefits, and would threaten both U.S. economic innovation and national security leadership by throttling innovation and competition.
The US Government Should Work to Preempt State AI Regulation
Hundreds of proposed AI regulation bills, many of them ill considered and poorly drafted, are currently working their way through most of America’s state legislatures. A patchwork of conflicting state-level AI laws would create compliance nightmares for almost any firm in the field and stifle innovation. If every state imposes its own rules on AI development or use, AI researchers and companies will face 50 different legal standards — a huge deterrent to experimentation and deployment. Even large, well funded firms may find it hard to comply with such a maze of varying laws. Such fragmentation raises costs and uncertainty for AI developers, likely driving research to friendlier jurisdictions or causing companies to hold back on U.S. projects.
To keep America at the cutting edge of AI, it’s important to avoid a scenario where well-intentioned but onerous state rules hamper nationwide progress in research, development, and deployment.
America needs a coherent, unified national AI strategy — important questions about AI governance should be answered through national policymaking, and not via premature state mandates.
Given the risks of a regulatory patchwork, federal policymakers should consider using various forms of preemption to override or limit harmful state AI laws. Targeted federal preemption could prevent the most disruptive state interventions in AI. For example, preempting states from creating their own AI liability or licensing regimes would stop any one state from severely restricting the development, distribution, and use of AI models nationwide.
In capital-intensive, global fields like AI, a coherent national framework is crucial — otherwise, overly restrictive state rules could impede long-term investments. Preemption is prudent to ensure that crucial AI research isn’t bottlenecked by local regulations. Carefully crafted preemption can preserve room for innovation while still allowing states to address legitimate local concerns in narrower domains.
Short of outright preemption, there are policy tools to discourage states from imposing their own broad AI rules. Financial incentives or conditional federal grants could also be used to nudge states toward a light-touch approach. Overall, by taking strong national leadership and offering guidance, the US can dissuade individual states from pursuing divergent AI regulations that might collectively hinder innovation.
AI Regulation Should Focus on Preventing Misuse, Not Preventing Capabilities
AI will not only be used for beneficial purposes. Every technology developed in human history, from fire to the telephone, has been abused by sociopaths and criminals for their own ends. Gasoline powers our cars, but it is also used to commit arson. Computers can be used to write and print ransom notes for kidnappers.
As AI becomes ubiquitous in our society, it too will be misused by criminals. It is important, however, that regulation and criminalization of misuse focus on the misuse of the technology and not upon the technology itself. Merely because cars can be driven by intoxicated people does not mean that we have banned cars; we have instead banned drunk driving. We have not banned the possession of kitchen knives, but we do have criminal penalties for stabbing people. We have not banned telephones or the internet but we do have criminal penalties for wire fraud.
In a hypothetical past in which all automobile companies were held to be liable for harm caused by drunk driving, the use of cars in bank robberies, etc., we might never have developed a modern automotive industry.
As criminals inevitably find that AI is as useful for them as it is for ordinary legitimate users, we must focus on attacking the misuse of the technology, and never the technology itself, which is neutral. Regulation and law should therefore focus on abuses of AI, and should not penalize the development or distribution of AI systems themselves.
AI also presents us with new opportunities to defend against various sorts of criminal behavior; for example, AI systems may be used by law enforcement to filter through tremendous amounts of collected video recordings in ways that were scarcely feasible in the past. AI systems may be used in the future to screen your calls and warn you that a phone caller purporting to be an IRS agent or from your bank is in fact a scammer attempting to cheat you. US policy should encourage such protective uses of AI, including through appropriate research and development programs within law enforcement agencies.
Foreign Regulators and AI Treaties Could Undermine American AI Development
Some foreign regulatory regimes — notably the European Union’s approach — impose stringent rules on AI that could disadvantage the faster-moving U.S. tech sector. Not only should the United States avoid copying the European regulatory approach to AI, which would cripple America as it competes with countries like China, the United States should also oppose attempts by European regulators to impose their rules outside their own borders. European laws and regulators often seek to restrict the freedom of American companies outside of Europe, thereby interfering with America’s own internal policies.
In addition, recent international agreements (“AI treaties”) driven by foreign priorities may shackle American AI progress. For instance, the first global AI treaty, opened for signature in September 2024, is aimed at European-style precautionary regulation. While it purports to protect rights, it risks stifling further innovation in AI by adding complex and wholly unnecessary compliance burdens.
If the U.S. submits to restrictive frameworks set by foreign regulators or multilateral treaties, it could inadvertently hamstring its own developers with red tape, allowing some overseas competitors to pull ahead. Protecting America’s agile, innovation-friendly environment may sometimes require pushing back on external regulatory schemes that don’t align with U.S. interests.
The US should therefore decline to sign on to AI treaties, withdraw from any existing agreements that do not comport with American policy, and cancel partnerships with foreign regulators unless they conform with American policy goals.
International cooperation is valuable, but it must not come at the cost of U.S. technological leadership. If an AI treaty or partnership imposes foreign-driven constraints that conflict with America’s goals (for example, overly restrictive provisions that hinder innovation), the U.S. should reconsider its involvement. The key to progress is continued American leadership in AI innovation, not more attempts at international regulation.
This means the US should not sign on to agreements that undermine its competitive edge or strategic priorities. Unless international AI frameworks advance American values and interests (such as promoting innovation, ensuring security, and maintaining a level playing field for U.S. companies), the U.S. should be prepared to withdraw or abstain.
Instead, emphasis should be on collaborative efforts that complement the American approach — for example, voluntary R&D partnerships or alignment on principles allowing fair use of training materials — rather than binding accords that handcuff US policy flexibility.
The US Should Ensure That Friendly Nations Can Develop and Use AI Models and that Policies Do Not Drive Them Towards Hostile Suppliers
Export policies on AI technologies need to be finely tuned so they target adversaries without damaging American interests. Sweeping restrictions risk driving other countries toward non-US technology sources, including ones potentially inimical to American interests. Countries that cannot purchase or work with US suppliers will inevitably opt to work with whoever lets them access advanced AI, even if that means turning to China.
That could potentially result not only in direct economic harm to US suppliers, but also to a national security issue for the United States in which Chinese technology gains an ever broader foothold worldwide. In addition to the more obvious risks, this could result in dominance by information retrieval and generative AI systems that promote values contrary to our own.
The U.S. should ensure friendly nations are not cut off from American AI hardware or software, and that American companies are not prevented from competing in the global marketplace, lest we inadvertently encourage dominance in the AI marketplace by nations who are not friendly to U.S. interests.
Clarity is Required on the Intellectual Property Status of Training Data and the Copyright Status of Generated Works
Current intellectual property law in the United States was not created with artificial intelligence systems in mind. In particular, although it is unquestionable that it is fair use for humans to use copyrighted works to educate and train themselves and others, and to use such works to further the development of arts and sciences, the fair use status of the use of copyrighted materials in educating and training AI systems is currently the subject of considerable legal dispute. Clarity is required in this area. Many foreign jurisdictions will almost certainly continue to permit AIs to be trained quite freely on existing materials. If US law inadvertently restricts the development of AI by enforcing an excessively strict interpretation of intellectual property law while foreign countries are not so hampered, this will slow US AI development and deployment and give a competitive advantage to those countries. This may ultimately result in severe economic and even national security damage, as has been described earlier in this document.
Similarly, it should be noted that in the future, much of the software, educational materials, entertainment, art, etc. produced in the United States may be partly or wholly created using AI assistance, as AI is likely to become ubiquitous within our society. Permanent findings that would result in most or all such works ceasing to have copyright protection would be unusually disruptive. The copyright status of such materials must be settled in such a way as to continue to permit the smooth functioning of American commerce and industry.
Final Notes
The rise of Artificial Intelligence presents the United States with tremendous opportunities: to increase economic growth, cure disease, reduce the cost of food, expand America’s industrial output, and improve the security of our nation, among an almost endless variety of other benefits. It will also, doubtless, dramatically transform our society, much as dozens of previous technologies have. However, if we permit ourselves to be ruled by fear rather than take advantage of the opportunities AI provides, America will not only risk missing out on those opportunities; it may also create a vacuum into which potentially hostile countries can step, gaining tremendous economic and military advantages over us. We must craft our government policies keeping in mind not only the possibilities before us and the cost of smothering a vital new industry in its infancy, but also the risk we face if others seize the opportunities we have regulated into oblivion.