Policy research from Alliance for the Future.
Research
Policy Brief: Research-Level Pre-Emption for Artificial Intelligence Models
Executive Summary
America’s system of laws is centered on holding individual actors responsible for their own actions, rather than trying to regulate and control every aspect of a technology, as Europe and China do. That is why the AI revolution started in the United States, and why the United States is poised to lead in this new technology.
However, many are trying to import European-style controls on AI, even before we know how AI will develop or how it will be used.
SB 1047 is Dead, Open Source AI Lives On
Governor Newsom vetoed SB 1047 this Sunday. SB 1047 was written and co-sponsored by the Center for AI Safety, a group linked to Effective Altruism donors Sam Bankman-Fried, Dustin Moskovitz, and Jaan Tallinn. It was later supported by the Screen Actors Guild, SAG-AFTRA. The bill faced bipartisan opposition, including Speaker Emerita Nancy Pelosi, House Science Democrats, and California Republican Jay Obernolte.
o1 'Strawberry'
At a conference last week, Anthropic co-founder Jack Clark said that LLMs are approaching a technical depth that most non-experts cannot evaluate. As if to immediately prove him correct, his competitor OpenAI released a model specialized for complex math and programming problems, prompting a dozen people to ask me to review it.
https://openai.com/index/learning-to-reason-with-llms/
GPT o1 is the long-awaited "Strawberry." OpenAI once again takes a substantial research lead: o1 makes clear improvements in math, coding, and knowledge questions. OpenAI describes it as a combination of Chain of Thought and Reinforcement Learning: "Our large-scale reinforcement learning algorithm teaches the model how to think productively using its chain of thought in a highly data-efficient training process."
Research Progress in AI is Continuous and Slowly Diminishing
A common anti-AI argument is that AI is inherently dangerous because using AI products to accelerate AI research could produce a scenario where "recursive self-improvement" leads to a near-infinite amount of improvement in weeks, if not faster. Some versions of the argument say that even if there is only a 0.01% chance this happens, the consequences would be so disastrous that it is worth worrying about despite the low probability. I will address that version of the argument.
Statement: The Spectre of Kitchen Sink AI Regulation
The recent NTIA Artificial Intelligence Accountability Policy Report is a warning shot to AI ecosystem developers. The report proposes a future that threatens to impose new compliance costs and higher barriers to entry on a large fraction of affected developers, without articulating how those costs advance NTIA policy goals. Instead, we encourage an evidence-based approach to AI policy, one that prioritizes points of intervention likely to achieve their stated policy goals, rather than one that makes as many interventions as possible.
AI as a Statistical Process
The Policy-Relevant History of Artificial Intelligence
Artificial Intelligence (AI) is a broad family of statistical techniques for processing data. Media coverage of AI has largely focused on applications related to text and image models, such as ChatGPT and DALL-E respectively. A common misconception is that these are the only applications of AI. In fact, these techniques have diverse applications across industries, including manufacturing, product design, construction, medicine, and agriculture.[1][2][3]
A useful policy lens for AI is to look at similar but less complex statistical processes throughout history.
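To make the lens concrete, here is a minimal sketch of one such "less complex statistical process": an ordinary least-squares line fit, the kind of regression actuaries and economists have used for over a century. The data and variable names are hypothetical, purely for illustration; the point is that modern AI models minimize prediction error in the same basic sense, just at vastly larger scale.

```python
# Hypothetical observations (illustrative only): an input measurement x
# and an observed outcome y.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Closed-form least-squares fit of y ≈ slope·x + intercept: the same
# error-minimization idea, in miniature, that underlies training far
# larger AI models.
slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / sum(
    (xi - mean_x) ** 2 for xi in x
)
intercept = mean_y - slope * mean_x

print(f"fitted line: y = {slope:.2f}*x + {intercept:.2f}")
```

Regulators have governed the use of such models (in credit scoring, insurance pricing, and hiring) for decades by targeting the conduct of the people deploying them, not the mathematics itself.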