The Trump administration is discussing oversight of artificial intelligence models before they are made publicly available. The proposal marks a departure from its long-standing noninterventionist approach to AI.
Sarah Kreps, director of the Tech Policy Institute, studies the intersection of international politics, technology, and national security.
Kreps says: “The question of how to oversee AI models is harder than it looks. Two things are simultaneously true. The first is that Mythos and models like it are real national security concerns. The second is that the obvious response, government vetting, carries risks of its own.
“Mythos and models like it are real national security concerns. The recent demonstrations of AI-enabled cyberattack capability have made that concrete in a way the abstract debate never did. Anthropic and the other labs are now part of the national security complex whether they want to be or not, and that requires a closer working relationship with the government than the current arrangement allows.
“But once you build a government vetting process for technology, you get the good with the bad. The definition of ‘safe’ is contested. The process can be politicized. Whoever holds power gets to shape how the vetting works. The Biden administration tried to advance regulation with the 2023 executive order, which the Trump administration revoked on its first day in office. Now new kinds of measures are being considered again, just under different political auspices. The challenge is achieving that coordination without building an approach that is either quickly rendered obsolete by the fast-moving technology or weaponized by the next administration, whoever that is. Neither administration has fully figured this out.”