President Joe Biden will meet with the CEOs and presidents of seven of the biggest AI tech companies on Friday to secure a non-binding agreement on how artificial intelligence is developed and released to the public.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI all agreed on Friday to a set of eight rules, which include external testing of AI systems before release, investing in cybersecurity protections for unreleased models, and using a watermarking system for AI-generated content. The list of attendees includes Microsoft President Brad Smith, Meta President of Global Affairs Nick Clegg and Google President of Global Affairs Kent Walker.
The companies’ commitment to these safeguards is intended to “underline three key principles that must be fundamental to the future of AI: safety, security and trust,” a White House official said. The principles underscore the administration’s stated focus on protecting AI models from cyberattacks, combating deepfakes, and opening a dialogue about companies’ AI risk-management systems.
The White House move comes as Congress debates legally binding safeguards for AI. It’s unclear whether Congress will pass AI legislation this session, which means non-binding, voluntary White House guidelines remain the primary vehicle for addressing some of the broader concerns surrounding the technology.
Industry groups have already signaled their support for the White House’s voluntary code of conduct. “Today’s announcement can help form part of the architecture of regulatory safeguards around AI,” BSA, the Software Alliance, said in a statement.
The White House is also preparing an executive order on AI for the president to sign, although its contents and timing are still unknown. Given the breadth of AI applications and the potential for misuse, the administration is “looking at actions across agencies and departments,” the White House official said.