Meta, Google and Other Tech Companies Agree to White House AI Safety Measures
As artificial intelligence continues to loom large, the White House has secured voluntary commitments from seven major companies developing AI to help manage the risks posed by the new technology. The White House announced the commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI on Friday and shared a number of the safeguards the companies have agreed to.
What are they agreeing to? The commitments center on three fundamental principles: safety, security, and trust. They include:
Conducting internal and external security testing of AI systems prior to release, and investing in cybersecurity and "insider threat" safeguards.
Prioritizing research on the societal risks that AI systems can pose (such as harmful bias and discrimination).
Developing "robust technical mechanisms" to ensure that users know when content is AI-generated, publicly reporting AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use (covering both security risks and societal risks).
Facilitating third-party discovery and reporting of vulnerabilities in AI systems.
Developing and deploying advanced AI systems "to help address society’s greatest challenges," ranging from cancer prevention to mitigating climate change.
What does this mean? The agreements are voluntary and unenforceable, though the Biden administration plans to pursue bipartisan legislation on AI. In addition to the seven companies named, the White House says it has consulted on AI safety commitments with other countries, including Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the U.K.