
Google, Microsoft, OpenAI and Anthropic announce industry group to promote safe AI development

d3sign/Moment RF/Getty Images
Leading technology companies begin to work together on safe artificial intelligence

By Brian Fung, CNN

(CNN) — Some of the world’s top artificial intelligence companies are launching a new industry body to work together — and with policymakers and researchers — on ways to regulate the development of bleeding-edge AI.

The new organization, known as the Frontier Model Forum, was announced Wednesday by Google, Microsoft, OpenAI and Anthropic. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks, and publicly share information with governments and civil society.

Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by US and European Union lawmakers to craft binding legislation for the industry.

News of the forum comes after the four AI firms, along with several others including Amazon and Meta, pledged to the Biden administration to subject their AI systems to third-party testing before releasing them to the public and to clearly label AI-generated content.

The industry-led forum, which is open to other companies designing the most advanced AI models, plans to make its technical evaluations and benchmarks available through a publicly accessible library, the companies said in a joint statement.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft president Brad Smith. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

The announcement comes a day after AI experts such as Anthropic CEO Dario Amodei and AI pioneer Yoshua Bengio warned lawmakers of potentially serious, even “catastrophic” societal risks stemming from unrestrained AI development.

“In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Amodei said in his written testimony.

Within two to three years, Amodei said, AI could become powerful enough to help malicious actors build functional biological weapons, whereas today those actors lack the specialized knowledge needed to complete the process.

The best way to prevent major harms, Bengio told a Senate panel, is to restrict access to AI systems; develop standard and effective testing regimes to ensure those systems reflect shared societal values; limit how much of the world any single AI system can truly understand; and constrain the impact that AI systems can have on the real world.

The European Union is moving toward legislation, which could be finalized as early as this year, that would ban the use of AI for predictive policing and limit its use in lower-risk scenarios.

US lawmakers are much further behind. While a number of AI-related bills have already been introduced in Congress, much of the driving force for a comprehensive AI bill rests with Senate Majority Leader Chuck Schumer, who has prioritized getting members up to speed on the basics of the industry through a series of briefings this summer.

Starting in September, Schumer has said, the Senate will hold a series of nine additional panels for members to learn about how AI could affect jobs, national security and intellectual property.

The-CNN-Wire™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
