Leading AI firms join US safety consortium
US commerce secretary Gina Raimondo speaks at the “Senior Chinese Leader Event” held by the National Committee on US-China Relations and the US-China Business Council on the sidelines of the Asia-Pacific Economic Cooperation (APEC) summit in San Francisco, California, US, November 15, 2023. REUTERS/Carlos Barria
Washington — The Biden administration on Thursday said leading artificial intelligence (AI) companies are among more than 200 entities joining a new US consortium to support the safe development and deployment of generative AI.
Commerce secretary Gina Raimondo announced the US AI Safety Institute Consortium (AISIC), which includes OpenAI, Alphabet’s Google, Anthropic and Microsoft, along with Facebook parent Meta Platforms, Apple, Amazon.com, Nvidia, Palantir, Intel, JPMorgan Chase and Bank of America.
“The US government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” Raimondo said in a statement.
The consortium, which also includes BP, Cisco Systems, IBM, Hewlett Packard, Northrop Grumman, Mastercard, Qualcomm, Visa, academic institutions and government agencies, will be housed under the US AI Safety Institute (USAISI).
The group is tasked with working on priority actions outlined in President Biden’s October AI executive order “including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.”
AI companies last year pledged to watermark AI-generated content to make the technology safer. Red-teaming has been used for years in cybersecurity to identify new risks, with the term referring to US Cold War simulations where the enemy was termed the “red team”.
Biden’s order directed agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.
In December, the US commerce department said it was taking the first step towards writing standards and guidance for the safe deployment and testing of AI.
The consortium represents the largest collection of test and evaluation teams and will focus on creating foundations for a “new measurement science in AI safety,” the department said.
Generative AI, which can create text, photos and videos in response to open-ended prompts, has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans, with catastrophic effects.
While the Biden administration is pursuing safeguards, efforts in Congress to pass legislation addressing AI have stalled despite numerous high-level forums and legislative proposals.
Reuters