Britain expands AI safety institute to San Francisco, home of OpenAI

by Pelican Press

An aerial view of the San Francisco skyline and the Golden Gate Bridge in California, October 28, 2021.

Carlos Barria | Reuters

LONDON — The British government is expanding its facility for testing "frontier" artificial intelligence models to the United States, in a bid to cement its image as a top global player tackling the technology's risks and to increase cooperation with the U.S. as governments around the world jostle for AI leadership.

The government on Monday announced it would open a U.S. counterpart to its AI Safety Institute, a state-backed body focused on testing advanced AI systems to ensure they're safe, in San Francisco this summer.

The U.S. iteration of the AI Safety Institute will aim to recruit a team of technical staff headed up by a research director. In London, the institute currently has a team of 30. It is chaired by Ian Hogarth, a prominent British tech entrepreneur who founded the music concert discovery site Songkick.

In a statement, U.K. Technology Minister Michelle Donelan said the AI Safety Institute's U.S. rollout "represents British leadership in AI in action."

“It is a pivotal moment in the U.K.’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the U.S. and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.”

The expansion “will allow the U.K. to tap into the wealth of tech talent available in the Bay Area, engage with the world’s largest AI labs headquartered in both London and San Francisco, and cement relationships with the United States to advance AI safety for the public interest,” the government said.

San Francisco is the home of OpenAI, the Microsoft-backed company behind viral AI chatbot ChatGPT.

The AI Safety Institute was established in November 2023 during the AI Safety Summit, a global event held in England’s Bletchley Park, the home of World War II code breakers, that sought to boost cross-border cooperation on AI safety.

The expansion of the AI Safety Institute to the U.S. comes on the eve of the AI Seoul Summit in South Korea, which was first proposed at the U.K. summit at Bletchley Park last year. The Seoul summit will take place on Tuesday and Wednesday.

The government said that, since the AI Safety Institute was established in November, it’s made progress in evaluating frontier AI models from some of the industry’s leading players.

It said Monday that several AI models completed cybersecurity challenges but struggled with more advanced ones, while several models demonstrated PhD-level knowledge of chemistry and biology.

Meanwhile, all models tested by the institute remained highly vulnerable to "jailbreaks," where users trick them into producing responses their content guidelines prohibit, while some produced harmful outputs even without attempts to circumvent safeguards.

Tested models were also unable to complete more complex, time-consuming tasks without human oversight, according to the government.

It didn't name the AI models that were tested. OpenAI, DeepMind and Anthropic previously agreed to open their coveted AI models to the government to help inform research into the risks associated with their systems.

The development comes as Britain has faced criticism for not introducing formal regulations for AI, while other jurisdictions, like the European Union, race ahead with AI-tailored laws.

The EU's landmark AI Act, the first major piece of legislation of its kind for AI, is expected to become a blueprint for global AI regulations once it is approved by all EU member states and enters into force.




