The UK AI Safety Institute Expands To US

The British government is expanding its facility for testing “frontier” artificial intelligence models to the United States, in an effort to strengthen its position as a leading global player in addressing the risks of the technology and to forge closer ties with the U.S. as governments compete for leadership in AI.

TakeAway Points:

  • The United Kingdom announced on Monday that it will launch the American version of its AI Safety Institute, a government-backed facility for testing cutting-edge AI systems, in San Francisco this summer.
  • The U.S. launch of the AI Safety Institute, according to a statement from U.K. Technology Minister Michelle Donelan, “represents British leadership in AI in action.”
  • The AI Safety Institute’s expansion was first proposed at last year’s AI Safety Summit at Bletchley Park in the United Kingdom, and it coincides with the AI Seoul Summit in South Korea.

American Version of the AI Safety Institute

On Monday, the government announced that it will launch the American version of the AI Safety Institute, a government-sponsored organisation dedicated to testing cutting-edge AI systems to make sure they are safe, in San Francisco this summer.

The U.S. Launch of the AI Safety Institute

The AI Safety Institute in the United States will seek to assemble a team of technical staff under the direction of a research director. The institute currently employs thirty people in London. Its chair is Ian Hogarth, a prominent British tech entrepreneur who founded the concert-discovery website Songkick.

The U.S. launch of the AI Safety Institute, according to a statement from U.K. Technology Minister Michelle Donelan, “represents British leadership in AI in action.”

“It is a pivotal moment in the U.K.’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the U.S. and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.”

According to the government, the expansion “will allow the U.K. to engage with the largest AI labs in the world, headquartered in both London and San Francisco, tap into the wealth of tech talent available in the Bay Area, and cement relationships with the United States to advance AI safety for the public interest.”

OpenAI, the Microsoft-backed startup behind the popular AI chatbot ChatGPT, is based in San Francisco.

AI Safety Institute

The AI Safety Institute was founded in November 2023 as part of the AI Safety Summit, an international gathering that aimed to strengthen cross-border collaboration on AI safety and was hosted at England’s Bletchley Park, the site of World War II code breakers.

First proposed at last year’s summit at Bletchley Park, the AI Safety Institute is now expanding to the United States on the eve of the AI Seoul Summit in South Korea, which is scheduled for Tuesday and Wednesday.

According to the government, the AI Safety Institute has made headway in assessing cutting-edge AI models from some of the top companies in the market since its founding in November.

The report, released on Monday, stated that while some AI models demonstrated Ph.D.-level knowledge of chemistry and biology, others only just managed to complete cybersecurity tests.

While all models tested by the institute remained highly vulnerable to “jailbreaks,” where users trick them into producing responses they’re not permitted to under their content guidelines, some would produce harmful outputs even without attempts to circumvent safeguards.

The government also said that the models tested could not complete longer, more complex tasks without human supervision.

The names of the tested AI models were not disclosed. The government had previously persuaded DeepMind, Anthropic, and OpenAI to grant it access to their coveted AI models in order to facilitate research into the potential risks associated with these systems.

AI Restrictions

This development coincides with criticism levelled at Britain for failing to enact explicit AI restrictions at a time when other countries, such as the European Union, are drafting laws specifically targeted at AI.

The EU’s landmark AI Act, the first major piece of artificial intelligence legislation of its kind, is expected to serve as a model for AI laws worldwide once it has been approved by every member state in the bloc and implemented into law.
