US Demands New AI Safeguards For Government Use

The White House announced today that federal agencies using artificial intelligence must adopt “concrete safeguards” by December 1 to protect Americans’ rights and ensure safety as the government expands its use of the technology across a wide range of applications, according to a Reuters report.

TakeAway Points:

  • The White House is requiring federal agencies using artificial intelligence to adopt “concrete safeguards” to protect Americans’ rights and ensure safety as the government expands AI use in a wide range of applications.
  • These safeguards include detailed public disclosures so that Americans know when and how the government uses artificial intelligence.
  • Generative AI has spurred concerns that it could displace jobs, disrupt elections, and potentially overpower humans, with catastrophic effects.

AI Safeguards

The Office of Management and Budget directed federal agencies to monitor, evaluate, and test the effects of AI “on the public, reduce the risks of algorithmic discrimination, and provide the public with transparency into how the government employs AI.” Agencies must also conduct risk assessments and define operational and governance metrics.

Agencies “will be obliged to establish meaningful protections when using AI in a way that could damage Americans’ rights or safety,” according to the White House. These safeguards include detailed public disclosures so that Americans know when and how the government is using AI.

In October, President Joe Biden signed an executive order invoking the Defense Production Act that requires developers of AI systems posing risks to US national security, the economy, public health, or safety to share their safety findings with the government before making those systems widely available.

Under the measures announced by the White House on Thursday, air travelers will be able to quickly and easily opt out of the Transportation Security Administration’s facial recognition screening. And when AI is used in the federal healthcare system to support diagnostic decisions, a human must supervise “the procedure to validate the tools’ outcomes.”

Generative AI Concerns

Generative AI, which can produce text, images, and videos in response to open-ended prompts, has sparked both excitement and worry: it could displace jobs, disrupt elections, and potentially overpower humans, with disastrous consequences.

The White House is also requiring agencies to publish inventories of their AI use cases, report metrics on their AI use, and release government-owned AI models, code, and data when doing so does not pose a risk.

Federal AI Uses

The Biden administration highlighted existing federal uses of AI, such as the Centers for Disease Control and Prevention’s use of the technology to detect opioid use and anticipate the spread of disease, and the Federal Emergency Management Agency’s use of AI to assess structural hurricane damage. The Federal Aviation Administration is using AI to “deconflict air traffic in major metropolitan areas to enhance travel time.”

To encourage the responsible use of AI, the White House plans to hire 100 AI specialists, and it has required government agencies to appoint chief AI officers within 60 days.

In January, the Biden administration proposed “know your customer” rules that would require US cloud providers to determine whether foreign entities are using US data centers to build AI models.