VP Harris says US agencies must show their AI tools aren’t harming people’s safety or rights
U.S. federal agencies must show that their artificial intelligence tools aren’t harming the public, or stop using them, under new rules unveiled by the White House on Thursday.
“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” Vice President Kamala Harris told reporters ahead of the announcement.
Each agency by December must have a set of concrete safeguards that guide everything from facial recognition screenings at airports to AI tools that help control the electric grid or determine mortgages and home insurance.
The new policy directive being issued to agency heads Thursday by the White House’s Office of Management and Budget is part of the more sweeping AI executive order signed by President Joe Biden in October.
While Biden’s broader order also attempts to safeguard the more advanced commercial AI systems made by leading technology companies, such as those powering generative AI chatbots, Thursday’s directive targets AI tools that government agencies have been using for years to help with decisions about immigration, housing, child welfare and a range of other services.
As an example, Harris said, “If the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses.”
Agencies that can’t apply the safeguards “must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations,” according to a White House announcement.
The new policy also calls for two other “binding requirements,” Harris said. One is that federal agencies must hire a chief AI officer with the “experience, expertise and authority” to oversee all of the AI technologies used by that agency, she said. The other is that each year, agencies must make public an inventory of their AI systems that includes an assessment of the risks they might pose.
Some rules exempt intelligence agencies and the Department of Defense, which is having a separate debate about the use of autonomous weapons.
Shalanda Young, the director of the Office of Management and Budget, said the new requirements are also meant to strengthen positive uses of AI by the U.S. government.
“When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services, improve accuracy and expand access to essential public services,” Young said.
Matt O’Brien, The Associated Press