President Joe Biden on Monday will sign a sweeping executive order to guide the development of artificial intelligence, requiring industry to develop safety and security standards, introducing new consumer protections and giving federal agencies an extensive to-do list for overseeing the rapidly evolving technology.
The order reflects the government’s effort to shape how artificial intelligence evolves in a way that maximizes its potential and limits its risks. Artificial intelligence has been a source of deep personal interest for Biden, given its potential to affect the economy and national security.
White House Chief of Staff Jeff Zients recalled that Biden, having made the technology a top priority, gave his staff a directive to move urgently on the issue.
“We can’t move at a normal government pace,” Zients said the Democratic president told him. “We have to move as fast, if not faster, than the technology itself.”
In Biden’s view, the government was slow to address the risks of social media, and now America’s youth are struggling with related mental health issues. Artificial intelligence has the positive potential to accelerate cancer research, model the effects of climate change, boost economic output and improve government services, among other benefits. But it could also warp basic notions of truth with false images, deepen racial and social inequalities, and provide a tool for fraudsters and criminals.
The order builds on voluntary commitments already made by technology companies. It is part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools such as ChatGPT that can generate text, images and sounds.
Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.
The Commerce Department is to issue guidance on labeling and watermarking AI-generated content to help differentiate between authentic interactions and those generated by software. The order also touches on matters of privacy, civil rights, consumer protection, scientific research and workers’ rights.
An administration official who previewed the order on a Sunday call with reporters said the to-do lists within the order would be implemented within 90 to 365 days, with the safety and security items facing the earliest deadlines. The official spoke to reporters on condition of anonymity, as required by the White House.
Last Thursday, Biden gathered his aides in the Oval Office to review and finalize the executive order, a 30-minute meeting that stretched to 70 minutes despite other pressing matters, including the mass shooting in Maine, the Israel-Hamas war and the selection of a new speaker of the House.
Biden was deeply curious about the technology in the months of meetings that led up to the drafting of the order. His science advisory council focused on artificial intelligence at two meetings, and his Cabinet discussed it at two more. The president also pressed tech executives and civil society advocates about the technology’s potential at several gatherings.
“He was as impressed and alarmed as anyone,” White House deputy chief of staff Bruce Reed said in an interview. “He saw fake AI images of himself, of his dog. He saw how it can make bad poetry. And he’s seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation.”
The possibility of false images and sounds led the president to prioritize the labeling and watermarking of anything produced by artificial intelligence. Biden also wanted to head off the risk of an elderly American getting a phone call from someone who sounds like a loved one, only to be scammed by an AI tool.
The meetings could run hours over schedule, with Biden telling civil society advocates in a ballroom at San Francisco’s Fairmont Hotel in June: “This is important. Take as long as you need.”
The president also spoke with scientists and saw the upside that artificial intelligence could create if harnessed for good. He heard a Nobel Prize-winning physicist talk about how artificial intelligence could explain the origins of the universe. Another scientist showed how artificial intelligence could model extreme weather events such as 100-year floods, as the past data used to assess such events has lost its accuracy because of climate change.
The topic of artificial intelligence was seemingly inescapable for Biden. At Camp David one weekend, he relaxed by watching the Tom Cruise film “Mission: Impossible — Dead Reckoning Part One.” The film’s villain is a sentient and rogue AI known as “the Entity” that sinks a submarine and kills its crew in the movie’s opening minutes.
“If he wasn’t already worried about what could go wrong with artificial intelligence before that movie, he saw a lot more to worry about,” said Reed, who watched the film with the president.
With Congress still in the early stages of debating AI safeguards, Biden’s order stakes out a U.S. perspective as governments around the world race to establish their own guidelines. After more than two years of deliberation, the European Union is putting the final touches on a comprehensive set of regulations that targets the riskiest applications of the technology. China, a key U.S. rival in AI, has also set some rules.
U.K. Prime Minister Rishi Sunak also hopes to carve out a prominent role for Britain as an AI safety hub at a summit this week that Vice President Kamala Harris plans to attend.
The U.S., particularly its West Coast, is home to many of the leading developers of cutting-edge AI technology, including tech giants Google, Meta and Microsoft, and AI-focused startups such as OpenAI, maker of ChatGPT. The White House leveraged that industry presence earlier this year when it secured commitments from those companies to implement safety mechanisms as they build new AI models.
But the White House has also faced significant pressure from Democratic allies, including labor and political groups, to make sure its policies reflect their concerns about AI’s real-world harms.
The American Civil Liberties Union is among the groups that met with the White House to try to ensure that “we’re holding the tech industry and tech billionaires accountable” so that algorithmic tools “work for all of us, not just a few,” said ReNika Moore, director of the ACLU’s racial justice program.
Suresh Venkatasubramanian, a former Biden administration official who helped develop principles for approaching AI, said one of the biggest challenges within the federal government was what to do about law enforcement’s use of AI tools, including at the U.S. border.
“These are all places where we know the use of automation is very problematic, with facial recognition, drone technology,” Venkatasubramanian said.
Read the original at Defence247.gr