The US government should take the lead in AI development and governance, as it has in the past with technologies including the Internet, GPS and the mapping of the human genome, House Republicans and Democrats said in a rare example of bipartisan consensus on Capitol Hill.
What steps to take, and how quickly to act, is where the two sides agree least, though the divide does not necessarily fall along traditional lines between government and free markets.
The Biden administration has moved quickly to regulate artificial intelligence through an executive order and pending follow-up regulations, a strategic research plan, a risk management framework, a skills list for job seekers, and a blueprint for an AI Bill of Rights. Critics say such regulatory actions could stifle the development of a promising technology that could transform the world.
In a hearing Wednesday of the House Oversight Committee’s Subcommittee on Cybersecurity, Information Technology and Government Innovation, Rep. Gerry Connolly, D-Va., argued that while the government is often characterized as nefarious and bureaucratic, that narrative is sometimes “really skewed.”
“We have to recognize that the federal government has done some spectacular things,” he said. “We wouldn’t have had the Internet but for what was called the ARPANET for 25 years — a 100% federally funded R&D project. We would not have mapped the human genome without 100% federally funded research programs that are going to transform medicine. We wouldn’t have GPS, which is now universal. We would have no radar. There’s a whole series.”
As artificial intelligence expands, agencies have begun to identify ways the technology can make government work more efficient or accurate. Raj Iyer, former chief information officer of the U.S. Army and now head of global public sector at ServiceNow, said in a statement issued after the hearing that nearly every new contract signed by the federal government in 2024 will likely have an AI component.
Samuel Hammond, senior economist at the Foundation for American Innovation, testified that progress in artificial intelligence is accelerating so quickly that current forecasts put the arrival of a system that can match or surpass human intelligence as early as 2026.
In light of this, regulators and Congress are grappling with how to write policy and risk management frameworks that aim at an ever-moving target. Lawmakers from both parties also questioned whether too many rules might drive the industry away or allow foreign competitors to get ahead.
“I’m just wondering whether, because of the limitations that we have, the structure of our government itself, and our desire in government to make sure that individual rights are respected in this technological process, we’re losing too much ground, with China ending up way ahead of us,” Rep. Stephen Lynch, D-Mass., said at the hearing.
“I am concerned that business will simply relocate overseas if our regulatory framework becomes too complex or burdensome,” said Rep. William Timmons, R-S.C.
Although polls have shown bipartisan support for regulating the development of artificial intelligence, some have expressed concern that further expansion of these efforts may come too late or prove too heavy-handed. Where Republicans and Democrats sometimes differ is on whether development should be federally funded, according to a poll by Ipsos, a market research company.
Another challenge mentioned in the hearing was the need to hire and retain a technologically savvy workforce to implement top-down directives.
“Government can’t govern AI if it doesn’t understand AI,” Daniel Ho, a professor at Stanford Law School, said at the hearing.
President Biden’s Oct. 30 executive order on AI sets out 150 requirements, as tracked by Stanford, meaning there is a large workforce demand to implement it. But amid persistent skills shortages in cyber and IT, Ho said, less than 1 percent of AI PhDs pursue careers in public service.
There are also open questions about who will lead this workforce once it is up and running. The Office of Management and Budget proposed a requirement to appoint chief artificial intelligence officers in its draft guidance, for which the public comment period closed this week.
As the role of AI leadership in government takes shape, witnesses said there may still be some variation in the actual structure of these offices, as there is with chief diversity officers.
Ho said it may not always make sense for an AI chief to report to the chief information officer, depending on available resources. Depending on OMB’s final guidance, agencies may also have the option to designate an existing official, such as the chief data or chief technology officer.
“There needs to be some systematic set of management standards and practices, principles and titles with commensurate responsibility,” Connolly said.
Molly Weisner is a staff reporter for the Federal Times, where she covers labor, policy and contracts related to the government workforce. She had previous stops at USA Today and McClatchy as a digital producer and worked at The New York Times as a copy editor. Molly majored in journalism at the University of North Carolina at Chapel Hill.