WASHINGTON — The US Department of Defense has released a new strategy for using data analytics and artificial intelligence as it pushes for additional investment in artificial intelligence, advanced pattern recognition and autonomous technologies, including drones.
The document is a more mature version of a plan first published in 2018, in which the Pentagon predicted that artificial intelligence would “transform every industry” and affect all aspects of national security. It takes into account the significant development of artificial intelligence in the defense industrial base, according to Chief Digital and AI Officer Craig Martell.
“Accelerating adoption of advanced data, analytics and artificial intelligence technologies presents an unprecedented opportunity to equip leaders at all levels with the data they need to make better decisions, faster,” Martell told reporters at the Pentagon on Nov. 2.
Among the other goals outlined in the strategy are better data sets, improved infrastructure, more collaboration with groups outside the department and reform of internal barriers, which often cause technology to advance faster than the department can adopt it.
With the document, the Pentagon further explains its thinking about artificial intelligence as it builds internal structures to govern it.
The CDAO office was established in 2021. As the overseer of all things AI and analytics, it absorbed the Joint Artificial Intelligence Center, the Defense Digital Service, the Advana data platform and the role of chief data officer.
“The secretary and I are ensuring that CDAO has the authority to lead change with urgency,” Deputy Defense Secretary Kathleen Hicks said in prepared remarks from the Pentagon briefing room on Thursday.
The use of generative artificial intelligence in the military is controversial. Its main advantage is the ability to streamline mundane tasks such as finding files, searching for contact information and answering simple questions. But the technology has also been used for cyberattacks, phishing attempts and disinformation campaigns.
Hicks warned that humans will remain responsible for the use of lethal force and, as outlined in the Pentagon’s latest review of its nuclear weapons, will retain control of all decisions about the strategic arsenal.
“We are mindful of the potential dangers of artificial intelligence and determined to avoid them,” she said.
The development and deployment of semi-autonomous or fully autonomous weapons is governed by what is known as directive 3000.09, originally signed a decade ago and updated in January.
The directive is intended to reduce the risks of pairing autonomy with firepower. Not so in cyberspace, an area where leaders are increasingly championing autonomous capabilities.
Artificial intelligence has advanced rapidly this year, in part through the development of large language models such as ChatGPT, which analyze massive amounts of data to generate responses that appear human-made. These commercial programs do not yet meet the department’s standards, and Hicks admitted that much of the innovation in this space “happens outside of DOD and government.”
However, in her remarks Hicks said the Pentagon already uses models of its own. She cited “DoD elements” that were working on similar programs before ChatGPT became popular. These were trained on Pentagon data, Hicks said, and are at different levels of maturity.
“Some are actively being experimented with and even used as part of people’s regular workflows,” she said.
As the technology matures, the Pentagon has identified problems it could solve. Hicks said the Defense Department has catalogued “over 180 instances” that could benefit from the use of artificial intelligence, from analyzing battlefield assessments to summarizing datasets, including classified ones.
Task Force Lima, overseen by the CDAO, was created earlier this year to assess and guide the application of generative artificial intelligence for national security purposes.
The new strategy comes alongside an AI safety summit held in London this week, which was attended by Vice President Kamala Harris. Shortly before, the Biden administration issued an executive order on artificial intelligence safety and privacy. Despite efforts by Senate Majority Leader Chuck Schumer, D-N.Y., Congress has yet to act on the issue.
The Pentagon has requested $1.4 billion for artificial intelligence in fiscal 2024, which began on Oct. 1. A continuing resolution, which maintains the previous fiscal year’s funding levels, is in effect until mid-November.
The Department of Defense was handling more than 685 AI-related ventures as of early 2021, according to a count released by the Government Accountability Office. The Army led the pack with at least 232, the federal watchdog said; the Marine Corps, at the other end, was handling at least 33.
Noah Robertson is the Pentagon correspondent for Defense News. He previously covered national security for the Christian Science Monitor. He holds a bachelor’s degree in English and government from the College of William & Mary in his hometown of Williamsburg, Virginia.
Colin Demarest is a reporter at C4ISRNET, where he covers military networks, cyber and IT. Colin previously covered the Department of Energy and the National Nuclear Security Administration – specifically Cold War cleanup and nuclear weapons development – for a daily newspaper in South Carolina. Colin is also an award-winning photographer.
Read the original at Defence247.gr