Insider Q&A: The Pentagon’s AI chief on network-centric warfare and ChatGPT

The Pentagon’s chief digital and artificial intelligence officer, Craig Martell, says he is alarmed by the potential for generative artificial intelligence systems like ChatGPT to deceive and spread disinformation. His talk on the technology at the DefCon hacker convention in August was a huge hit. But he is anything but sour on reliable AI.

Martell is not a soldier but a data scientist. He led machine learning at companies including LinkedIn, Dropbox and Lyft before taking the job last year.

Wrangling the US military’s data and determining which artificial intelligence is trustworthy enough for combat is a major challenge in an increasingly unstable world where multiple countries are racing to develop lethal autonomous weapons.

The interview has been edited for length and clarity.


Q: What is your primary mission?

A: Our job is to scale decision advantage from the boardroom to the battlefield. I don’t see it as our job to tackle a few specific missions, but rather to develop the tools, processes, infrastructure and policies that let the department as a whole scale.

Q: So the goal is global information dominance? What do you need to succeed?

A: We are finally fighting a network-centric war — getting the right data to the right place at the right time. There is a hierarchy of needs: quality data at the bottom, analytics and metrics in the middle, artificial intelligence at the top. For this to work, what matters most is high-quality data.

Q: How should we think about the use of artificial intelligence in military applications?

A: All artificial intelligence, really, is counting the past to predict the future. I don’t think the modern wave of artificial intelligence is any different.
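Martell’s point that AI is “counting the past to predict the future” can be made literal with a toy sketch (not from the interview; all names here are illustrative): a bigram model that predicts the next word purely from frequency counts of what followed each word in its training history.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words followed it and how often."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the most frequently observed successor of `word`."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Train on a tiny "past" and predict the "future":
history = "the right data to the right place at the right time"
model = train_bigrams(history)
print(predict_next(model, "the"))  # "right" — it followed "the" 3 times
```

Modern large language models are vastly more sophisticated, but the underlying idea is the same: statistics gathered from historical data drive the prediction.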

China, Ukraine

Q: Is China winning the AI arms race?

A: I find this metaphor to be somewhat of a misnomer. When we had a nuclear arms race it was with a monolithic technology. AI is not that. Nor is it Pandora’s box. It is a set of technologies that we apply on a case-by-case basis, empirically verifying whether it is effective or not.

Q: The US military is using AI technology to assist Ukraine. How are you helping?

A: Our team is not involved with Ukraine except to help build a database of how allies are providing aid. It’s called Skyblue. We just help make sure it stays organized.

Q: There is a lot of talk about autonomous lethal weapons – like attack drones. The consensus is that humans will eventually be reduced to a supervisory role – able to abort missions but mostly not interfere. Does that sound right?

A: In the military we train with a technology until we develop justified confidence. We understand the limits of a system; we know when it works and when it might not. How does this map to autonomous systems? Take my car. I trust the adaptive cruise control on it. The technology that’s supposed to keep it from changing lanes, on the other hand, is terrible. So I have no justified confidence in that system and don’t use it. Extend that to the military.

“Loyal Wingman”

Q: The Air Force’s “loyal wingman” program under development would have drones fly alongside manned fighter jets. Is computer vision good enough to tell friend from foe?

A: Computer vision has made amazing strides in the last 10 years. Whether it is useful in a particular situation is an empirical question. We need to determine the accuracy we are willing to accept for the use case, build against that criterion — and test. So we can’t generalize. I’d love for us to stop talking about technology as a monolith and talk about the capabilities we want.
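The “decide the accuracy you’ll accept, then test against it” approach Martell describes can be sketched as a simple acceptance gate (a toy illustration, not Pentagon methodology; the labels and threshold are invented): evaluate a candidate model on a held-out labeled set and accept it for the use case only if it clears the agreed criterion.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def accept_for_use_case(predictions, labels, required_accuracy):
    """Empirical acceptance test: pass/fail against a preset criterion."""
    return accuracy(predictions, labels) >= required_accuracy

# Illustrative held-out evaluation data:
labels      = ["friend", "foe", "friend", "friend", "foe"]
predictions = ["friend", "foe", "foe",    "friend", "foe"]

print(accuracy(predictions, labels))                   # 0.8
print(accept_for_use_case(predictions, labels, 0.95))  # False
```

The point of the gate is that the threshold is chosen per use case before testing, so the decision is empirical rather than a blanket judgment about the technology.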

Q: You are studying generative artificial intelligence and large-language models. When might they be used in the Department of Defense?

A: The commercial large-language models are certainly not obligated to tell the truth, so I’m skeptical. That said, through Task Force Lima (which launched in August) we are studying more than 160 use cases. We want to decide what is low-risk and safe. I’m not setting official policy here, but let’s hypothesize.

Low risk might be something like making first drafts of writing or computer code. In such cases, humans are going to edit, or in the case of software, compile. It could also potentially work for information retrieval — where facts can be validated to ensure they are correct.
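One concrete version of the low-risk check described above — treating model-generated code as a draft that must at least compile before a human reviews it — can be sketched in a few lines (an illustration of the idea, not an official workflow; real use would still involve a human editor):

```python
def draft_compiles(source: str) -> bool:
    """Return True if the drafted Python source parses and compiles."""
    try:
        compile(source, "<draft>", "exec")
        return True
    except SyntaxError:
        return False

good_draft = "def add(a, b):\n    return a + b\n"
bad_draft  = "def add(a, b)\n    return a + b\n"   # missing colon

print(draft_compiles(good_draft))  # True
print(draft_compiles(bad_draft))   # False
```

The compiler acts as an automatic first filter, which is what makes this class of use case lower-risk: errors are caught mechanically before any human relies on the output.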

Q: A big challenge with AI is recruiting and retaining the talent needed to test and evaluate systems and tag data. AI data scientists earn far more than the Pentagon has traditionally paid. How big of a problem is that?

A: This is a huge can of worms. We’ve just launched a digital talent management agency and are thinking hard about how to fill a whole new set of jobs. For example, do we really need to hire people who want to stay in the Department of Defense for 20-30 years? Probably not.

But what if we can get them for three or four? What if we paid for their college, they paid us back with three or four years of service, and then left with that experience and got hired by Silicon Valley? We’re thinking creatively like that. Could we, for example, build a diversity pipeline by recruiting from HBCUs (Historically Black Colleges and Universities)?
