Ethical constraints the biggest challenge for artificial intelligence in defense research

Ethical and legal constraints will be the biggest challenge to the application of artificial intelligence (AI) in defense in the next five years, industry experts believe.

An ongoing survey by GlobalData found that, when asked what the biggest challenge for AI in defense will be in the next five years, 47.1% of respondents point to ethical and legal constraints, while 34.6% cite errors of bias and “hallucinations”.

The survey has received 208 responses to date across all GlobalData stores.

Israel’s ‘Gospel’ AI airstrike system sparks backlash

Concerns about the ethics and legality of using artificial intelligence in conflict scenarios have been exposed by the ongoing conflict in the Middle East.

“There are certainly huge ethical concerns, especially if AI is involved in making potentially lethal decisions in a real conflict scenario,” says James Marques, a defense analyst at GlobalData.

The Gaza Strip has become one such theater. As the Israel Defense Forces (IDF) launch airstrikes on Rafah, a city on the Palestinian territory’s southwestern border with Egypt, the role played by AI in the round-the-clock bombing campaign has become increasingly apparent.


The IDF has drastically increased the number of targets it can select for airstrikes through its ‘Gospel’ AI target recognition platform.

In an interview before the most recent Israeli-Palestinian conflict, former IDF chief Aviv Kochavi said the Gospel had increased the number of targets the IDF could strike in Gaza from 50 a year to 100 a day.

Questions about the data the Gospel uses to select targets – and how accurate its airstrikes are in reducing civilian harm – remain unanswered.

In addition to these growing ethical and legal concerns, respondents also expect bias and “hallucinations” to delay the implementation of AI in defense.

Hallucinations, which occur when an AI-powered chatbot presents misinformation as fact, are expected to decrease as the accuracy of AI improves.

That trend, however, is far from guaranteed, according to Marques.

“Overall, I think hallucinations will decrease, but it depends on the data set the AI systems are trained on, and the progress may not be linear,” Marques told Army Technology. “There is a possibility that introducing AI into larger data sets and more complex subjects will actually increase the chances of hallucinations, but these are all early stages in the long run.”
