What war elephants can teach us about the future of artificial intelligence in combat

The use of artificial intelligence in combat poses a thorny moral dilemma for Pentagon leaders. The conventional wisdom is that they must choose between two equally bad alternatives: either impose full human oversight of AI systems at the cost of speed and accuracy, or allow the AI to operate without any oversight at all.

In the first option, our military builds and deploys “human in the loop” AI systems. These systems adhere to the moral standards and laws of war, but are limited by the abilities of the human beings who oversee them. It is widely believed that such systems are bound to be slower than any unsupervised, “unethical” systems used by our adversaries. Unsupervised autonomous systems appear to hold a competitive edge that, if left unchallenged, could erode the West’s strategic advantage.

The second option is to sacrifice human supervision entirely for machine speed, inviting unethical and undesirable behavior from AI systems on the battlefield.

Recognizing that neither of these options is sufficient, we must adopt a new approach. Much as cybersecurity gave rise to the cyberwarrior, artificial intelligence calls for a new role – that of the “AI operator”.

With this approach, the goal is to create a synergistic relationship between military personnel and artificial intelligence without compromising the moral principles that underpin our national identity.

We must strike a balance between maintaining the human oversight that informs our ethical framework and embracing the flexibility and responsiveness of automated systems. To achieve this, we need to promote a level of human interaction with AI models that goes deeper than simple stop/go commands. We can navigate this complex duality by incorporating the innate human strengths of diversity, contextualization, and social interaction into the governance and behavior of intelligent combat systems.

What we can learn from ancient war elephants

Remarkably, there is a historical precedent that parallels the current challenge we face in integrating artificial intelligence and human decision-making. For thousands of years, “war elephants” were used in battle and logistics throughout Asia, North Africa and Europe. These highly intelligent creatures needed specialized training and a special handler, or “mahout,” to ensure the animals remained under control during battles.

War elephants and their mahouts are a powerful example of a complementary relationship. Just as we seek to direct the speed and precision of AI on the battlefield, humans were once tasked with directing the power and prowess of war elephants — directing their actions and minimizing the risk of unpredictable behavior.

Inspired by the historical relationship between humans and war elephants, we can develop a similarly balanced partnership between military personnel and artificial intelligence. By allowing artificial intelligence to complement, rather than replace, human input, we can preserve the ethical principles that are central to our core national values while benefiting from the technological advances that autonomous systems offer.

Operators as masters of AI

Introducing and integrating AI into the battlefield presents a unique challenge, as many military personnel lack deep knowledge of the development process behind AI models. Because these systems are correct most of the time, users tend to over-rely on their capabilities and overlook errors when they occur. This phenomenon is referred to as the “automation conundrum”: the better a system performs, the more likely the user is to trust it when it’s wrong, even when it’s obviously wrong.

To bridge the gap between military users and the AIs they depend on, there must be a modern mahout, or AI operator. This specialized new role would emulate that of the mahouts, who oversaw their elephants’ rearing, training and eventual deployment to the battlefield. By cultivating a close bond with these intelligent creatures, mahouts gained invaluable knowledge of their elephants’ behavior and limitations, and leveraged that knowledge to ensure success in battle and long-term cooperation.

AI operators would assume the mahout’s responsibilities for AI systems, guiding their development, training and testing to optimize combat advantages while maintaining the highest ethical standards. Possessing a deep understanding of the artificial intelligence they are responsible for, these operators would serve as links between advanced technology and the warriors who depend on it.

Diverse trainers and models can overcome the risk of system bias

Just as war elephants and humans had their own strengths, weaknesses, biases and specialized abilities, so do AI models. However, due to the cost of building and training AI models from scratch, the national security community has often chosen to modify and adapt existing “foundation” models to accommodate new use cases. While this approach may seem reasonable on the surface, it increases risk by inheriting the gaps, biases and exploitable data of the underlying models.

A better approach envisions the creation of AI models by different teams, each using unique datasets and distinct training environments. Such a change would not only spread the risk of ethical lapses associated with any individual model, but would also provide AI operators with a wider range of options tailored to the changing needs of the mission. By adopting this more nuanced approach, AI operators can ensure the ethical and strategic application of AI in warfare, ultimately enhancing national security and reducing risk.

The mahouts who trained their war elephants did not do so with the intention of sending these magnificent creatures into battle alone. Instead, they cultivated a deeply symbiotic relationship, one in which cooperation amplified the collective strengths of human and animal and produced greater results than either could achieve alone. Today’s AI operators can learn from this historical precedent as they strive to create a similar partnership between humans and AI in the context of modern warfare.

By cultivating synergy between human operators and AI systems, we can turn our commitment to ethical values from a perceived limitation into a strategic advantage. This approach embraces the fundamental unpredictability and confusion of the battlefield by leveraging the combined power of human judgment and AI capabilities. Furthermore, the potential of this collaborative method extends beyond the battlefield, suggesting additional applications wherever ethical considerations and adaptability are essential.

Eric Velte is Chief Technology Officer of ASRC Federal, the government services subsidiary of Arctic Slope Regional Corp., and Aaron Dant is Chief Data Scientist of ASRC Federal Mission Solutions.

Do you have an opinion?

This article is a letter to the editor and the views expressed are those of the authors. If you would like to respond, or have a letter or article of your own that you would like to submit, please email C4ISRNET and Federal Times Senior Managing Editor Cary O’Reilly.
