OpenAI lifts ban on military use of AI tools for national security scenarios

OpenAI has deleted the part of its terms and conditions that prohibited the use of its AI technology for military and warfare purposes.

An OpenAI spokesperson told Verdict that while the company’s policy does not allow its tools to be used to harm people, develop weapons, conduct communications surveillance, or injure others or destroy property, there are national security use cases that align with its mission.

“For example, we are already working with DARPA to advance the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on,” the spokesperson said, adding: “It was unclear whether these beneficial use cases would have been allowed under the term ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have those discussions.”

The ChatGPT maker’s usage policy previously included a ban on any activity that involved “weapons development” and “military and warfare.”

However, the updated policy, released on January 10, no longer includes the “military and warfare” ban.

OpenAI retained the blanket ban on using the service “to harm yourself or others,” with an example that cites using AI to “develop or use weapons.”


“We’ve updated our usage policies to make them easier to read and added service-specific guidelines,” OpenAI said in a blog post.

“We cannot predict all beneficial or abusive uses of our technology, so we proactively monitor for new trends in abuse,” the blog post added.

Sarah Myers West, managing director of the AI Now Institute, told The Intercept that AI being used to target civilians in Gaza makes this a notable moment for OpenAI to change its terms of service.

Fox Walker, an analyst at research firm GlobalData, told Verdict that the new guidelines “could well lead to further proliferation of the use of artificial intelligence in defence, security and military contexts”.

“Whether it’s using non-lethal technology, developing military strategy, or just using budgeting tools, there are many areas where AI can help military leaders without harming others or creating new weapons,” Walker said.

In October, OpenAI formed a new team to monitor, predict and try to protect against “catastrophic risks” posed by artificial intelligence, such as nuclear threats and chemical weapons.

The team, called Preparedness, will also work to address other risks such as autonomous replication and adaptation, cybersecurity threats, chemical, biological, radiological and nuclear attacks, and individualized persuasion.

In 2022, OpenAI researchers authored a study that highlighted the dangers of using large language models for warfare.


An OpenAI spokesperson previously said: “We believe that frontier AI models, which will exceed the capabilities currently found in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly serious risks.”

Research firm GlobalData estimates the total artificial intelligence market will be worth $383.3 billion by 2030, representing a compound annual growth rate of 21% between 2022 and 2030.



Read the original at Defence247.gr
