Representatives from 28 countries, including the US and China, agreed on Wednesday to work together to limit the potentially “catastrophic” risks posed by rapid advances in artificial intelligence.
The first international AI Safety Summit, held at a former code-breaking spy base near London, focused on state-of-the-art “frontier” artificial intelligence that some scientists warn could threaten humanity’s very existence.
British Prime Minister Rishi Sunak said the declaration was “a landmark achievement that sees the world’s biggest AI powers agree on the urgency of understanding the risks of AI – helping to secure the long-term future of our children and grandchildren.”
But Vice President Kamala Harris urged Britain and other countries to go further and faster, highlighting the transformations already being brought about by artificial intelligence and the need to hold tech companies accountable — including through legislation.
In a speech at the US Embassy, Harris said the world must start acting now to address the “full spectrum” of AI risks, not just existential threats such as mass cyberattacks or artificially engineered bioweapons.
“There are additional threats that also demand our action, threats that are currently causing harm and which, to many people, also feel existential,” she said, citing a senior citizen whose health care plan was cut off because of a faulty AI algorithm, or a woman threatened by an abusive partner with deepfake photos.
The AI Safety Summit is a labor of love for Sunak, a tech-loving ex-banker who wants the UK to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.
Harris is due to attend the summit on Thursday, along with government officials from more than a dozen countries including Canada, France, Germany, India, Japan and Saudi Arabia — and China, which was invited over the protests of some members of Sunak’s ruling Conservative Party.
Getting nations to sign the agreement, dubbed the Bletchley Declaration, was an achievement, even if it is light on details and does not propose a way to regulate the development of artificial intelligence. The countries pledged to work towards “shared agreement and responsibility” on the risks of artificial intelligence and hold a series of further meetings. South Korea will hold a mini virtual AI summit in six months, followed by an in-person one in France a year from now.
China’s Vice Minister of Science and Technology Wu Zhaohui said AI technology is “uncertain, inexplicable and lacks transparency”.
“It brings ethical, security, privacy and justice risks and challenges. Its complexity is emerging,” he said, noting that Chinese President Xi Jinping launched the country’s Global AI Governance Initiative last month.
“We call for global collaboration to share knowledge and make artificial intelligence technologies available to the public under open source terms,” he said.
Tesla CEO Elon Musk is also scheduled to discuss artificial intelligence with Sunak in a live chat Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the dangers artificial intelligence poses to humanity.
European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres, executives from US AI companies such as Anthropic, Google’s DeepMind and OpenAI, and prominent computer scientists such as Yoshua Bengio, one of the “godfathers” of artificial intelligence, are also attending the meeting at Bletchley Park, a former top-secret base for World War II codebreakers that is considered the birthplace of modern computing.
Attendees said the closed-door format encouraged healthy dialogue. Informal networking sessions help build trust, said Mustafa Suleyman, CEO of Inflection AI.
Meanwhile, in official discussions “people have been able to make very clear statements, and that’s where you see significant disagreements, both between countries in the North and the South (and) countries that are more in favor of open source and those that are less so,” Suleyman told reporters.
Open source AI systems allow researchers and experts to quickly discover problems and address them. But the downside is that once an open-source system is released, “anyone can use it and tune it for malicious purposes,” Bengio said on the sidelines of the meeting.
“There is this incompatibility between open source and security. So how do we deal with it?”
Only governments, not companies, can protect people from the dangers of artificial intelligence, Sunak said last week. However, he also urged against rushing to regulate AI technology, saying it must first be fully understood.
Harris, by contrast, emphasized the need to address AI in the here and now, including “social harms that are already happening, such as bias, discrimination and the spread of misinformation.”
She pointed to President Joe Biden’s executive order this week, setting out AI safeguards, as evidence that the US is leading by example in developing AI rules that work in the public interest.
Harris also encouraged other countries to sign a US-backed pledge to adhere to the “responsible and ethical” use of artificial intelligence for military purposes.
“President Biden and I believe that all leaders have a moral, ethical and societal duty to ensure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits,” she said.
Read the original at Defence247.gr