AI can ‘disproportionately’ help defend against cyber security threats, says Google CEO Sundar Pichai

Google CEO Sundar Pichai speaks in conversation with Emily Chang during the APEC CEO Summit at Moscone West on November 16, 2023 in San Francisco, California. The APEC summit is being held in San Francisco and will last until November 17.

Justin Sullivan | Getty Images News | Getty Images

MUNICH, Germany — Rapid advances in artificial intelligence could help bolster defenses against cyber security threats, according to Google CEO Sundar Pichai.

Amid growing concerns about the potentially malicious uses of artificial intelligence, Pichai said AI tools could help governments and companies speed up the detection of — and response to — threats from hostile actors.

“We are right to be concerned about the impact on cyber security. But artificial intelligence, I think, is actually, on the contrary, strengthening our defense in cyber security,” Pichai told delegates at the Munich Security Conference late last week.

Cyberattacks are growing in volume and sophistication as malicious actors increasingly use them as a way to exert power and extort money.

Cyberattacks are estimated to have cost the global economy $8 trillion in 2023 — a sum expected to grow to $10.5 trillion by 2025, according to cyber research firm Cybersecurity Ventures.

A January report from Britain’s National Cyber Security Centre — part of GCHQ, the country’s intelligence agency — said AI will only increase these threats, lowering the barriers to entry for hackers and enabling more malicious cyber activity, including ransomware attacks.

“AI disproportionately helps people who are defending because you get a tool that can affect it at scale versus people who are trying to exploit.”

Sundar Pichai

CEO at Google

But Pichai said AI also reduces the time it takes for defenders to detect attacks and react against them. He said this would reduce what is known as the defenders’ dilemma, whereby hackers only need to be successful once on a system, while a defender needs to be successful every time to protect it.

“AI disproportionately helps people who are defending because you get a tool that can affect it at scale versus people who are trying to exploit,” he said.

“So, in a way, we’re winning the race,” he added.

Google last week announced a new initiative offering artificial intelligence tools and infrastructure investments designed to strengthen online security. A free, open-source tool called Magika aims to help users detect malware, the company said in a statement, while a white paper proposes measures and research to create guardrails around artificial intelligence.

Pichai said the tools are already being used in the company’s products, such as Google Chrome and Gmail, as well as its internal systems.

“Artificial intelligence is at a definitive crossroads — one where policymakers, security professionals and civil society have the chance to finally tilt the cybersecurity balance from attackers to cyber defenders,” the company said.

The launch coincided with the signing of an agreement by major companies at the MSC to take “reasonable precautions” to prevent the use of artificial intelligence tools to disrupt democratic elections in the 2024 election year and beyond.

Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok and X, formerly Twitter, were among the signatories to the new agreement, which includes a framework for how companies should respond to “deepfakes” generated by artificial intelligence and designed to deceive voters.

It comes as the Internet becomes an increasingly important sphere of influence for both individuals and malicious state-backed actors.

Former US Secretary of State Hillary Clinton on Saturday described cyberspace as “a new battlefield”.

“The technological arms race has just gone up another notch with generative artificial intelligence,” she said in Munich.

“If you can run a little bit faster than your opponent, you’ll do better. That’s what AI really gives us defensively.”

Mark Hughes

President of security at DXC

A report published last week by Microsoft found that state-sponsored hackers from Russia, China and Iran have been using OpenAI’s large language models (LLMs) to boost their efforts to defraud targets.

Russian military intelligence, Iran’s Revolutionary Guards and the governments of China and North Korea are said to have relied on the tools.

Mark Hughes, president of security at IT services and consulting firm DXC, told CNBC that bad actors were increasingly relying on a ChatGPT-inspired hacking tool called WormGPT to perform tasks such as reverse engineering code.

However, he said he is also seeing “significant gains” from similar tools that help engineers detect and defend against attacks at speed.

“It gives us the ability to accelerate,” Hughes said last week. “A lot of the time in cyber, the attackers have a time advantage over you. That’s often the case in any conflict situation.

“If you can run a little bit faster than your opponent, you’ll do better. That’s what AI really gives us defensively right now,” he added.
