Securing the Frontiers of Artificial Intelligence: A Proposal for Federal Regulation

With generative AI advancing at full throttle, a bright future beckons. The technology has the potential to enable improved medical diagnostics, personalized education, optimized allocation of resources for humanitarian efforts, and solutions that mitigate the most harmful effects of global warming. However, these same tools can also cause profound social disruption.

The technology may soon enable malicious actors to unleash pandemics, mount mass cyberattacks, and accelerate the spread of disinformation. Most worryingly, AI systems may evolve to outwit their human creators, pursuing goals of their own that stand in stark contrast to social norms. Generative artificial intelligence is poised to break through thresholds we never expected and for which society is unprepared.

This concern is not fear-mongering. It is based on the capabilities that exist today and the speed at which they are progressing. ChatGPT creator Sam Altman predicts that within the next decade we will see superintelligent AI, systems that surpass expert-level skill in most fields and are more powerful than any technology ever developed.

Dario Amodei, former vice president of research at OpenAI and current CEO of Anthropic, believes that AI could grow too independent for human control within the next two years. Finally, computer scientist Geoffrey Hinton, considered by many to be “the godfather of AI,” resigned from Google earlier this year out of concern that AI systems may already be smarter than we realize.

Despite these warnings, generative AI has no governing body, either internationally or within the United States. The technology is moving so fast, and with so little oversight, that the risks are increasing exponentially. The time to build guardrails is now, and the U.S. must lead the effort.

Senators Richard Blumenthal and Josh Hawley have championed a bipartisan framework for AI legislation that deserves attention. The proposal includes the creation of an independent federal watchdog to manage future developments in the technology. This new US government entity would monitor the sale, purchase, or transfer of computing resources that exceed certain limits. Microsoft President Brad Smith and the Center for AI Policy have publicly supported the framework.

Sam Altman, CEO of OpenAI, warned the Senate Judiciary Committee on May 16 of the need for a federal agency that licenses any effort exceeding a certain scale of capability and that can shut down non-compliant companies.

Given the devastating potential for misuse, developing such a licensing structure for general-purpose AI is a proactive step in creating a gatekeeping mechanism. The framework should set regulatory thresholds for computing capacity, development costs, and benchmark performance. As technology advances, these thresholds should be periodically reviewed and, where necessary, updated.

While the Blumenthal-Hawley framework offers a structure around which legislation and a regulatory body can be built, it lacks important details. The idea should be developed around regulation of the three key resources in AI development: computing hardware, talent, and data. All three are necessary for significant advances in generative AI models. While any regulatory body will be called upon to monitor and legislate the use of data, regulatory thresholds should focus on hardware and talent. Limiting either one, hardware or talent, would provide a safety valve on the technology.

Computing hardware is the currency of AI power. Access to the high-performance Graphics Processing Units (GPUs) that enable deep learning is a defining factor for the future of the technology. GPUs represent a physical aspect of artificial intelligence that can be monitored and registered. In practice, massive numbers of GPUs are required to train models on large datasets.

A regulatory framework must require a federal license to purchase anything that exceeds a specified threshold of high-risk computing hardware, for example, clusters of GPUs capable of training an artificial intelligence model using more than 10^24 operations. Legislation should require tracking and reporting of the transfer of any GPU assemblies beyond the high-risk threshold.
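To make the scale of such a threshold concrete, the sketch below estimates the total operations a GPU cluster could perform over a training run and compares the result against the 10^24-operation figure above. The per-GPU speed, cluster size, run length, and utilization rate are illustrative assumptions, not figures from the proposal.

```python
# Minimal sketch of how a compute-based licensing threshold might be evaluated.
# Only the 1e24-operation threshold comes from the proposal discussed above;
# the GPU performance and utilization numbers are illustrative assumptions.

HIGH_RISK_OPS_THRESHOLD = 1e24  # total training operations named above

def estimated_training_ops(num_gpus: int,
                            ops_per_second_per_gpu: float,
                            training_days: float,
                            utilization: float = 0.4) -> float:
    """Estimate total operations a GPU cluster performs over one training run."""
    seconds = training_days * 24 * 3600
    return num_gpus * ops_per_second_per_gpu * seconds * utilization

def requires_federal_license(num_gpus: int,
                             ops_per_second_per_gpu: float,
                             training_days: float) -> bool:
    """True if the estimated training compute exceeds the high-risk threshold."""
    ops = estimated_training_ops(num_gpus, ops_per_second_per_gpu, training_days)
    return ops >= HIGH_RISK_OPS_THRESHOLD

# Example: a hypothetical 10,000-GPU cluster at 1e15 ops/sec per GPU for 90 days
# yields roughly 3e25 operations, well above the 1e24 threshold.
print(requires_federal_license(10_000, 1e15, 90))  # True under these assumptions
```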

Human talent

It takes human talent to put computing hardware to use. Trained AI professionals build and train the models that unlock the next developments. Talent is expensive, both to train and to employ AI scientists and researchers. Here again, a specific high-risk threshold, for example, $50 million spent on developing and employing human talent, is necessary for a regulatory framework. Any spending above the high-risk threshold must be reported and examined for possible destructive use.

In the meantime, to keep pace with the rest of the world and to prevent the technology’s beneficial applications from being stifled, the regulation must include a fast-track system for benign AI applications. This system would exempt AI developers who do not pose significant security risks from the licensing requirement. Engineers working on non-dangerous AI tools, such as self-driving cars, fraud detection systems, and recommendation engines, could continue their work even if they exceed the hardware or talent thresholds.
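Taken together, the hardware threshold, the talent threshold, and the fast-track exemption amount to a simple gatekeeping rule. The sketch below shows one way that rule could be expressed; the list of fast-tracked categories and the decision labels are illustrative assumptions, while the $50 million and 10^24 figures come from the thresholds proposed above.

```python
# Minimal sketch of the gatekeeping logic described above. The fast-track
# categories and return labels are illustrative assumptions; the threshold
# values come from the figures proposed earlier in this piece.

HIGH_RISK_OPS_THRESHOLD = 1e24            # compute threshold proposed above
HIGH_RISK_TALENT_SPEND_USD = 50_000_000   # talent threshold proposed above
FAST_TRACK_CATEGORIES = {"self-driving", "fraud-detection", "recommendation"}

def licensing_decision(category: str,
                       training_ops: float,
                       talent_spend_usd: float) -> str:
    """Return the regulatory path for a proposed AI development effort."""
    if category in FAST_TRACK_CATEGORIES:
        # Benign applications bypass licensing even above the thresholds.
        return "fast-track: exempt from licensing"
    if (training_ops >= HIGH_RISK_OPS_THRESHOLD
            or talent_spend_usd >= HIGH_RISK_TALENT_SPEND_USD):
        return "federal license and reporting required"
    return "below thresholds: no license required"

print(licensing_decision("fraud-detection", 5e24, 80_000_000))  # fast-track
print(licensing_decision("general-purpose", 5e24, 10_000_000))  # license required
```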

The US cannot wait for the international community to develop an AI regulatory body. The Blumenthal-Hawley framework offers an opportunity for the US to lead the world in regulating artificial intelligence. International collaboration with organizations such as the UK AI Frontiers Working Group, a group of senior academics appointed to advise the Tory government, can promote information sharing. Framework-based multi-stakeholder AI safety dialogues and global forums will be instrumental in navigating the global challenges of AI. Over time, the approach among NATO countries should focus on developing an international organization to oversee the governance of artificial intelligence, with the goal of eventually incorporating the People’s Republic of China.

The Blumenthal-Hawley concept is an indispensable guide for the world’s journey with artificial intelligence. It presents a vision of a future where artificial intelligence is both safe and beneficial. By embracing and building on this framework, Congress has an opportunity to set a global gold standard for AI regulation. America, with its commitment to both innovation and security, is poised to lead the world not only in the development of artificial intelligence, but also in its governance.

Joe Buccino is a retired US Army colonel who serves as an artificial intelligence research analyst at the US Department of Defense’s Defense Innovation Council. His views do not necessarily reflect those of the US Department of Defense or any other organization.

Read the original at Defence247.gr
