Rob Enderle
Contributor

Pausing AI development is a foolish idea

Opinion
Apr 06, 2023 | 4 mins
Artificial Intelligence | Augmented Reality | Generative AI

The recent call by tech leaders for a slowdown in the development of generative AI tools won't work now — the AI horse is already out of the barn.


A group of influential and informed tech types recently put forward a formal request that AI rollouts be paused for six months. I certainly understand concerns that artificial intelligence is advancing too fast, but trying to stop it in its tracks is a recurring mistake made by people who should know better.

Once a technology takes off, it’s impossible to hold back, largely because there’s no strong central authority with the power to institute a global pause — and no enforcement entity to ensure the pause directive is followed. 

The right approach would be to create such an authority beforehand, so there's some way to ensure the intended outcome. I tend to agree with former Microsoft CEO Bill Gates that the focus should be on assuring AI reliability, not trying to pause everything.

Why a pause won’t work

I’ll step around arguing that a pause is a bad idea and instead focus on why it won’t work. Take the example of a yellow flag during a car race. That’s roughly how a pause should work: everyone holds position until the danger passes, or in the case of AI, until we better understand how to mitigate its potential dangers.

But just as in a car race, there are countries and companies that are ahead and others at various distances behind. Under a yellow flag in a car race, the cars that are behind can catch up to the leading cars, but they aren’t allowed to pass. The rules are enforced by track referees who have no analogue in the real world of companies and countries. Even organizations like the UN have little to no visibility into AI development labs, nor could they ensure those labs stand down.

As a result, those leading the way in AI technology are unlikely to slow their efforts because they know those following won’t, and those playing catch-up would use any pause to, well, catch up. (And remember, the people working on these projects are unlikely to take a six-month, paid vacation; they’d continue to work on related technology, regardless.)

There simply is no global mechanism to enforce a pause in any technological advance that has already reached the market. Even development on human clones, which is broadly outlawed, continues around the world. What has stopped is almost all transparency about what is being done and where. (And cloning efforts have never reached the level of use that generative AI has achieved in a few short months.)

The request was premature; regulation matters more

Fortunately, generative AI isn’t yet general-purpose AI. That is the AI that should bring with it the greatest concern, because it would have the ability to do almost anything a machine or person can do. And even then, a six-month pause would do little beyond perhaps shuffling the competitive rankings, with those adhering to any pause falling behind those who don’t.

General AI is believed to be more than a decade in the future, giving us time to devise a solution that’s likely closer to a regulatory and oversight body than a pause. In fact, what should have been proposed in that open letter was the creation of just such a body. Regardless of any pause, the need is to ensure that AI won’t be harmful, making oversight and enforcement paramount.

Given that AI is being used in weapons, what countries would allow adequate third-party oversight? The answer is likely none — at least until the related threat rivals that of nuclear weapons. Unfortunately, we haven’t done a great job of regulating those either. 

Since there’s no way to get global consensus (let alone enforce a six-month pause on AI), what’s needed now is global oversight and enforcement coupled with backing for initiatives like the Lifeboat Foundation’s AIShield or some other effort to create an AI defense against hostile AI. 

One irony associated with the recent letter is that signatories include Elon Musk, who has a reputation for being unethical (and tends to rebel against government direction), suggesting such a mechanism wouldn’t even work with him. That’s not to say the effort is without merit. But the correct path, as Gates lays out in his post, is setting up guardrails ahead of time, not after the AI horse has left the barn.

Rob Enderle
Contributor

Rob Enderle is president and principal analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With more than 25 years’ experience in emerging technologies, he provides regional and global companies with guidance on how to better target customer needs with new and existing products; create new business opportunities; anticipate technology changes; select vendors and products; and identify best marketing strategies and tactics.

In addition to IDG, Rob currently writes for USA Herald, TechNewsWorld, IT Business Edge, TechSpective, TMCnet and TGdaily. Rob trained as a TV anchor and appears regularly on Compass Radio Networks, WOC, CNBC, NPR, and Fox Business.

Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group. While there he worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, GM, Ford, and Siemens.

Before Giga, Rob was with Dataquest covering client/server software, where he became one of the most widely published technology analysts in the world and was an anchor for CNET. Before Dataquest, Rob worked in IBM’s executive resource program, where he managed or reviewed projects and people in Finance, Internal Audit, Competitive Analysis, Marketing, Security, and Planning.

Rob holds an AA in Merchandising, a BS in Business, and an MBA, and he sits on the advisory councils for a variety of technology companies.

Rob’s hobbies include sporting clays, PC modding, science fiction, home automation, and computer gaming.

The opinions expressed in this blog are those of Rob Enderle and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.