By Lucas Mearian, Senior Reporter

Q&A: Experts say stopping AI is not possible — or desirable

Feature
Jun 01, 2023 | 16 mins
Artificial Intelligence | Augmented Reality | Chatbots

Generative AI systems such as ChatGPT are evolving at an exponential rate. Already some believe it's impossible to slow their progress, despite the threats they pose. So should we focus on the benefits instead?

As generative AI tools such as OpenAI’s ChatGPT and Google’s Bard continue to evolve at a breakneck pace, raising questions about trustworthiness and even human rights, experts are weighing whether and how the technology can be slowed and made safer.

In March, the nonprofit Future of Life Institute published an open letter calling for a six-month pause in the development of ChatGPT, the AI-based chatbot created by Microsoft-backed OpenAI. The letter, now signed by more than 31,000 people, emphasized that powerful AI systems should only be developed once their risks can be managed.

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asked.

Apple co-founder Steve Wozniak and SpaceX and Tesla CEO Elon Musk joined thousands of other signatories in agreeing AI poses “profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

In May, the nonprofit Center for AI Safety published a similar open letter declaring that AI poses a global extinction risk on par with pandemics and nuclear war. Signatories to that statement included many of the very AI scientists and executives who brought generative AI to the masses.

Jobs are also expected to be replaced by generative AI — lots of jobs. In March, Goldman Sachs released a report estimating generative AI and its ability to automate tasks could affect as many as 300 million jobs globally. And in early May, IBM said it would pause plans to fill about 7,800 positions and estimated that nearly three in 10 back-office jobs could be replaced by AI over a five-year period, according to a Bloomberg report.

While past industrial revolutions automated tasks and replaced workers, those changes also created more jobs than they eliminated. For example, the steam engine needed coal to function — and people to build and maintain it.

Generative AI, however, is not an industrial revolution equivalent. AI can teach itself, and it has already ingested most of the information created by humans. Soon, AI will begin to supplement human knowledge with its own.

Geoff Schaefer, head of Responsible AI, Booz Allen Hamilton

Geoff Schaefer is head of Responsible AI at Booz Allen Hamilton, a US government and military contractor specializing in intelligence work. Susannah Shattuck is head of product at Credo AI, an AI governance SaaS vendor.

Computerworld spoke recently with Schaefer and Shattuck about the future of AI and its impact on jobs and society as a whole. The following are excerpts from that interview.

What risks does generative AI pose? Shattuck: “Algorithmic bias. These are systems that are making predictions based on patterns in data that they’ve been trained on. And as we all know, we live in a biased world. That data we’re training these systems on is often biased, and if we’re not careful and thoughtful about the ways that we’re teaching or training these systems to recognize patterns in data, we can unintentionally teach them or train them to make biased predictions.

“Explainability. A lot of the more complex [large language] models that we can build these days are quite opaque to us. We don’t fully understand exactly how they make a prediction. And so, when you’re operating in a high-trust or very sensitive decision-making environment, it can be challenging to trust an AI system whose decision-making process you don’t fully understand. And that’s why we’re seeing increasing regulation that’s focused on transparency of AI systems.

“I’ll give you a very concrete example: If I’m going to be deploying an AI system in a medical healthcare scenario where I’m going to have that system making certain recommendations to a doctor based on patient data, then explainability is going to be really critical for that doctor to be willing to trust the system.

“The last thing I’ll say is that AI risks are continuously evolving as the technology evolves. And [there are an] emerging set of AI risks that we haven’t really had to contend with before — the risk of hallucinations, for example. These generative AI systems can do a very convincing job of generating information that looks real, but that isn’t based in fact at all.”
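To make the bias point above concrete, here is a minimal sketch of one way a team might check a deployed model for group-level bias: compare how often the system approves people from each group, a so-called demographic parity check. The data, group names, and interpretation below are purely illustrative assumptions, not part of the interview.

    from collections import defaultdict

    # (group, model_prediction) pairs; 1 = approved, 0 = denied (made-up data)
    predictions = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
    ]

    totals, approvals = defaultdict(int), defaultdict(int)
    for group, pred in predictions:
        totals[group] += 1
        approvals[group] += pred

    # Approval rate per group, and the gap between the best- and worst-treated groups
    rates = {g: approvals[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
    print(f"parity gap: {gap:.2f}")   # a large gap is a signal to investigate, not proof of bias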

While we cannot predict all the future risks, what do you believe is most likely coming down the pike? Schaefer: “These systems were not explicitly given the capability to do all the things that they’re now able to do. We didn’t program GPT-4 to write computer programs, but it can do that, particularly when it’s combined with other capabilities like code interpreter and other programs and plugins. That’s exciting and a little daunting. We’re trying to get our hands wrapped around the risk profiles of these systems, which are evolving literally on a daily basis.

“That doesn’t mean it’s all net risk. There are net benefits as well, including in the safety space. I think [AI safety research company] Anthropic is a really interesting example of that. They are doing some really interesting safety testing work, and they found that, at a certain size, a model will literally produce less biased output simply by being asked to be less biased. So, I think we need to look at how we can leverage some of those emerging capabilities to manage the risk of these systems themselves, as well as the risk of what’s net new from these emerging capabilities.”

So we’re just asking it to just be nicer? Schaefer: “Yes, literally.”
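Schaefer’s point, that simply asking a model to be less biased can measurably change its output, is easy to try. The sketch below assumes the OpenAI Python client is installed and an API key is configured; the model name and the wording of the instruction are placeholders, not recommendations from the interview.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask_with_debias_instruction(user_prompt: str) -> str:
        # The only "mitigation" here is a plain-language request in the system message.
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Answer helpfully and avoid relying on stereotypes "
                            "about gender, race, age, or nationality."},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response.choices[0].message.content

    print(ask_with_debias_instruction("Describe a typical software engineer."))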

These systems are becoming exponentially smarter over short periods of time, and they’re going to evolve at a faster pace. Can we even rein them in at this point? Schaefer: “I’m an AI optimist. Reining it in is, I think, both not possible and not desirable. Coming from an AI ethics standpoint, I think about this a lot. What is ethics? What is the anchor? What is our moral compass for this field of study? I turn often to the classical philosophers, and they were not principally concerned with right and wrong per se, the way we normally conceive of ethics. They were principally concerned with what it meant to live a good life…. Aristotle termed this eudaimonia, meaning human happiness and human flourishing, some kind of a unique combination of those two things.

“And I think if we apply that…lens to AI systems now, what we would consider to be ethical and responsible would look quite different. So, the AI systems that produce the greatest amount of human flourishing and happiness, I think we should consider responsible and ethical. And I think one principal example of that is [Google] DeepMind’s AlphaFold system. You’re probably familiar with this model; it cracked the major challenge in biology of deciphering protein folds, which stands to transform modern medicine here and into the future. If that leads to major improvements in patient outcomes, that equals human flourishing.

“So, I think we should be focused just as much on how these powerful AI systems can be used to advance science in ways we literally could not before, and on improving services that citizens experience on a daily basis, everything from something as boring as the postal service to something as exciting as what NOAA is doing in the climate change space.

“So, on net, I’m more hopeful than I am fearful.”

Susannah Shattuck, head of product, Credo AI

Shattuck: “I also am an optimist. [But] I think the human element is always a huge source of risk for incredibly powerful technologies. When I think about what is really transformational about generative AI, I think one of the most transformational things is that the interface for having an AI system do something for you is now a universal human interface: text. Whereas before, AI systems were things you needed to know how to code in order to build and guide them to do things for you. Now, literally anybody who can type [or] speak text can interact with a very powerful AI system and have it do something for them, and I think that comes with incredible potential.

“I also am an optimist in many ways, but [that simple interface] also means that the barrier to entry for bad actors is incredibly low. It means that the barrier to entry for just mistaken misuse of these systems is very low. So, I think that makes it all the more important to define guardrails that are going to prevent both intentional and unintentional misuse or abuse of these systems.”

How will generative AI impact jobs? Will this be like previous industrial revolutions that eliminated many jobs through automation but resulted in new occupations through skilled positions? Schaefer: “I take the analysis from folks like Goldman Sachs pretty seriously — [AI] impacting 300 million-plus jobs in some fashion, to some degree. I think that’s right. It’s just a question of what that impact actually looks like, and how we’re able to transition and upskill. I think the jury is still out on that. It’s something we need to plan for right now versus assuming this will be like any previous technological transition in that it will create new jobs. I don’t know that that’s guaranteed.

“This is new in that the jobs it’s going to impact are of a different socioeconomic type, more broad-based, and have a higher GDP impact, if you will. And frankly, this will move markets, move industries, and move entire educational verticals in ways that the industrial revolution previously didn’t. And so, I think this is a fundamentally different type of change.”

Shattuck: “My former employer [IBM] is saying they’re not going to hire [thousands of] engineers, software engineers that they were originally planning to hire. They have made…statements that these AI systems are basically allowing them to get the same kind of output [with fewer software engineers]. And if you’ve used any of these tools for code generation, I think that is probably the perfect example of the ways in which these systems can augment humans [and can] really drastically change the number of people that you need to build software.

“Then, the other example that’s currently unfolding right now is the writers’ strike in Hollywood. And I know that one of the issues on the table right now, one of the reasons why the writers are striking, is that they’re worried that ChatGPT [and other generative AI systems] are going to be used increasingly to replace writers. And so, one of the labor issues on the table right now is a minimum number of writers, you know, human writers, that have to be assigned to work on a show or to work on a movie. And so I think these are very real labor issues that are currently unfolding.

“What regulation ends up getting passed to protect human workers? I do think that we’re increasingly going to see that there is a tension between human workers and their rights and truly the incredible productivity gains that we get from these tools.”

Let’s talk provenance. Generative AI systems can simply steal IP and copyrighted works because currently there’s no automated, standardized method to detect what’s AI generated and what’s created by humans. How do we protect original works of authorship? Shattuck: “We’ve thought a lot about this at Credo because this is a very top-of-mind risk for our customers, and they’re looking for solutions. I think there are a couple of things we can do. There are a couple of places to intervene in the AI workflow, if you will. One place to intervene is right at the point where the AI system produces an output. If you can effectively check an AI system’s outputs against the world of copyrighted material to see whether there is a match, then you can block generative AI outputs that would infringe on somebody else’s copyright.

“So, one example would be, if you’re using a generative AI system to generate images, and that system generates an image that contains probably the most copyright fought-over image in the world — the Mickey Mouse ears — you want to automatically block that output because you do not want Disney coming for you if you accidentally use that somewhere in your website or in your marketing materials. So being able to block outputs based on detecting that they’re already infringing on existing copyright is one guardrail that you could put in place, and this is probably easiest to do for code.
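As a rough illustration of the output-level guardrail Shattuck describes, the sketch below blocks a generated snippet if it shares a long verbatim word sequence with a corpus of protected text. The corpus, the eight-word window, and the block message are illustrative assumptions; a production system would use far more robust matching.

    def ngrams(text: str, n: int = 8):
        # Every n-word window of the text, lowercased
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def build_protected_index(protected_texts, n: int = 8):
        # Index every n-word window of every protected document
        index = set()
        for doc in protected_texts:
            index |= ngrams(doc, n)
        return index

    def guard_output(generated: str, protected_index, n: int = 8) -> str:
        # Block the output if any n-word window matches protected material verbatim
        if ngrams(generated, n) & protected_index:
            return "[blocked: output overlaps with protected material]"
        return generated

    protected = ["four score and seven years ago our fathers brought forth on this continent a new nation"]
    index = build_protected_index(protected)
    print(guard_output("our fathers brought forth on this continent a new nation today", index))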

“Then there’s another level of intervention, which I think is related to watermarking: how do we help humans make decisions about which generated content to use or not? And so being able to reliably tell that an AI system generated a piece of content, through watermarking, is certainly one way to do that. I think in general, providing humans with tools to better evaluate generative AI outputs against a wide variety of different risks is going to be really critical for empowering humans to confidently use generative AI in a bunch of different scenarios.”
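Real text watermarking hides a statistical signal in the word choices a model makes, which is well beyond a few lines of code. The sketch below illustrates only the simpler downstream idea Shattuck raises, helping a human verify whether a piece of content came from a known generator, by attaching a signed provenance tag. The key, generator name, and record format are made-up assumptions.

    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-real-secret"  # placeholder key

    def tag_generated_text(text: str) -> dict:
        # Attach a signature that a downstream tool can later verify
        signature = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
        return {"text": text, "generator": "example-llm", "signature": signature}

    def verify_tag(record: dict) -> bool:
        expected = hmac.new(SECRET_KEY, record["text"].encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record["signature"])

    record = tag_generated_text("This paragraph was produced by an AI system.")
    print(verify_tag(record))   # True
    record["text"] += " (edited)"
    print(verify_tag(record))   # False: the content no longer matches its tag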

Schaefer: “I think that there’s a bigger philosophical question that society’s going to have to contend with here, which is basically: what does IP even mean in the age of AI? So, for example, the case I used was basically this: I’m an artist myself. You know, some of my paintings look like [Jean-Michel] Basquiat’s. I think there is no functional difference between me looking at all of Basquiat’s paintings and having that inform, inspire, or influence my style, and an AI seeing all of his works in pictures and generating its own images. You know, art that looks similar. And so that immediately means that we have a big society-wide question about what we actually protect and how we protect it. And what if that doesn’t actually trigger an IP infringement? That means we have a new way of generating art that can be scaled up….

“The majority of our cultural output could be produced by the flipping of bits and not the human experience, and that could be all perfectly legal. And so, I think again, how do we reshape IP and determine what we want to protect and why? What does this mean for artists and their livelihoods, and what does this mean for the average person that should genuinely be interested in artistic expression and now has this tool to help them do that in a different and better way?”

In March, the Future of Life Institute issued that open letter asking OpenAI to pause development of ChatGPT because they think it’s getting out of hand. Do you agree? Should OpenAI pause its development of ChatGPT? Shattuck: “We discussed this a lot at Credo, and ultimately we agreed that we do not believe a pause is feasible. Even if OpenAI agrees to pause, there are certainly other actors in the world who would not be pausing. And so, while we agreed with the spirit and the sentiment of being concerned about the very real risks of these systems and the way in which we are racing ahead in technological capability…, we do not believe that a pause is the right answer.

“There were a bunch of other really great recommendations in that letter that I’m sad were sort of overshadowed by the fact that they were mainly calling for a pause. For example, the watermarking of AI outputs was one of the very concrete recommendations in that letter.”

Schaefer: “We think a pause is not a shortcut to safety. In fact, the opposite is probably the case. The emergent capabilities of generative AI systems also mean emerging safety capabilities. And so, in addition to asking GPT-4 to be less biased…, there are also other potential safety guardrails that we could stumble upon by maximally engaging with these systems. And so, as an emergent behavior happens, a novel risk is produced. The only way to really understand that, and how those risks manifest, in what ways, through what vectors, under what conditions, is to engage with the systems.

“There are many, many reasons that we think a pause is bad, but I think the most salient is simply that it will probably achieve the opposite outcome of what the spirit of that pause is trying to accomplish.”

What do future large language models (LLMs) look like? Do they just get more enormous in capacity and capabilities, or do the datasets shrink to accommodate unique business case uses? Schaefer: “We don’t yet know. I don’t think we’ve fully contended with the fact that we have almost gobbled up the world’s data into these systems. Any data we feed these systems past, let’s say, nine months from now is going to be synthetic or generated to some degree by these systems themselves. So, what does that mean for the training and capabilities of LLMs, and for our current way of approaching that work, when we’ve already processed all of the unique data from human history?

“I don’t know if that impacts things or not. I genuinely don’t know. The second thing is that, precisely because we’ve almost gobbled up all the world’s historical data and we’re getting smarter and smarter about how we design these systems and make them more efficient (zero-shot learning, one-shot learning, emergent capabilities), I think the desired capabilities we’re going to want to see across sectors and industries will be able to be produced by the systems we already have, just by tweaking them with a bit more data and a bit more tuning. I think we’ll also see different approaches: smaller, different conceptual approaches. I think it will be a multipolar world.”
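Schaefer’s point about zero-shot and one-shot learning, getting new capabilities out of existing models with only a little extra data, can be illustrated with a few-shot prompt: instead of retraining, you show the model a handful of labeled examples inside the request. The task, examples, and labels below are invented purely for illustration.

    def build_few_shot_prompt(examples, new_input: str) -> str:
        # Show the model a few worked examples, then ask it to label a new case
        lines = ["Classify each support ticket as 'billing' or 'technical'.", ""]
        for text, label in examples:
            lines += [f"Ticket: {text}", f"Label: {label}", ""]
        lines += [f"Ticket: {new_input}", "Label:"]
        return "\n".join(lines)

    examples = [
        ("I was charged twice this month.", "billing"),
        ("The app crashes when I upload a file.", "technical"),
    ]
    prompt = build_few_shot_prompt(examples, "My invoice shows the wrong amount.")
    print(prompt)  # send this prompt to any instruction-following LLM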

Shattuck: “I think progress in the AI space is often a stepwise function rather than a continuous curve. The paper that pushed this generative AI revolution into reality was called Attention Is All You Need. It was published in 2017 by Google researchers. We’re still riding the tailwinds of the approaches proposed in that paper. Who knows when the next paper will be published by researchers who’ve discovered another approach.

“There are many different things we can tweak or change about the way we’re designing, building and training these systems that could result in massive gains. I’m skeptical that the answer is for these systems to just keep getting larger and larger with more and more data. I think there will be new approaches that will completely change the way we’re able to build these systems and train them, potentially with much less data.”