Three issues with generative AI still need to be solved

opinion
Apr 20, 2023 | 5 mins
Artificial Intelligence | Augmented Reality | Generative AI

While generative AI has captured the attention and imagination of many in the tech world, underlying issues could hinder its spread.

Image credit: Gearstd/Shutterstock

Disclosure: Qualcomm and Microsoft are clients of the author.

Generative AI is spreading like a virus across the tech landscape. It’s gone from being virtually unheard of a year ago to being one of, if not the, top trending technologies today. As with any technology, issues tend to surface with rapid growth, and generative AI is no exception.

I expect three main problems to emerge before the end of the year that few people are talking about today.

The critical need for a hybrid solution

Generative AI relies on massive language models, is processor-intensive, and is rapidly becoming as ubiquitous as browsers. This is a problem because existing, centralized datacenters aren’t structured to handle this kind of load. They are I/O-constrained, processor-constrained, database-constrained, cost-constrained, and size-constrained, making a massive increase in centralized capacity unlikely in the near term, even though the need for this capacity is going vertical. 

These capacity problems will increase latency, reduce reliability, and over time could throttle performance and reduce customer satisfaction with the results. What’s needed is a more hybrid approach, where the AI components necessary for speed are retained locally (on devices) while the majority of the data resides centrally, reducing datacenter loads and decreasing latency.

Without a hybrid solution — where smartphones and laptops can do much of the work — use of the technology is likely to stall as satisfaction falls, particularly in areas such as gaming, translation, and conversations, where latency will be most annoying. Translation will be especially problematic because the way translations are done will always introduce some latency. If the AI system adds more, it could make the related tool unusable. 
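To make the hybrid idea concrete, here is a minimal sketch of how a client might split the work: latency-sensitive requests such as live translation stay on a small on-device model, while heavier jobs fall back to the datacenter. Every name, function, and threshold below is hypothetical and purely illustrative; it is not any vendor’s actual API.

```python
# Hypothetical sketch of hybrid (on-device + datacenter) routing for
# generative AI requests. All names here are illustrative, not a real API.

from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    latency_budget_ms: int  # how long the user will tolerate waiting


def run_on_device(req: Request) -> str:
    # Stand-in for a small, quantized model running locally on a phone or laptop.
    return f"[local model] {req.prompt[:40]}..."


def run_in_datacenter(req: Request) -> str:
    # Stand-in for a large, centrally hosted model reached over the network.
    return f"[cloud model] {req.prompt[:40]}..."


# Assumed round-trip cost of going to the datacenter (network + queueing).
ESTIMATED_CLOUD_OVERHEAD_MS = 300


def route(req: Request) -> str:
    """Send latency-sensitive work to the device; everything else to the cloud."""
    if req.latency_budget_ms < ESTIMATED_CLOUD_OVERHEAD_MS:
        return run_on_device(req)   # e.g., live translation or conversation
    return run_in_datacenter(req)   # e.g., long document summarization


if __name__ == "__main__":
    print(route(Request("Translate 'good morning' to Japanese", latency_budget_ms=150)))
    print(route(Request("Summarize this 200-page report", latency_budget_ms=5000)))
```

The specific threshold doesn’t matter; the point is the split. Keeping the latency-critical path on the device is what takes the round-trip delay out of the user’s way and the load off the datacenter.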

Qualcomm has released a performance report that looks particularly troubling in this regard, but it is likely only the tip of a nasty performance problem to come.

The problem of security

The language models generative AI uses include information that has not been fully vetted. Just this week, Elon Musk threatened to sue Microsoft for its use of Twitter to train its generative AI model. Given that Twitter’s data comes from its customers (and wasn’t created by Twitter), the lawsuit is likely to fail on its merits — that is, if Musk ever files the suit to begin with; his history is longer on threats than actions. Even so, the threat highlights a growing concern about who owns the results AI tools generate.

You can derive a lot by successfully analyzing readily available information. I was a competitive analyst for a time, and it was amazing how much we could find out about a competitor’s activities using publicly available information. But that research was all done by hand using comparatively little data. These new AI models can pull petabytes of data now and will quickly grow to use exabytes and even zettabytes in the future. 

The ability of these tools to uncover secrets from businesses, governments, and individuals will be unprecedented, and the security technology needed to mitigate the problem not only doesn’t exist today, it may be impossible to create. 

Worse for some is that these tools can analyze data after the fact, often decades after the fact, surfacing things that were thought to be safely buried. 

Relationship trouble?

Generative AI tools can interact with others on our behalf and present a very different personality than our own. The tools can recreate our image, voice, and even our unique mannerisms while acting as proxies. 

I was once an actor. One of the problems actors have is that others tend to conflate an actor with a role that has little to do with who they really are. In relationships, your significant other might well be in love with a character you play, not the person you are. 

With generative AI, this will be a problem at scale as we increasingly allow these tools to interact with co-workers and maybe even interact on our behalf on dating sites. The disconnects between who people think you are (based on an AI proxy) and who you really are could damage trust and make lasting relationships, both personal and career, problematic.

We often go to a lot of trouble when building a relationship to hide our faults; AI tools could make that even easier. As a result, we may find it nearly impossible to trust those around us. 

Despite these issues, generative AI has the potential to massively improve productivity, act as a proxy, provide near-instant translation, and deliver answers to questions that have so far gone unanswered. But the issues with latency, security, and trust are real. And they come in addition to fears about job losses that have been hanging over the technology since it arrived.

I’m not arguing against generative AI; I doubt we could stop its advance even if we wanted to. But we have to begin thinking about how to deal with these issues before the damage becomes unmanageable.


Rob Enderle is president and principal analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With more than 25 years’ experience in emerging technologies, he provides regional and global companies with guidance in how to better target customer needs with new and existing products; create new business opportunities; anticipate technology changes; select vendors and products; and identify best marketing strategies and tactics.

In addition to IDG, Rob currently writes for USA Herald, TechNewsWorld, IT Business Edge, TechSpective, TMCnet and TGdaily. Rob trained as a TV anchor and appears regularly on Compass Radio Networks, WOC, CNBC, NPR, and Fox Business.

Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group. While there he worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, GM, Ford, and Siemens.

Before Giga, Rob was with Dataquest covering client/server software, where he became one of the most widely publicized technology analysts in the world and was an anchor for CNET. Before Dataquest, Rob worked in IBM’s executive resource program, where he managed or reviewed projects and people in Finance, Internal Audit, Competitive Analysis, Marketing, Security, and Planning.

Rob holds an AA in Merchandising, a BS in Business, and an MBA, and he sits on the advisory councils for a variety of technology companies.

Rob’s hobbies include sporting clays, PC modding, science fiction, home automation, and computer gaming.

The opinions expressed in this blog are those of Rob Enderle and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.