By Rob Enderle, Contributor

Three problems with generative AI have yet to be addressed

Opinion
May 19, 2023 | 4 mins
Artificial Intelligence | Augmented Reality | Generative AI

While AI-based tools could deliver a major boost in productivity and efficiency, there's a dark side to them, as well.

[Image: a network of linked question marks. Credit: Igor Kutyaev / Getty Images]

Generative AI continues to take the world by storm, but there are growing concerns that this technology could, if not aggressively managed and regulated, do a great deal of harm. Beyond fears about the technology autonomously making decisions and taking actions against our interests, there are concerns about the training sets it relies on.

These training sets will increasingly capture everything an employee does. That data can be used to assess employee productivity; track the creation of confidential documents, offerings, and products; and eventually create a digital twin of the organization that has deployed the technology.

Let’s talk about each in turn.

Misusing training sets to gauge employee ‘productivity’

As employees increasingly use generative AI, it will capture everything they do. Monitoring what an employee does during the workday would seem an obvious use for that training data. But employees will likely feel their privacy is being violated, and if care isn’t taken to tie worker behavior to results, companies could make bad decisions.

For instance, an employee who works long hours but is relatively inefficient might be seen as more valuable than one who works short hours but is highly efficient. If the focus is on hours worked instead of results, not only will the training set favor inefficient behavior, but efficient employees who should be kept on board will be managed out.

The right way to do this is with the employee’s permission and the assurance that AI will be used to enhance, not replace, their work; the focus should be on efficiency, not outright hours worked. This way, the training set can be used to create more efficient human tools and digital twins, and to train employees to be more efficient.

Employees who know AI-based tools will be more helpful than punitive are more likely to embrace the technology.

Security is a must

There is another potential danger: the data sets created by capturing employee behavior are themselves a significant risk. They could include highly proprietary products, processes, and internal operations that competitors, governments, and hostile actors could use to gain insight into a firm’s operations.

Access to a training set from an engineer, engineering manager, or executive could provide deep insights into how they make decisions, what decisions they’ve made, plans for future products and their status, problems within the company, and secrets the company would prefer to keep secret.

Even if a specific source is hidden, a smart researcher could, just from the nature and detail of the content, determine who contributed it and what that employee does in substantial detail. That information could be highly beneficial to a hostile actor or corporate rival and needs to be protected. And since these tools enhance individual employees’ work, the likelihood of this data leaving with a departing employee, or being exposed through a compromised home office, is high.

Protecting against that is critical to the continued operations of a company.

It gets better — and worse

Once you aggregate training sets across a company, you could gain insights about the firm’s operations that could lead to a far more efficient and profitable company. (Of course, this same information in the hands of a regulator or hostile attorney could provide nearly unimpeachable evidence of wrongdoing.) Or imagine a competitor gaining access to this kind of information; they could effectively create a digital clone of the firm — and use it to better anticipate and more aggressively respond to competitive actions by the company using generative AI. 

This level of competitive exposure is unprecedented; should a rival gain access to the firm’s training files, it could effectively push the compromised company out of business.

Generative AI is a real game-changer, but it comes with risks. We know it’s not yet mature, we know its answers can’t always be trusted, and we know it can be used to create avatars designed to fool us into buying things we don’t need. And while it brings opportunities to boost employee productivity, it can also become a massive security risk.

Here’s hoping you and your company learn how to use it right. 

Rob Enderle, Contributor

Rob Enderle is president and principal analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With more than 25 years’ experience in emerging technologies, he provides regional and global companies with guidance on how to better target customer needs with new and existing products; create new business opportunities; anticipate technology changes; select vendors and products; and identify the best marketing strategies and tactics.

In addition to IDG, Rob currently writes for USA Herald, TechNewsWorld, IT Business Edge, TechSpective, TMCnet and TGdaily. Rob trained as a TV anchor and appears regularly on Compass Radio Networks, WOC, CNBC, NPR, and Fox Business.

Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group. While there he worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, GM, Ford, and Siemens.

Before Giga, Rob was with Dataquest covering client/server software, where he became one of the most widely publicized technology analysts in the world and was an anchor for CNET. Before Dataquest, Rob worked in IBM’s executive resource program, where he managed or reviewed projects and people in Finance, Internal Audit, Competitive Analysis, Marketing, Security, and Planning.

Rob holds an AA in Merchandising, a BS in Business, and an MBA, and he sits on the advisory councils for a variety of technology companies.

Rob’s hobbies include sporting clays, PC modding, science fiction, home automation, and computer gaming.

The opinions expressed in this blog are those of Rob Enderle and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.