While AI-based tools could deliver a major boost in productivity and efficiency, there's a dark side to them as well.

Generative AI continues to take the world by storm, but there are growing concerns that this technology could, if not aggressively managed and regulated, do a great deal of harm. Beyond fears about the technology making decisions and acting autonomously against our interests, other concerns involve the related training sets. These training sets will increasingly capture everything an employee does. That data can be used to assess employee productivity; track the creation of confidential documents, offerings, and products; and eventually create a digital twin of the organization that deployed the technology. Let's talk about each in turn.

Misusing training sets to gauge employee ‘productivity’

As employees increasingly use generative AI, it will capture everything they do. Using that data to monitor what an employee does during the workday would seem an obvious use for this training data. But employees will likely feel their privacy is being violated, and if care isn't taken to tie worker behavior to results, companies could make bad decisions. For instance, an employee who works long hours but is relatively inefficient might be rated higher than an employee who works short hours but is highly efficient. If the focus is on hours worked instead of results, not only will the training set favor inefficient behavior, but efficient employees who should be kept on board will be managed out.

The right way to do this is with the permission of the employee and the assurance that AI will be used to enhance, not replace; the focus should be on efficiency, not outright hours worked. This way, the training set can be used to create more efficient human tools and digital twins, and to train employees to be more efficient.
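The hours-versus-results trap above can be made concrete with a tiny sketch. All names and numbers here are invented for illustration; the point is only that ranking by raw hours and ranking by output per hour can pick opposite "top performers."

```python
# Hypothetical weekly productivity records (invented numbers for illustration).
employees = {
    "long_hours": {"hours": 60, "output": 30},   # many hours, low efficiency
    "short_hours": {"hours": 35, "output": 42},  # fewer hours, high efficiency
}

def efficiency(record):
    """Units of output produced per hour worked."""
    return record["output"] / record["hours"]

# Ranking by raw hours rewards the less efficient employee...
top_by_hours = max(employees, key=lambda name: employees[name]["hours"])

# ...while ranking by results per hour rewards the more efficient one.
top_by_efficiency = max(employees, key=lambda name: efficiency(employees[name]))

print(top_by_hours)       # long_hours
print(top_by_efficiency)  # short_hours
```

A training set built on the first ranking would learn to favor exactly the behavior a company should be managing out.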
Employees who know AI-based tools will be more helpful than punitive are more likely to embrace the technology.

Security is a must

There is another potential danger: the data sets created by capturing employee behavior could themselves be highly risky. That's because they could include highly proprietary products, processes, and internal operations that competitors, governments, and hostile actors could use to gain insights into a firm's operations. Access to a training set from an engineer, engineering manager, or executive could provide deep insights into how they make decisions, what decisions they've made, plans for future products and their status, problems within the company, and secrets a company would prefer to keep secret. Even if a specific source is hidden, a smart researcher could, just from the nature and detail of the content, determine who contributed it and what that employee does in substantial detail. That information could be highly beneficial to a hostile actor or corporate rival and needs to be protected.

Since these tools enhance individual employees' work, the likelihood of a training set escaping with a departing employee, or with one compromised in a home office, is high. Protecting against that is critical to a company's continued operations.

It gets better, and worse

Once you aggregate training sets across a company, you could gain insights about the firm's operations that lead to a far more efficient and profitable company. (Of course, this same information in the hands of a regulator or hostile attorney could provide nearly unimpeachable evidence of wrongdoing.) Or imagine a competitor gaining access to this kind of information; they could effectively create a digital clone of the firm and use it to better anticipate and more aggressively respond to competitive actions by the company using generative AI.
This level of competitive exposure is unprecedented; a rival that gained access to the firm's training files could effectively push the compromised company out of business.

Generative AI is a real game-changer, but it comes with risks. We know it's not yet mature, we know its answers can't always be trusted, and we know it can be used to create avatars designed to fool us into buying things we don't need. And while it brings opportunities to boost employee productivity, it can also become a massive security risk. Here's hoping you and your company learn how to use it right.