If companies want their employees to embrace the potential of generative AI, they're going to have to make sure those workers actually trust it won't cost them their jobs.

As time goes by, people are becoming less trusting of artificial intelligence (AI), according to the results of a recent Pew Research Center study. The study, which involved about 11,000 respondents, found that attitudes have changed sharply in just the past two years. In 2021, 37% of respondents said they were more concerned than excited about AI. That number stayed pretty much the same last year (38%), but has now jumped to 52%. (The percentage of those who are more excited than concerned declined from 18% in 2021 to just 10% this year.)

This is a problem because, to be effective, generative AI tools have to be trained, and during training there are a number of ways data can be compromised and corrupted. If people don't trust something, they're not only unlikely to support it, they're also more likely to act against it. Why would anyone support something they don't trust? This lack of trust could well slow the evolution of generative AI and lead to more tools and platforms that are corrupted and unable to do the tasks set out for them. Some of the issues we've already seen appear to stem from users intentionally trying to undermine the technology, which makes the trust problem all the more pressing.

What makes generative AI unique

Generative AI learns from users. The underlying large language models (LLMs) are trained initially on massive datasets, but as more people use the tools, the tools learn from how those people use them. This is meant to create a better human interface that can optimize communication with each user. The tool then takes what it learns and spreads it across its instances, much as a child learns from its parents and then shares that knowledge with peers. That can create cascading problems if the information being fed in is incorrect or biased. The systems do seem able to handle infrequent mistakes and adjust to correct them, but if AI tools are intentionally misled, their ability to self-correct from that kind of attack has so far proven inadequate.

This is unlike an employee who acts out and destroys only their own work product; with AI, a misbehaving employee could corrupt the work of anyone using the tool once that employee's input is used to train other instances. In other words, an employee who undercuts genAI tools could do significant damage to their company well beyond the tasks they themselves handle.

Why trust matters

People who are worried about losing their job typically don't do a good job of training another employee, because they fear being terminated and replaced by the person they trained. If those same people are asked to train AI tools they believe will replace them, they could either refuse to do the training or sabotage it so the tool can't replace them. That's where we are now.

There is little in the media about how AI tools will help users achieve a better work/life balance, become more productive without doing more work, and (if properly trained) make fewer mistakes. Instead, we get a regular litany of how AI will take jobs, how it throws all kinds of errors, and how it will be used to hurt people. No wonder people are wary of it.

Change is hard (and risky)

IT shops have long known that rolling out technology users don't like is tricky, because those users will either avoid it or seek to break it. That's particularly troubling for a technology that will eventually affect every white-collar job, and some blue-collar jobs, in a company.
Positioning AI tools as employee aids, not employee replacements, and highlighting how those who learn to use the technology properly are more likely to get ahead, would go a long way toward getting employees on board and helping the technology mature. Forcing new technology on employees who don't want it is always risky; if workers believe it will cost them their jobs, forcing it on them can be a big mistake.

Rolling out genAI right should include a significant effort to get employees excited about, and supportive of, the technology. Otherwise, not only will the deployment fall short of expectations, it could end up doing substantial damage to the companies trying to embrace it.