
Pew Research finds a big problem with AI: People don’t trust it

Opinion | Aug 31, 2023 | 4 mins
Artificial Intelligence, Augmented Reality, Generative AI

If companies want their employees to embrace the potential of generative AI, they're going to have to make sure those workers actually trust that it won't cost them their jobs.


As time goes by, people are becoming less trusting of artificial intelligence (AI), according to the results of a recent Pew Research study. The study, which involved 11,000 respondents, found that attitudes had changed sharply in just the past two years.

In 2021, 37% of respondents said they were more concerned than excited about AI. That number stayed pretty much the same last year (38%), but has now jumped to 52%. (The percentage of those more excited than concerned declined from 18% in 2021 to just 10% this year.)

This is a problem because generative AI tools have to be trained to be effective, and during training there are a number of ways data can be compromised or corrupted. If people do not trust something, they are not only unlikely to support it, they are more likely to act against it.

This lack of trust could well slow the evolution of generative AI, possibly leading to more tools and platforms that are corrupted and unable to do the tasks set out for them. Some of these issues already appear to stem from users intentionally trying to undermine the technology, which underscores how corrosive that distrust can be.

What makes generative AI unique

Generative AI learns from users. A tool might start as a large language model (LLM) trained on a massive corpus, but as more people use it, it learns from how they use it. This is meant to create a better human interface that can optimize communication with each user. The tool then takes this learning and spreads it across its instances, much as a child learns from its parents and then shares that knowledge with peers. That can create cascading problems if the information being provided is incorrect or biased.

The systems do seem able to handle infrequent mistakes and adjust to correct them, but when AI tools are intentionally misled, their ability to self-correct from that type of attack has so far proven inadequate. And unlike an employee who acts out and destroys only their own work product, a user who misleads an AI tool could corrupt the work of everyone using it once that user's data is used to train other instances.

This suggests that an employee who undercuts genAI tools could do significant damage to their company, well beyond their own tasks.

Why trust matters

People who worry about losing their job typically do a poor job of training another employee, for fear of being terminated and replaced by the person they trained. If those same people are asked to train AI tools they believe will replace them, they may refuse to do the training, or sabotage it so the tool cannot replace them.

That’s where we are now. There is little in the media about how AI tools will help users achieve a better work/life balance, become more productive without doing more work, and (if properly trained) make fewer mistakes. Instead, we get a regular litany of how AI will take jobs, how it throws all kinds of errors, and how it will be used to hurt people.

No wonder people are wary of it.

Change is hard (and risky)

IT types have long known that rolling out technology users don’t like is tricky, because users will either avoid it or seek to break it. This is particularly troubling for a technology that will eventually affect every white-collar job, and some blue-collar jobs, in a company. Positioning AI tools as employee aids, not employee replacements, and highlighting how those who properly use the technology are more likely to get ahead, would go a long way toward ensuring employees are on board with the technology and will help it mature.

Forcing new technology on employees who don’t want it is always risky, and if workers believe it will cost them their jobs, forcing it on them can be a big mistake. Rolling out genAI right should include a significant effort to get employees excited about, and supportive of, the technology. Otherwise, not only will the deployment fall short of expectations, but it could end up doing substantial damage to the companies trying to embrace it.

Rob Enderle, Contributor

Rob Enderle is president and principal analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With more than 25 years’ experience in emerging technologies, he provides regional and global companies with guidance on how to better target customer needs with new and existing products; create new business opportunities; anticipate technology changes; select vendors and products; and identify best marketing strategies and tactics.

In addition to IDG, Rob currently writes for USA Herald, TechNewsWorld, IT Business Edge, TechSpective, TMCnet and TGdaily. Rob trained as a TV anchor and appears regularly on Compass Radio Networks, WOC, CNBC, NPR, and Fox Business.

Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group. While there he worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, GM, Ford, and Siemens.

Before Giga, Rob was with Dataquest covering client/server software, where he became one of the most widely publicized technology analysts in the world and was an anchor for CNET. Before Dataquest, Rob worked in IBM’s executive resource program, where he managed or reviewed projects and people in Finance, Internal Audit, Competitive Analysis, Marketing, Security, and Planning.

Rob holds an AA in Merchandising, a BS in Business, and an MBA, and he sits on the advisory councils for a variety of technology companies.

Rob’s hobbies include sporting clays, PC modding, science fiction, home automation, and computer gaming.

The opinions expressed in this blog are those of Rob Enderle and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
