In addition to controlling the use of ChatGPT and other standalone tools, companies now have to grapple with generative AI being built into the productivity apps their employees use every day.

We're in the "iPhone moment" for generative AI, with every company rushing to figure out its strategy for dealing with this disruptive technology. According to a KPMG survey conducted this June, 97% of US executives at large companies expect generative AI to have a major impact on their organizations in the next 12 to 18 months, and 93% believe it will provide value to their business. Some 35% of companies have already started to deploy AI tools and solutions, while 83% say they will increase their generative AI investments by at least 50% in the next six to twelve months.

Companies have been using machine learning and AI for years now, said Kalyan Veeramachaneni, principal research scientist at MIT's Laboratory for Information and Decision Systems, which is working on developing custom generative models for tabular data. What's different now, he said, is that generative AI tools are accessible to people who are not data scientists. "It opens new doors," he said. "This will enhance the productivity of a lot of people."

According to a recent study by analyst firm Valoir, 40% of the average workday can be automated with AI, with the highest potential for automation in IT, followed by finance, operations, customer service, and sales.

It can take years for enterprises to build their own generative AI models and integrate them into their workflows, but one area where generative AI can make an immediate and dramatic business impact is when it's embedded into popular productivity apps. According to David McCurdy, chief enterprise architect and CTO at Insight, a Tempe-based solutions integrator, 99% of companies that adopt generative AI will start by using genAI tools embedded into core business apps built by someone else.

Microsoft 365, Google Workspace, Adobe Photoshop, Slack, and Grammarly are among the many popular productivity tools that now offer a generative AI component. (Some are still in private beta testing.) Employees already know and use these tools every day, so when vendors add generative AI features, the new technology immediately becomes widely accessible. In fact, according to a recent study conducted by Forrester on behalf of Grammarly, 70% of employees are already using generative AI for some or all of their writing — but 80% of them are doing so at companies that haven't officially implemented it yet.

Embedding AIs like OpenAI's ChatGPT into productivity apps is one quick way for vendors to add generative AI to their platforms. Grammarly, for instance, added genAI capabilities to its writing assistance platform in March, using OpenAI's GPT-3.5 in a private Azure cloud environment.

But soon vendors will be able to build their own custom models as well. It doesn't take millions of dollars and billions of training records to train a large language model (LLM), the foundation for a genAI chatbot, if a company starts with a pre-trained foundation model and then fine-tunes it, said Omdia analyst Bradley Shimmin. "The amount of data required for that type of training is dramatically smaller." Commercially licensed LLMs are already available, the biggest recent release being Meta's Llama 2. This means the amount of AI built into popular productivity tools is about to explode.
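
It's worth unpacking what Shimmin means, because fine-tuning is what puts custom models within reach of software vendors and larger enterprises. The sketch below is purely illustrative, not any vendor's actual pipeline: it fine-tunes a permissively licensed base model on a small set of in-house documents using the Hugging Face transformers and datasets libraries. The model name, data file, and hyperparameters are placeholders.

# Illustrative sketch: fine-tune a pre-trained open model on a small,
# domain-specific dataset. Model name, data file, and hyperparameters
# are placeholders, not a recommended recipe.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM on the Hub works
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# A relatively small corpus of in-house text can be enough for this kind of adaptation.
dataset = load_dataset("json", data_files="company_docs.jsonl")["train"]  # hypothetical file
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # in practice, parameter-efficient methods such as LoRA are common for models this size

The point of the pattern is that the expensive part, learning language itself, was already paid for during pre-training; the fine-tuning pass only has to teach the model the vocabulary and tone of a specific domain.
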
"The genie is out of the bottle," said Juan Orlandini, Insight's CTO for North America.

Generative AI can also be useful for vendors whose products aren't focused on creating new text or images. For instance, it can be used as a natural-language interface to complex back-end systems. According to Doug Ross, VP and head of insights and data at Sogeti, part of Capgemini, there are already hundreds — if not thousands — of companies adding conversational interfaces to their products. "That would indicate that there's value there," he said. "It's a different way of interacting with various databases or back ends that can help you explore data in ways that were more difficult before."

While generative AI may be a groundbreaking technology that brings a new set of risks, the traditional SaaS playbook can work when it comes to getting it under control: educating employees on the risks and benefits, setting up security guardrails to prevent employees from accessing malicious apps or sites or accidentally sharing sensitive data, and offering corporate-approved technologies that follow security best practices.

But first, let's talk about what can go wrong.

Generative AI risks and challenges

ChatGPT, Bard, Claude, and other genAI tools — as well as every productivity app that's now adding generative AI as a feature — all share a few problems that could pose risks to companies.

The first and most obvious risk is accuracy. Generative AI is designed to generate content — text, images, video, audio, computer code, and so on — based on patterns in the data it's been trained on. Its ability to provide answers to legal, medical, and technical questions is a bonus. And in fact, the AIs are often accurate; the latest releases of some popular genAI chatbots have passed bar exams and medical licensing tests. But this can give users a false sense of security, as when a couple of lawyers got in trouble for relying on ChatGPT to find relevant case law — only to discover that it had invented the cases it cited.

That's because generative AIs are not search engines, nor are they calculators. They don't always give the right answer, and they don't give the same answer every time.

For generating code, for example, large language models can have extremely high error rates, said Andy Thurai, an analyst at Constellation Research. "LLMs can have rates as high as 50% of code that is useless, wrong, vulnerable, insecure, and can be exploited by hackers," he said. "After all, these models are trained based on the GitHub repository, which is notoriously error-prone." As a result, while coding assistants can improve productivity, they can also create more work, since someone has to check that all the code meets corporate standards.

The picture gets even more complicated when you move beyond the big generative AI tools like ChatGPT to vendors adding proprietary AI models to their productivity tools. "If you put bad data into the models, you're not going to have very happy customers," said Vrinda Khurjeka, senior director of Americas business at technology consulting firm Searce. "If you're really just going to use it for the sake of having a feature, and not think about whether it will really help your customers, you will be in a lose-lose situation."

Then there's the risk of bias, she said.
"You are only going to get the outputs based on what your input data is." For example, if a tool that helps you generate customer emails is trained on your internal communications, and company culture includes a lot of swearing, then the outbound emails the tool creates can have the same language, she said. This kind of bias can have more significant implications as well, if it results in employment discrimination or biased lending practices. "It's a very real problem," she said. "What we are recommending to all of our customers is that it's not just about implementing the model once and being done with it. You need to have audits and checks and balances."

According to the KPMG survey, accuracy and reliability are among the top ten concerns companies have about generative AI. But that's just the start of the problems generative AI can create.

For example, some AIs get ongoing training based on interactions with users. The publicly available version of ChatGPT uses conversations with its users for ongoing training unless users specifically opt out. So, for example, if an employee uploads their company's secret plans and asks the AI to write some text for a presentation about those plans, the AI will then know those plans. Then, if another person, possibly at a competing company, asks about those plans, the AI might well answer and provide the details. Other information that could potentially leak out this way includes personally identifiable information, financial and legal data, and proprietary code.

According to the KPMG survey, 63% of executives say data and privacy concerns are a top priority, followed by cybersecurity at 62%. "It's a very real risk," said Forrester analyst Chase Cunningham. "Anytime you're leveraging these types of systems, they're reliant on data to improve their models, and you might not necessarily have control or knowledge of what's being used." (Note that OpenAI has just announced an enterprise version of ChatGPT that it says does not use customers' data to train its models.)

Another potential liability created by generative AI is the legal risk associated with improperly sourced training data. Several lawsuits currently making their way through the courts allege that some AI companies used pirate sites to ingest copyrighted books and scraped images from the web without artists' permission. An enterprise that heavily uses these AIs might inherit some of that liability. "I think you're exposing yourself to risk of being tied to some sort of litigious action," said Cunningham. Indeed, legal exposure was cited as a top barrier to implementing generative AI by 20% of the KPMG survey respondents.

Plus, in theory, generative AI creates original, new works inspired by the content it was trained on — but sometimes the results can wind up nearly identical to the training data. So an enterprise might accidentally end up using content that comes too close to copyright infringement.

Here's how to address these potential risks.

Employee education

It is not too early to start running generative AI training for employees. Employees need to understand both the capabilities and the limitations of generative AI, and they need to know which tools are safe to use. "There needs to be education at the enterprise level," said IDC analyst Wayne Kurtzman.
"It's incumbent on companies to set up specific guidelines, and they need an AI policy to guide users in this." For instance, genAI output should always be treated as a starting draft that employees review closely and amend as needed, not as a final product ready to send out into the world.

Enterprises need to help their employees develop critical thinking skills around AI, Kurtzman said, and to set up a feedback loop that includes an array of users who can flag issues as they crop up. "What companies want to see is productivity improvements," he said. "But they also hope that the productivity improvements are greater than the time necessary to fix any challenges that may have occurred in the adoption. This will not go as smoothly as everyone would like, and we all know that."

Companies have already started on this journey, elevating data literacy among their employees as part of their push to become data-driven enterprises, said Omdia's Shimmin. "This is really no different," he said, "except that the stakes are higher."

At Insight, for example, IT and corporate leaders have created a generative AI policy for the company's 14,000 global employees. The starting point is a safe, company-approved generative AI tool that everyone can use: an instance of ChatGPT running on a private Azure cloud. This lets employees know, "Here's a safe place," said Orlandini. "Go ahead and use it, because we verified that this is a secure environment to do it in." For any other tool Insight employees use that has recently added generative AI capabilities, the company cautions them not to share any proprietary information. "Unless we've given you permission, treat every one of those like you would Twitter, or Reddit, or Facebook," Orlandini said. "Because you don't know who's going to see it."

Security tools

The unsanctioned use of generative AI is just part of the broader unsanctioned SaaS problem, with many of the same challenges: it's hard for companies to track which apps employees are using and the security implications of each one. According to the 2023 BetterCloud State of SaaSOps report, 65% of all SaaS apps used in the enterprise are unsanctioned.

But there are cybersecurity products that track — or block — employee access to particular SaaS applications or websites, and that block sensitive data from being uploaded to outside sites and apps.

CASB (cloud access security broker) tools can help companies protect themselves from unsanctioned SaaS use. In 2020, the top vendors in this space included Netskope, Microsoft, Bitglass, and McAfee — now Skyhigh Security. There are standalone CASB vendors, but CASB features are also included in security service edge (SSE) and secure access service edge (SASE) platforms. This is a good time for companies to talk to their CASB vendors about how they track and block both standalone generative AI tools and those embedded into SaaS applications. "Our advice to security folks is to make sure they apply these web tracking tools to understand where people are going, and potentially blocking them," said Gartner analyst Jason Wong. "You also don't want to lock it down too much and inhibit productivity," he added.

DLP (data loss prevention) tools can help companies keep sensitive data from leaving the company. There are standalone tools, but DLP features are also included in email security solutions as well as security service edge (SSE), secure access service edge (SASE), and endpoint protection platforms.
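
The specifics vary widely by product, but the core idea behind applying DLP to generative AI is easy to sketch: inspect outbound prompts for sensitive patterns before they reach an external service. The example below is purely illustrative; the patterns, function name, and sample text are invented for this article, and a commercial DLP engine does far more.

# Illustrative sketch only: a pre-submission check that redacts a few common
# sensitive patterns before a prompt is sent to an external genAI service.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Return the prompt with sensitive matches masked, plus a list of findings."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

safe_prompt, findings = redact(
    "Summarize the deal terms for jane.doe@example.com, card 4111 1111 1111 1111")
if findings:
    print("Redacted before sending:", findings)
print(safe_prompt)

Real products layer on classifiers, document fingerprinting, user coaching prompts, and policy-based blocking rather than a handful of regular expressions, but the control point is the same: the prompt is inspected before it leaves the company.
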
DLP vendors are starting to address genAI specifically. For example, Forcepoint DLP promises to help enterprises control who has access to generative AI, prevent the uploading of sensitive files, and prevent the pasting of sensitive information. According to a Netskope report released in July, many industries are already starting to use such tools to help secure generative AI. In financial services, for example, 18% of companies are blocking ChatGPT, while 19% use data loss prevention tools. In healthcare, the numbers are 18% and 21%, respectively.

It's too early to tell how well these tools will work on embedded generative AI, since the way the AI is embedded is still evolving. Additionally, the degree to which these AIs can access sensitive data can vary vendor by vendor — and day by day.

'Enterprise-safe' generative AI

Addressing unsanctioned genAI use requires a carrot-and-stick approach. It's not enough to explain to employees why some generative AI is risky and to block generative AI tools. Employees also need safe, approved generative AI tools they can use. Otherwise, if the need is great enough, they'll figure out a way around the blocks by finding apps that haven't hit IT's radar yet or using personal devices to access the applications.

For example, most AI-powered image generation tools are trained on data of questionable provenance and do not provide security guarantees to users. But Adobe announced in June that it uses only fully licensed data to train its Firefly generative AI tools, and that no content uploaded by users is used to train its AIs. The company says it's also planning to offer enterprise users legal indemnification against IP lawsuits related to Firefly output. In July, Shutterstock followed suit, offering similar legal indemnification to enterprise customers. Shutterstock was already using fully licensed images to train its AI, and last fall it announced a compensation fund for artists whose images are used as training data. The company says it has already compensated "hundreds of thousands of artists" and plans to make payments to millions more.

Similarly, with text generation, some enterprise-focused platforms are working hard to build systems that can assure companies their data will not leak to other users. For example, Microsoft's Azure OpenAI Service promises that customer data will not be used to train its foundational models and that the data is protected by enterprise-grade compliance and security controls. Some text AI companies have also begun signing deals with content providers to get fully licensed training data; in July, for example, OpenAI signed a deal with the Associated Press for access to part of its text archive. But these efforts are still in their infancy.

Enterprise-grade productivity tools that add generative AI features — like Microsoft 365 Copilot — are likely to have good data protections in place, according to Insight CTO McCurdy. "If they're doing it correctly, they'll use all the same controls they have today to protect your information," he said. As of this writing, Microsoft 365 Copilot is in the early access phase.

This is a fast-moving area, however, and it's hard to keep up with which vendors are rolling out which features, and how secure they are.
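
For teams evaluating these enterprise options, the mechanics are usually familiar. As one point of reference, here is a minimal sketch of calling a model through the Azure OpenAI Service mentioned above, using the openai Python SDK (v1.x); the endpoint, deployment name, and API version are placeholders, and the data-handling guarantees come from Microsoft's service terms, not from anything in the code itself.

# Minimal sketch of calling a model through Microsoft's Azure OpenAI Service
# with the openai Python SDK (v1.x). Endpoint, deployment name, and API
# version are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed; check the currently supported version
)

response = client.chat.completions.create(
    model="gpt-35-turbo-deployment",  # your deployment name, not the raw model ID
    messages=[{"role": "user",
               "content": "Draft a two-sentence status update for the Q3 project."}],
)
print(response.choices[0].message.content)

The call looks almost identical to using the public OpenAI API; the difference that matters to enterprises is contractual and architectural: which tenancy the requests run in and what the provider is allowed to do with the data.
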
Some enterprise vendors have already run into problems with their generative AI privacy and security policies. Zoom, for example, updated its terms of service to indicate that it could use customer video calls to train its AI models. After a public outcry, the company backtracked; today it says none of this information is used to train its AI models, and account owners or administrators can choose to enable or disable its AI-powered meeting summary feature.

Other vendors are getting out in front of the issue. Grammarly, for example, which uses OpenAI for its generative AI, promises that customers' data is isolated so that information won't leak from one user to another, and says that no data is used by partners or third parties to train AIs.

Continuing complications

But not all vendors are going to be forthcoming about where their training data comes from or how customer data is secured. Clients need to be able to understand how a particular AI will interact with their data, said Gartner's Wong. "But it's hard to get that level of transparency," he added. Until there are regulations in place, organizations can try to push their vendors for more information, but they should be prepared for some resistance.

With productivity apps that are embedding new generative AI features, things may get even more complicated, he added. "In an embedded solution, you're probably going to be using lots of different models," he said.

Ideally, all software vendors would provide customers with a full AI bill of materials, explaining what models they use, how those models are trained, and how customer data is protected. "There's some early conversation about that going around, but it will be exceptionally difficult," said Forrester's Cunningham. "We have a problem getting traditional vendors to do this with regular software. If you're a big enough customer, you could ask… but if you ask for specifics, it's unlikely that they would have the totality of what customers would need because it's so new and nuanced."

Often, a company's security team might not even know that a particular application the company uses has added generative AI features, he said. "The majority of folks I'm talking to are pulling their hair out trying to deal with this," he said. "The Pandora's Box has been ripped open." If all software will now have AI built in, companies will have to review their entire software portfolio, he said. "It's a real problem."

Companies could block everything, but that would be a sledgehammer solution — and hurt productivity, he said. That's why it's imperative to educate employees about the risks and liabilities of using generative AI, he added. "It's perfectly valid for an organization to say, 'You can use this at home, but you're potentially endangering the company and your job by using these tools in the wrong way.'"