Athenahealth has used AI for years, but generative AI opened up new efficiencies in processing healthcare requests and records. There were challenges, too, as Heather Lane, Athenahealth’s senior architect of data science and platform engineering, explained.

Athenahealth provides software and services for medical groups and health systems around the country, and finding efficiencies through the use of artificial intelligence (AI) has long been a part of its DNA, so to speak. For example, the healthcare technology provider has already been using machine learning to sift through tens of millions of faxes it receives electronically each year so they can be attached to the proper patient record.

But the company’s use of AI changed dramatically when, a little more than a year ago, OpenAI announced ChatGPT. Athenahealth recognized the generative AI (genAI) platform’s promise of creating new efficiencies, both for clients and for its own internal processes.

Earlier this month, Athenahealth unveiled a range of new generative AI-driven capabilities across its product line, including its cloud-based suite of electronic health records (EHR), revenue cycle management, and patient engagement tools. One newly deployed genAI capability can intelligently summarize the labels on patient healthcare documents so providers can more easily find the information most relevant at the point of care. Another feature will identify missing or incorrect information before a prior authorization for care is submitted, to maximize the chance the authorization will be approved.

Heather Lane, Athenahealth’s senior architect of data science and platform engineering, has technical oversight of the company’s AI strategy and oversaw not only genAI product deployments but the creation of a team that continues to explore new ways of using the tech. Lane spoke with Computerworld about how genAI, in the form of ChatGPT accessed through Microsoft’s Azure platform, has been deployed and what the organization hopes to gain in coming years. The following are excerpts from that interview:

Is generative AI as promising as many claim?

“I think the discussion in the industry is between the people who believe it’s an ‘iPhone moment’ and the people who believe it’s hype. I think it remains to be seen who’s right. Personally, I’m betting on an iPhone moment.”

How did you create an AI team to address the rollout of the technology, and who did that consist of?

“We have a data science team and we’re gradually, broadly calling it the AI team. We’re not the only ones at Athena who do AI, but we are the majority team that does machine learning and artificial intelligence. The team has been around about a year and a half now.”

What sort of things did your team do to learn AI skills? Did you educate employees on AI internally or hire talent to address an AI skills shortage?

“We have mostly taught. The effort to level up in generative AI goes well beyond just the AI team. We took on a significant internal education activity this year. We called it a codefest, the next step up from a hackathon. And we framed it around … three use cases … that are just going to alpha now. The objective was to get those three cases to early deployment and educate a bunch of engineers on how this technology works, and not just engineers but our product people, our [user experience] people, and so on. They all have to have some understanding of this technology.
“Finally, we needed to build some institutional understanding of this technology: where the costs and benefits are, where the strengths and weaknesses are, and also the legal, regulatory, safety, and security issues that need to be considered.

“We had these multiple goals and went into it pretty heavily organizationally. Along with the three use cases we were targeting for alpha deployment, we also had 10 other projects that were exploratory level and about 40 that we reviewed at an internal board level but didn’t launch into exploring.

“We had about 300 developers going through a generative AI bootcamp. We logged over 2,200 hours of generative AI training time internally. We had externally run knowledge sessions where we invited in speakers from organizations like Microsoft and OpenAI. We logged over 700 attendees. We logged 167 employees getting hands-on with internal, secure, data-compliant versions of ChatGPT. We produced over 100 pages of documentation, and on the order of 10,000 lines of code. So there was quite a bit of work and a serious organizational commitment going into learning about this.”

You created 10,000 lines of code. For what purpose?

“There was a certain amount of infrastructure investment we had to do underneath the hood to support all of them. All of those needed code in order to enable a generative AI capability. OpenAI will rent you an API — or in this case we rented through Microsoft — in order to get data privacy. But it’s OpenAI’s machinery rented through Microsoft…. That’s just an API, that’s not a feature. You need layers of software to go from the API to the feature, so that’s where those thousands of lines of code came in.”

How has AI assisted you internally, and how has it assisted your clients?

“The two actually end up coupling together, because we have a number of workflows we do on behalf of our providers that are part of our value prop to them. But in turn, if we can automate parts of them, it turns into internal savings and efficiencies.

“For example, the fax processing. The healthcare system still runs dramatically on faxes. It’s kind of frightening, but true. Athenahealth receives in the vicinity of 160 million to 170 million faxes on behalf of our clinical providers. The volume keeps increasing, and those are just the inbound ones. Somebody has to deal with those. They [the electronic faxes] have to get attached to patient charts. We have to know who the right patient is to go to. What is the paperwork about? All that has to be done before that information becomes even moderately useful to physicians.

“Now, if Athena was not doing that work, physicians would be doing it. So, Athena is doing that work on behalf of physicians. Historically, we did that through outsourcing and human effort, but at the scale of documentation we’re talking about, even if you outsource it, that becomes a sizeable expense.

“So, beginning seven-and-a-half years ago, our data science team began building out a natural language processing system that could do a lot of the information extraction from those fax documents and do a lot of that automated filing without human intervention. We use machine learning to build natural language processing capabilities that can read the faxes and extract the information we need from them.”
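Lane doesn’t describe the internals of that pipeline, but a baseline version of this kind of document routing can be sketched with off-the-shelf tools. The snippet below is a hypothetical Python illustration using scikit-learn, with made-up fax text and labels, of classifying OCR’d fax text by document type before any patient matching or data extraction happens; it is not Athenahealth’s code.

```python
# Hypothetical sketch of routing inbound fax text by document type,
# in the spirit of the NLP pipeline described above (not Athenahealth's system).
# Assumes fax images have already been OCR'd into plain text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up labeled examples: OCR'd fax text -> document type.
train_texts = [
    "Referral for orthopedic consultation, patient DOB 01/02/1960 ...",
    "Laboratory results: hemoglobin A1c 7.2% ...",
    "Prior authorization request for MRI of the lumbar spine ...",
]
train_labels = ["referral", "lab_result", "prior_auth"]

# TF-IDF features feeding a linear classifier: a common baseline for
# deciding what a document is before any information extraction happens.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(train_texts, train_labels)

new_fax = "Prior authorization request for cataract surgery, patient ID 12345 ..."
print(classifier.predict([new_fax])[0])  # most likely "prior_auth"
```

A production system would add patient matching, entity extraction, and confidence thresholds that route uncertain documents to human reviewers; this sketch covers only the routing step.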
So, you were doing this well before ChatGPT — using natural language processing?

“AI has been around a while. You can trace its origins to at least 1950 with the paper by Turing. It’s been a pretty rich field for at least the last quarter of the 20th century, and it’s only grown in prominence since then with the advent of big data and companies like Google, Netflix, and Amazon realizing the considerable value of large data coupled to machine-learning capabilities.

“So, ChatGPT dropped a little over a year ago now. And it made big reverberations in the media. It definitely represented a step forward in the AI capability space and can do things we couldn’t do before. That said, there’s plenty of AI technology that’s been around for decades, has been continually getting better over that time, and is still incredibly valuable that isn’t ChatGPT.”

What changed when ChatGPT and generative AI came along?

“The big change has been what are we going to do with those capabilities, and how can we use them to improve our customers’ lives, how can we use them to improve our capabilities and workflows? It’s been a lot of focused work on trying to figure out those things and trying to bring up demonstrations, and capabilities and alpha-level product features we can then put in the marketplace…. We can then see whether they’re useful to our users and their staffs.”

What is genAI’s greatest potential? One capability I’ve heard from others is its ability to augment software development. Are you seeing that?

“I think we’re still discovering that. We’ve absolutely looked at it as a software development assistance tool, and we’re quite excited by its capabilities in that space — especially when offered through some really well-created user interfaces, such as Copilot, Codium and a few others playing in that space. They’re essentially remarketers of the underlying AI capability, but their value is in integration with powerful software development toolsets, like VS Code and so on.

“So, yes, there definitely seems to be value there. That’s just one example of the space of using generative AI to assist in creating content — draft content for human review and revision, to give people a starting place for content.

“The second category where I think it’s very useful in our world is summarization. One challenge physicians, nurses, and their staff face is an overwhelming tidal wave of information. I mentioned the hundreds of millions of fax documents a year, but that’s just a fraction of what comes through electronically. Then there’s the patient data itself; the individual patient charts. Every time we go to see a primary care physician, it produces additional records about us.

“Our individual primary care physicians may know that information well, but as soon as you go see a specialist, they have to review 20 years of case material. They can’t spend two hours reviewing my case history. They need to get it in 10 minutes or something like that. So being able to digest 20 years of material down to 10 minutes, that’s a capability that large language models do seem to offer, and we’re very excited about it.”
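Lane doesn’t spell out how that digestion works mechanically. One common pattern, sketched hypothetically below with the openai Python package (the model name, prompts, and data handling are placeholders, not Athenahealth’s implementation), is to split a long record into chunks, summarize each chunk, and then summarize the summaries.

```python
# Hypothetical sketch of condensing a long imported chart: a map step that
# summarizes each chunk, then a reduce step over the partial summaries.
# Model name and prompts are placeholders; real deployments would route
# through a compliant (e.g., Azure-hosted) endpoint with appropriate safeguards.
from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()

def summarize(text: str, instruction: str, model: str = "gpt-4o-mini") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a clinical summarization assistant."},
            {"role": "user", "content": f"{instruction}\n\n{text}"},
        ],
    )
    return resp.choices[0].message.content

def summarize_record(full_record: str, chunk_chars: int = 12_000) -> str:
    # Map step: summarize each chunk of the imported chart.
    chunks = [full_record[i:i + chunk_chars] for i in range(0, len(full_record), chunk_chars)]
    partials = [summarize(c, "Summarize the clinically relevant facts in this excerpt.") for c in chunks]
    # Reduce step: condense the partial summaries into one brief for the provider.
    return summarize(
        "\n\n".join(partials),
        "Combine these notes into a concise summary a specialist can read in minutes.",
    )
```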
Is there anything live that you’ve deployed that’s aimed at addressing the deluge of electronic data payors and providers are dealing with?

“We definitely have some things in pilot. I can thumbnail at least three things that are going out in pilot to a limited number of customers now. One is the summarization tool. Specifically, summarizing patient records that we receive from other EHRs [electronic health records]. We import your patient record from some other EHR in order to make it available to one of the physicians in our network; how do we digest it so that the provider can read it easily?

“Another capability is generating novel content … specifically when a patient sends a question or request to a provider’s office, usually through a patient portal. For example, a patient may ask if they can have an appointment next week. It turns out the responses that providers create take up an enormous amount of time. If we can draft those ahead of time and say, here’s some text that’s a good starting point for you, that can shave some time off in the same way drafting computer code can shave time off for developers.

“There’s also a question involving prior authorizations. We help identify when there’s missing information in prior authorizations so that providers can fix it at the point when they’re creating the request, rather than having it recycle through the system just to get rejected because of missing information. That introduces more work for the provider and a delay for the patient. We can use the genAI systems to catch when there’s missing information right at the point of creation and get it corrected then, rather than cycling it through the system.”

How does the genAI identify missing information?

“It uses the few-shot learning capability of large language models. You can show LLMs examples and they can emulate them. In this case, what we do … is they know what the prior authorization being requested is. Someone may ask for authorization to do cataract surgery, for example. So we have many records of those surgeries. We pull those records, put them in front of an [LLM], and say this is what it is supposed to look like and this is what the prior authorization that the physician produced looks like. Tell us where it’s off or where there are gaps. It will come back with, ‘Normally, people provide this information that you don’t have in this case.’”

Were you concerned Athenahealth’s sensitive healthcare data would be used to train other LLMs outside of your organization, thereby making it public?

“That is absolutely a concern when you’re dealing with healthcare data — that is the highest tier of sensitive data. So, we have to be very careful with how we protect our data and how we use our data. We had infosec involved and legal involved and procurement involved — all these people were involved in evaluating the contracts we had with Microsoft and OpenAI. What are the data pathways? What [are] the data security guardrails in place? We invested quite a bit of work to ensure the data was going to be secure and not used to train someone else’s model that would then be released in the wild.

“Along with infosec work, there was a lot of contractual work that was done to ensure that we’re consuming OpenAI’s systems and not feeding OpenAI’s systems.”

Aren’t you still using a cloud service, though?

“Athena has been a cloud-based company from day one. We built most of our infrastructure on private clouds — data centers we build out — but increasingly we’ve used third-party clouds. We have a substantial footprint in AWS and a non-negligible footprint in Microsoft Azure. Azure is the gateway to OpenAI in a data-private way that we required.

“So, the AI machinery is running in the Azure cloud, we’re accessing it through an API in the Azure cloud, and then it interoperates with the rest of our machinery, which is in private data centers and AWS.”
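Putting those two answers together: a minimal, hypothetical sketch of the few-shot prior-authorization check Lane describes, called through an Azure OpenAI endpoint as she outlines, might look like the following. The deployment name, environment variables, and example records are placeholders, not details from Athenahealth.

```python
# Hypothetical sketch of a few-shot "what's missing?" check for a prior
# authorization draft, accessed through Azure OpenAI. Deployment name,
# endpoint, and example records are placeholders.
import os
from openai import AzureOpenAI  # assumes the openai Python package, v1+

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def find_gaps(procedure: str, past_examples: list[str], draft_request: str) -> str:
    # Few-shot prompt: show the model prior approved requests for the same
    # procedure, then ask where the new draft falls short.
    examples = "\n\n".join(f"Example approved request:\n{e}" for e in past_examples)
    prompt = (
        f"Prior authorization requests for {procedure} normally look like these:\n\n"
        f"{examples}\n\n"
        f"Here is a draft request:\n{draft_request}\n\n"
        "List any information that is normally provided but missing or inconsistent here."
    )
    resp = client.chat.completions.create(
        model="my-gpt4-deployment",  # placeholder Azure deployment name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```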
Were there significant hurdles? Even in a pilot of AI, what were the most difficult challenges?

“There are always hurdles whenever you roll out any new feature, much less new technology. We already talked about ensuring enough data privacy. Then we had to have contractual safety and the right to use features of this type in our products with the early adopters we were going to team with, who were interested in using them.

“Then on the technical side, there are a couple of additional hurdles that are unusual in the software development space — or at least unfamiliar to many software organizations. That’s because AI capabilities don’t behave the same way that a lot of software developers are used to. You have to go through extra work to ensure it’s behaving the way you want it to and to ensure it’s behaving safely.

“To give a concrete example, if you purchase a database from Oracle or whoever, you go pick up Postgres or rent RDS from Amazon, whatever. You know how it’s going to behave. You know if a particular query will give you a specific response. You know in very deterministic ways how it’s going to behave. When the database doesn’t behave that way, it’s a bug, and you report a bug.

“In the AI world, part of the power of it is that it can generate novel and unexpected responses. That means that you now have this odd problem of how do you assure yourself that the responses the AI is generating are the ‘correct’ responses?

“For example, if we’re going to take a patient record that we imported from another EHR and reduce it to a thumbnail summary for another provider, how are we sure it was summarized correctly? How do we ensure that [genAI] extracted the information it should have, and that it didn’t extract some irrelevant information or hallucinate something altogether? These are well-known dangers of large language models, and so you have to test for them.

“So we had to put testing processes into place and perform human review of a large number of cases, and we had to have patient safety reviews of the outputs of these large language models. We had to do bias evaluation to see if they were producing biases or hate speech and things like that. Those are processes that are unfamiliar to many development teams.”
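Those testing processes aren’t described in detail, but one small, automatable piece of such a pipeline can be illustrated hypothetically: flagging terms in a generated summary that never appear in the source record, as a crude first-pass hallucination screen before human and patient-safety review. The example below is illustrative only, with made-up data, and is not Athenahealth’s test suite.

```python
# Hypothetical illustration of one simple, automatable check of the kind
# described above: flag summary terms absent from the source record as a
# crude guard against hallucinated content. Real review would layer human
# review, patient-safety checks, and bias evaluation on top of this.
import re

def ungrounded_terms(source_record: str, summary: str, min_len: int = 4) -> set[str]:
    """Return words that appear in the summary but nowhere in the source record."""
    def tokens(text: str) -> set[str]:
        return set(re.findall(rf"[a-z]{{{min_len},}}", text.lower()))
    return tokens(summary) - tokens(source_record)

record = "Patient has type 2 diabetes managed with metformin. No known drug allergies."
summary = "Type 2 diabetes on metformin and lisinopril; penicillin allergy."

suspect = ungrounded_terms(record, summary)
if suspect:
    # Terms such as "lisinopril" or "penicillin" would be routed for human review.
    print("Flag for review:", sorted(suspect))
```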
Do you expect to see any ROI any time soon? How would you calculate that based on what you’ve deployed?

“I think that’s one of the brave new world questions we’re all struggling with. Many people in the industry are asking that question at the moment. The capabilities of generative AI are quite impressive. However, how those things turn into real value, I think, is what we’re all discovering. I think we’ll be asking those questions as we’re rolling out our early preview features to our alpha partners, and they can give us direct feedback on how valuable it is for them to have these new capabilities. We will be asking those questions in our strategy sessions and planning sessions.”

Who are your customers, primarily?

“Broadly, Athenahealth serves mostly the ambulatory physicians in the US, their nurses, and their clinical staff. We have a small hospital footprint and a significant footprint in primary care and a number of other ambulatory specialties like orthopedics and OB-GYN.”

“The customers who are getting the early preview are the ones interested in these capabilities. When you think about all the providers and all the networks of providers in the United States, some of them are more hungry for technological advancements than others. Some of them desire stability much more. Our customer support managers who work with customers daily have a very good sense of who wants what, and they were able to help us locate customers who were most excited about technological innovations and who’d be willing to partner with us to explore these new capabilities.”