Lucas Mearian
Senior Reporter

Accenture chief software engineer: genAI is critical to the future of app development

feature
May 23, 2024 | 19 mins
Developer, Emerging Technology, Engineer

Accenture has invested more than $1 billion in generative AI tech to help it and its clients automate routine tasks and offer new ideas. Even so, Adam Burden, Accenture’s chief software engineer, doesn’t yet fully trust the technology because it’s not fully baked.

Adam Burden
Credit: Accenture

Multinational professional services firm Accenture has already spent one-third of a planned $3 billion investment in generative AI (genAI) technology to help it achieve internal productivity gains and produce client products more efficiently. The results have been nothing short of remarkable.

After using GitHub Copilot, 90% of developers said they felt more fulfilled in their jobs, and 95% said they enjoyed coding more with Copilot’s help. By offloading routine tasks, the tool also made them 40% more productive.

The return on investment has been impressive in other ways. In the first six months of this fiscal year, Accenture has secured $1 billion in genAI bookings from clients, over half of which came in during Q2. The company, which employs about 150,000 engineers, is also aiming to double its AI workforce from 40,000 today to 80,000 employees.

After recently completing a stint as Accenture’s head of technology for North America, Adam Burden has taken on a number of other roles, including chief software engineer and global lead for innovation, responsible for all lab-related R&D projects. In short, Burden is responsible for all of Accenture’s incubating business projects, its venture investments, and its innovation advisory projects, as well as the workforce associated with them.

Burden spoke with Computerworld about the challenges of using genAI tools, along with some of its most unexpected benefits.

Adam Burden, Accenture’s chief software engineer. Credit: Accenture

How has genAI affected your job and the jobs of software engineers and others who work for you? “It’s already virally infecting us in many ways. It’s become part of people’s natural workflow processes where we have generative AI systems that are built into teams. For example, [there’s] one called Amethyst that I use all the time to help me better locate Accenture knowledge sources and resources, and ask questions about methodology. That one’s become pretty popular and is, I would say, in the mainstream.

“And then, you know, when you look at the average individual’s job, [they’re] using generative AI now to help…write content. They’ve embraced a product in our marketing department called Writer, which happens to be a ventures investment of ours as well. And it’s really become kind of a de facto standard to help people write a first draft of content, and help them actually write better.

“When you dive in on software engineering, I would say that true software engineers, coders, and developers are among the most impacted, or positively impacted, by generative AI in how they do their work every day.

“Depending upon how you count the numbers, we have somewhere in the 150,000-to-200,000 range of software engineers. That group uses [Copilot] in a lot of different ways today. Of course, we also have our own internal tools that we use, which are now fully embedded with generative AI. I have a client project team, for example, with 1,600 people on it that are all using generative AI every day and…they actually deliver their projects as well.”

What are your software engineers using generative AI for primarily? “It’s interesting. It’s beyond the software development. There are a lot of things that make people nervous about the software development piece itself and the coding. [AI] is good for inspiration, but you’re going to want to check exactly what’s generated really, really well.

“On the other hand, there are the other pieces of software engineering, and quite frankly this is very much a Pareto principle thing: like 80% of the work that you do has nothing to do with writing the code.

“I would tell you that our software engineers — and I’ve seen some of the data, too — say the code that they actually get out of Copilot and other [genAI tools] is 70% to 80% usable — in some cases, even higher. And, they check it very thoroughly before they use it. This is primarily, I would say, more on internal projects than it is for clients. But they tell us that they’re 40% to 50% more productive with generative AI in the things that they do.”

What’s been your own experience with using AI? “I’ve used it as the chief software engineer for various purposes. For example, I took what is basically an ecommerce application, an open source one called SimplCommerce (you can get it on GitHub). The AI basically takes and reads in all of the source code, and we’re talking a couple hundred thousand lines of C# code. And the goal was to discover if genAI could help us better maintain applications. What I found was its ability to help me more rapidly take over an application that I wasn’t familiar with was remarkable. I asked it to help me try and find a bug in the code, and it found it right away. But that wasn’t the really cool thing.

“The really cool thing was that I do a lot of pre-engineering work, among other things, and I wondered what it would be like if I asked [the genAI] to do something new. So, I was actually demoing [Copilot] to people in a conference room, and I said ‘OK, I’m going to do the thing that you should never do when you’re showing a demo: I’m going to ask you guys to tell me what you want the demo to do.’ I said, ‘Somebody tell me a feature enhancement that this ecommerce application doesn’t currently do.’ One person raised their hand and said, ‘I want you to add a wish list feature.’ I actually didn’t know how it was going to do that, but I started thinking to myself, I know what’s in a wish list; you want to be able to add things to it, delete stuff from it, and that type of thing. You know what it did that really shocked me? It started putting stuff in the user stories I hadn’t thought about. And at the end of the day, it actually built a better product than I would have standalone…. Because it was making suggestions, like: ‘You need to add a feature to post your wish list on social media so that you can get more presence.’ I thought that was actually a really good idea.

“So, my point here is that I think that these tools will actually make us better. They give me superpowers, to a degree, to actually be a better software engineer. And this is a microcosm of the experience that our people are now having in the space.”

What are the main reasons you’re using genAI? To assist with code generation? Update software? Create new apps? Create user stories? “I’d say that the main purposes we’re using it for today are to help us with the user stories as well as the post-code-generation piece. We don’t entirely turn it loose on the code generation piece because it’s not quite ready for that yet. But I’d say the pre-software-development and post-software-development parts of writing code are definitely a big piece of it. But look, in doing that, I’m tackling the 80% of the work that’s out there and I’m getting a ton of benefit as a result.”

Why don’t you fully trust genAI yet? “The primary concern that people have is around the security aspects of it: what was the model that you’re building from actually trained on? So, what we’ve done is we’ve started to build our own small language models that have a very narrow code base. If you’re using a public model, you just don’t know what kind of provenance is in it. Where we are using some public code generation models, or even Microsoft Copilot and others, we have a very prescriptive process of security reviews and other guardrails for when we do actually generate code from them. I think that’ll get better over time.

“As you get more enterprise-ready software development engines, I think tools like the one we’ve seen recently from folks like Devin, and another one from Poolside and others, are going to have more closed software engineering libraries, so you’ll have more trust and faith in what they’re actually trained on.

“I can’t point to it [Copilot-generated software] and say this particular code that it generated has a big security flaw in it because it was taught against another library. We haven’t exactly seen that scenario yet, but we have seen some weaker code examples or even some bad algorithms that don’t work as well, which is why we continue to put the kind of scrutiny on it that we do today.

“We work in literally hundreds of programming languages because our clients’ legacy systems are written in those. And, if you want to use, you know, genAI for literally Pascal generation, Fortran, those types of things, it’s not quite as good at that as it is with more modern languages where there are more ample software libraries available, like Java, for example.”

Do you trust genAI enough to allow it to be used to empower a citizen software workforce where they can create their own business applications? “I think that the no-code, low-code providers that are out there, like Mendix and others, have done a good job with that, and they’re starting to combine genAI features into their product sets to actually help those citizen developers work faster.

“…I haven’t yet seen us take and hand genAI over to [the business side] and say, ‘Here’s a code generation engine and a prompt for someone that’s not trained as a software engineer.’ Because, frankly, they would have trouble building software that meets your enterprise standards and that can follow the different architecture patterns and models that are important to your enterprise to fit into the business. Will those models get better and those tools get better? One hundred percent, completely. I see that future out there…, where the no-code, low-code kind of toolset combines with what we’re seeing happen with generative AI and software engineering.”

What guardrails have you put in place to ensure AI doesn’t cause security, privacy or copyright infringement problems? “We definitely have checks in our software check-in process where the right attributions and other things are taking place, and we provide provenance for code that’s actually been written. So we can track and maintain where it comes from. And of course, we maintain all the security aspects of it. And like I said, we don’t really allow unfiltered code to be generated. We can allow [genAI] to be inspirational and to help us accelerate things, but in terms of just putting it directly into production systems or otherwise, that’s definitely not something that we’re fully engaged in at this point.

“We’re testing that, and I think we’ll eventually get there. I think it’s going to take some time for us to feel very comfortable about that, because you never know, for example, what open source software licenses you’re inheriting and what this is actually built on. So, you have to be very careful about it. Until we get more private small language models, if you will, which we’re actually building now, I think that people are going to exercise a lot of caution, especially at the code development phase.

“But, it’s great for inspiration if you’re really struggling with solving a problem, like what’s the most efficient algorithm to do X, Y, and Z? It is a great way to actually get some of those things done. Recently, we were testing a quantum computer and we needed an algorithm for the traveling salesman problem, a very classic quantum-type problem. But we wanted to be able to solve it on the classical architecture we used as well. We used [genAI] to generate that, and it was awesome. It was perfect; we could see that it ran really efficiently. So those types of scenarios, I think, are fair game right now.”
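For readers who want a concrete sense of what a classical version of that kind of algorithm can look like, below is a minimal sketch of a nearest-neighbor heuristic for the traveling salesman problem, written in Python. The city coordinates and function names are purely illustrative; Burden did not share the code Accenture’s tools actually generated.

    # Minimal nearest-neighbor heuristic for the traveling salesman problem.
    # Illustrative only: not the code Accenture generated.
    import math

    def tour_length(cities, order):
        """Total round-trip distance when visiting cities in the given order."""
        return sum(
            math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
            for i in range(len(order))
        )

    def nearest_neighbor_tour(cities, start=0):
        """Greedy tour: always hop to the closest city not yet visited."""
        unvisited = set(range(len(cities))) - {start}
        tour = [start]
        while unvisited:
            here = cities[tour[-1]]
            nearest = min(unvisited, key=lambda i: math.dist(here, cities[i]))
            tour.append(nearest)
            unvisited.remove(nearest)
        return tour

    if __name__ == "__main__":
        # Hypothetical (x, y) coordinates for a handful of stops.
        cities = [(0, 0), (2, 3), (5, 4), (1, 7), (6, 1), (3, 2)]
        tour = nearest_neighbor_tour(cities)
        print("tour:", tour, "length:", round(tour_length(cities, tour), 2))

A greedy heuristic like this doesn’t guarantee the shortest possible tour, but it runs efficiently on conventional hardware, which is the kind of classical approach Burden describes.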

What kinds of increases in productivity and efficiencies are you seeing? “Everybody’s mileage varies on this, but I would say for the demographics that are really embracing this for the pre-code development and post-code development, they’re seeing somewhere around 40% on average. But it depends on the legacy environment too and what it’s actually learned from. So, if there’s no documentation for the code or the application or something like that, of course your productivity is going to be a lot lower. But if it’s a relatively rich environment with a good track record and history, then it does increase productivity a great deal.

“I’ll give you one other thing, though, that’s kind of surprising to us. We’ve used this [genAI] in SAP and for other packaged software too, and we’re actually finding some real benefit from using it with packaged systems. So, it’s not just custom software engineering that’s providing benefit; it’s the packaged systems, too. Is that 40%? Generally not. It’s a little bit lower than that. But it’s definitely giving us a boost where we’re able to apply it.”

When you say you use genAI in “packaged software,” what does that mean? “Oftentimes, like with SAP, it’s not all that different than software engineering. You’re doing a lot of work besides the configuration of the SAP system. You’re doing things like KDDs, which are key decision documents. You still create test scripts and other things. They’re using it for that type of thing and seeing a lot of benefit.”

How have you gone about educating your workforce on AI to ensure it’s being used safely and responsibly? “Massive steps. There are two different groups that we’re trying to tackle here, right? There are the ones who will use it and the ones who will create AI. We’ve made some commitments to double our AI workforce from 40,000 to 80,000.

“We made a $3 billion investment in AI. That’s around people who will build these systems. And then for all the people who will use them, so my software engineers and others, this is a huge initiative that we have right now. We actually have a system internally to get people more conversational with genAI solutions. We call it TQ, or your technology quotient, and we’ve had hundreds of thousands of employees take the TQ training class on generative AI. We have many, many others now that are also fully engaged in deeper-dive classes around how to use generative AI and the different systems it’s running on. So, it’s a massive effort at Accenture to reskill our workforce. We say this a lot: we think that you need to make more investment in the people than in the technology.

“There is no AI workforce to go out and hire from. It doesn’t exist out there. So you have to create your own, and for enterprises, we absolutely tell them that this is something they’ve got to focus on and place a huge amount of attention on.”

How do you get your message about training needs out and how do you get employees to engage in that training? “It’s a top-down thing for us. Our CEO has made it a huge priority for our business to be ready for the era of genAI. It’s a key pillar of the way that we’ll approach delivering services to clients in the future. And so it’s actually embedded in a lot of our training materials now. But you also hear it from the top down, through messaging from virtually all of our leadership channels, that taking these courses is a priority for our employees. And, of course, we have gamification and other things that help us ensure we’re getting the right penetration across the organization to do this.

“There are lots of different ways to tackle that. But we like to believe, and we find, that our workforce is usually pretty eager to reinvent themselves regularly. And they’re embracing it pretty readily. I think other clients or other circumstances need different types of solutions to incent their workforce to go through this learning process. And it’s big. If your job is going to change by having an AI agent work with you as a customer care professional or other things, that is a significant adjustment to the way you currently perform your job.”

Do you feel it’s necessary to clean up your data repositories before rolling out an AI solution? Or can you work on that as you pilot these solutions? “If you’re talking about using it as a tool, like how [some workers] use Writer and other things, you can use those as you go. Now, if you’re trying to create an enterprise knowledge base that you’ll load data into and use for Q&A-type responses and other things, then I think you’ve got to have a clean data foundation first. It is definitely one of the principles that we’ve observed. If you haven’t invested in building that, it’s definitely a prerequisite for you to embrace using generative AI on more of an enterprise scale.

“You could do some proofs of concept and pilots for sure, but if you want to reinvent an entire value chain, for example inside of finance or even in the supply chain part of the business, you’ll find yourself really needing to go and make that investment.

“The truth is a lot of clients have actually gone through this. They’ve invested a lot in the last couple of years in cleaning their data and data lakes and building better data architecture and data foundations. So, their level of readiness is good. That doesn’t mean there aren’t others that are behind. But I do tell people who are behind, the good news is you can actually use genAI to help you cleanse your data. And that wasn’t a tool that was available a few years ago to people who were doing it in a less efficient way. So maybe you’re actually going to end up cleansing your data and improving your environment in a faster, more efficient, and perhaps even better way than your predecessors did. That’s the glass-half-full way of looking at it.”

How were you using genAI to clean your data? “You can use generative AI to read in large volumes of data, for example, and help you identify duplicates and incorrectly formatted content, such as addresses and other things. And it’ll actually provide you recommendations for what the cleaned data would look like.

“And if you use your prompt engineers in the right way, where they’re structuring it to let it know what good data looks like and what you expect the output to look like, it can actually output it in a comma-delimited format so you can upload it right into another data model with nice, cleansed data. We also find that it’s not bad for data enrichment as well. If you want to, for example, move to a nine-digit ZIP code for everybody, it can pretty easily go in and just apply that to all of your data without any fancy tools or other third-party products required.

“Make no mistake, genAI is definitely a great tool to help you with data cleansing and building a better data foundation.”
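For teams that want to try the pattern Burden describes, here is a minimal sketch in Python. It assumes a hypothetical call_model callable, supplied by the caller, that wraps whatever genAI endpoint an organization actually uses; the prompt wording, field names, and CSV layout are illustrative, not Accenture’s internal tooling.

    # Illustrative sketch of prompt-driven data cleansing: define what good
    # records look like, hand the model the messy rows, and ask for
    # comma-delimited output that can be loaded into another data model.
    import csv
    import io

    def build_cleansing_prompt(messy_rows):
        """Describe the target format, then append the records to clean."""
        example = "name,street,city,state,zip\nJane Doe,1 Main St,Austin,TX,78701-4321"
        return (
            "Clean the customer records below. Remove duplicates, standardize "
            "street abbreviations, and extend ZIP codes to nine digits where "
            "possible. Return only CSV in exactly this format:\n"
            + example
            + "\n\nRecords:\n"
            + "\n".join(messy_rows)
        )

    def cleanse(messy_rows, call_model):
        """call_model is any callable that sends a prompt string to your
        organization's genAI endpoint and returns its text reply."""
        reply = call_model(build_cleansing_prompt(messy_rows))
        return list(csv.DictReader(io.StringIO(reply)))

The design choice Burden highlights sits in the prompt itself: showing the model what clean data looks like and pinning down the output format is what lets the cleansed rows flow straight into another data model.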

Will AI be a job killer? “I think it’s going to create different jobs. I look at it this way: if you go back to the 1940s, the biggest occupation for women was switchboard operator in telephone exchanges. There are not a lot of switchboard operators left. But we have lots of employed people, and I think history is full of examples like that. We’re going to see different jobs.

“Of course, prompt engineer is one of the ones that’s most commonly cited, but there are lots of other things that will be there. The way that we look at this is that it’s going to automate a lot of the ordinary and allow people to be more extraordinary. We believe that most people will benefit from having augmentation of their capabilities, and they’ll get some superpowers out of it as well.

“So, does it kill jobs? I don’t think so. It’s going to make the jobs different and better and more fulfilling in the end. For me as a software engineer, I now get to work on much harder problems rather than the simpler, more boring and ordinary things I’d typically have to do. And we think that’s a great outcome for people, and we think that’s a great outcome for business as well.”