This year, we get ready for the Apple of your AI

Feature
Jan 02, 2024 | 4 mins
Apple | Artificial Intelligence | Generative AI

Almost 1 billion generative AI-equipped smartphones are set to ship between now and 2027, according to Counterpoint. It looks like Apple will make some of them.


With almost 1 billion generative AI (genAI) equipped smartphones set to ship between now and 2027, according to Counterpoint, it’s increasingly likely that Apple will be in the mix with edge-based Apple GPT inside its phones.

The company has been slammed for arriving late to the genAI party. Arguably, that's true: even Microsoft Copilot (with built-in ChatGPT) is now available as an iPhone app.

Deliberate, intentional … and a bit slow

Apple has commented on the tech, pointing out that it already packs a lot of machine intelligence inside its devices and explaining plans to expand the AI within its products on a “deliberate” basis. The implication is that any mass-scale deployment of such profound tech should be purpose-driven to avoid unexpected consequences.

With those statements designed to buy it a little time, the company is quietly investing billions in R&D around the technology — including AI deals with news publishers.

It has held an internal AI summit and is alleged to be aiming to deliver a much smarter, much more AI-driven Siri along with tactical inclusion of genAI properties across its apps, all within an internal project dubbed “Ajax.”

R&D on the fast track

The company seems to be making progress. According to The Information’s Jeff Pu, Apple aims to bring this smarter Siri to market toward the end of the year — just in time to take a slice of the market growth Counterpoint envisions. (It now predicts about 100 million smartphones with on-device genAI will ship this year.)

The problem with genAI is that it is typically server based and needs huge amounts of memory and storage to run. Think of it this way: today, if you use Microsoft Copilot on your iPhone to run a genAI request, the task is offloaded to a server for the actual work, and the response is returned to the device.
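As a loose illustration of that offload pattern (the function names and payload format here are invented for the sketch, not Apple's or Microsoft's actual APIs), the round trip looks something like this:

```python
import json

# Sketch of the cloud-offload pattern described above: the phone
# serializes the prompt, a remote server does the heavy inference,
# and only the text response travels back to the device.

def build_request(prompt: str) -> bytes:
    """Package a genAI prompt for transport to an inference server."""
    return json.dumps({"prompt": prompt, "max_tokens": 256}).encode()

def mock_server_inference(payload: bytes) -> bytes:
    """Stand-in for the remote model; a real deployment would run the
    LLM here, far away from the device that asked the question."""
    prompt = json.loads(payload)["prompt"]
    return json.dumps({"text": f"echo: {prompt}"}).encode()

def ask(prompt: str) -> str:
    # Every round trip sends the prompt off-device -- the privacy and
    # energy cost that on-device inference avoids.
    reply = mock_server_inference(build_request(prompt))
    return json.loads(reply)["text"]

print(ask("What is on my calendar?"))  # prints "echo: What is on my calendar?"
```

On-device inference collapses the two middle steps into a local call, so the prompt never leaves the phone.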

That’s not ideal for three key reasons: privacy, security, and environmental impact. Apple’s focus on all three means the company surely wants to be able to run requests natively on the edge device, no server required.

What Apple has done

Apple’s R&D teams have taken a big step toward that goal, announcing a breakthrough that promises to let iPhones and other Apple devices run computationally and memory-intensive large language models (LLMs) on the device itself.

“Our work not only provides a solution to a current computational bottleneck, but also sets a precedent for future research,” the researchers said. “We believe as LLMs continue to grow in size and complexity, approaches like this work will be essential for harnessing their full potential in a wide range of devices and applications.”
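The core idea behind that research is to keep only the weights a given inference step actually touches in fast but scarce DRAM, streaming the rest from the much larger flash storage on demand. A heavily simplified caricature of that caching scheme, with invented names and sizes:

```python
from collections import OrderedDict

# Illustrative sketch only: model weights live in large, slow "flash"
# storage; the small subset a forward pass touches is paged into a
# limited DRAM budget, with recently used chunks kept resident.

FLASH = {f"layer{i}": [float(i)] * 4 for i in range(100)}  # cold storage
DRAM_BUDGET = 8  # only 8 weight chunks fit in fast memory

class WeightCache:
    def __init__(self, budget: int):
        self.budget = budget
        self.resident = OrderedDict()  # chunk name -> weights held in DRAM
        self.flash_reads = 0

    def get(self, name: str):
        if name in self.resident:              # hit: already in DRAM
            self.resident.move_to_end(name)
            return self.resident[name]
        self.flash_reads += 1                  # miss: page in from flash
        if len(self.resident) >= self.budget:
            self.resident.popitem(last=False)  # evict least recently used
        self.resident[name] = FLASH[name]
        return self.resident[name]

cache = WeightCache(DRAM_BUDGET)
# A sparse forward pass touches the same few hot chunks repeatedly...
for _ in range(10):
    for name in ("layer1", "layer2", "layer3"):
        cache.get(name)
print(cache.flash_reads)  # prints 3: each hot chunk is read from flash once
```

The real system is far more sophisticated (it exploits sparsity and reads weights in flash-friendly chunks), but the payoff is the same: most accesses are served from fast memory, so a model bigger than DRAM can still run at usable speed.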

It feels like internal development is accelerating.

Apple’s machine learning (ML) teams also recently released a new ML framework for Apple Silicon: MLX, or ML Explore. That was followed by “Ferret,” an open-source multimodal LLM developed with Cornell University that can identify and describe specific regions within images. Take a look at this post to understand the implications around that.

Apple’s R&D teams have also come up with a model that generates avatars from video 100 times faster than other systems.

GenAI at the edge is the Apple of your AI

In other words, Apple is building helpful task-based LLM tools that can run natively on the device.

None of this is unexpected. As AI advances more deeply into society, Apple’s playbook will not be to put a huge server on every street corner to furnish all the requested information; it makes more sense to equip its devices with on-device AI. And if Apple’s teams can deliver more success on that task than they currently enjoy with 5G modem development, they have a chance to ace the industry.

Maybe.

At the same time, competing products that do use server-based services are moving ahead, with their Siri equivalents delivering nuanced responses, generating images, and more. That’s not a good look for a company that briefly led in on-device AI, which is why Apple is working so hard — and why most industry watchers expect the company to deliver some of the first results of this mammoth research effort at WWDC 2024.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.