Sure, 26B models on beefy desktop silicon are finally nipping at the heels of commercial APIs, but this is a mobile thread. On a phone with 8GB of RAM and passive cooling, your tokens per second (t/s) are going to fall off a cliff after the first minute of sustained compute
It’s likely a llama.cpp backend issue. On the Pixel, inference hits QNN or a well-optimized Vulkan path that distributes the SoC load properly. On the iPhone, everything is shoved through Metal, which maxes out the GPU immediately and causes instant overheating. Until Apple opens up low-level NPU access to third-party models, iPhones will just keep melting on long-context prompts
This article is all fluff, because real benchmarks would kill the marketing. If they mentioned that a 4B model on an iPhone 16 drains 15% of the battery on a single long prompt and triggers hard thermal throttling after 20 seconds, nobody would be clicking on headlines about "commercial viability" fwiw
I ran several Gemma 4 quants on my 24GB Mac mini, and with proper context-size tuning they're quick enough I guess, but I would really love to see them working well on an iPhone with 2-3GB of RAM...
I noticed the inference is routed through the GPU rather than the Apple Neural Engine. Google’s engineers likely gave up on trying to compile custom attention kernels for Apple’s proprietary tensor blocks iirc. While Metal is predictable and easy to port to, it drains the battery far faster than a dedicated NPU would. Until they rewrite the backend for the ANE, this is just a flashy tech demo rather than a production-ready tool
Are the Apple Neural Engines even a practical target for LLMs?
Maybe not strictly impossible, but the ANE was designed for an earlier, pre-LLM style of ML. Running LLMs on the ANE (e.g. via Core ML) is possible in theory, but the substantial model conversion and custom hardware tuning required make for a high hurdle in practice. The LLM ecosystem standardized on CPU/GPU execution, and to date at least seems unwilling to devote resources to the ANE. Even Apple's own MLX framework has no ANE support. There are models the ANE runs well, but LLMs do not seem to be among them.
It is possible, but it requires a very specific model design. As this reverse-engineering effort showed [0], "The ANE is not a GPU. It’s not a CPU. It’s a graph execution engine." Building for it requires a dedicated Core ML pipeline [1].
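The "graph execution engine" part is the crux: unlike a GPU, you don't dispatch kernels on the fly; the hardware wants the whole operation graph declared and compiled up front. A toy pure-Python illustration of that difference (purely conceptual, nothing to do with the real ANE compiler):

```python
# Eager style (GPU-like): each op is dispatched as the code runs.
def eager(x):
    x = x * 2
    x = x + 1
    return x

# Plan style (ANE-like): the full graph is declared first, "compiled"
# once as a unit, and only then executed.
class Plan:
    def __init__(self):
        self.ops = []

    def add(self, fn):
        self.ops.append(fn)
        return self

    def compile(self):
        # a real compiler would fuse ops, pick tile sizes, and
        # validate shapes here -- which is why it needs the whole graph
        ops = tuple(self.ops)

        def run(x):
            for op in ops:
                x = op(x)
            return x

        return run

run = Plan().add(lambda x: x * 2).add(lambda x: x + 1).compile()
print(run(3), eager(3))  # both compute 2*3 + 1 = 7
```

The point being: anything the plan can't express ahead of time (dynamic shapes, data-dependent control flow, the kind of thing LLM runtimes lean on) simply doesn't fit this execution model.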
That's the best "what is ANE, really?" investigation / explanation I've seen. Directly lays out why LLMs aren't an ideal fit, its "convolution engine" architecture, the need for feeding ANE deep operation sequence plans / graphs (and the right data sizes) to get full performance, the fanciful nature of Apple's performance claims (~2x actually achievable, natch), and the (superior!) hard power gating... just _oodles_ of insight.
More info on the specific design choices needed to run models here [1]. I mean, it is possible (Apple themselves did it in [2]), but it's also not as general-purpose or flexible as a GPU.
It will be interesting to see how things change in a couple of months at WWDC, when Apple is said to be replacing their decade-old Core ML framework with something better geared for modern LLMs.
> A new report says that Apple will replace Core ML with a modernized Core AI framework at WWDC, helping developers better leverage modern AI capabilities with their apps in iOS 27.
ANE is OK, but it pretty much needs you to pack your single vector into a batch of at least 128. (Draw Things recently shipped ANE support inside our custom inference stack, without any private APIs.) For token generation that is not ideal, unless you are using a drafter so there are more tokens to process in one inference step.
It is an interesting area to explore, and yes, this is a tech demo. There is a long way to go to production-ready, but I am more optimistic now than a few months back (with Flash-MoE, DFlash, and some tricks I have).
Running background processes might motivate the use of the NPU more, but it doesn't exactly feel like a pressing need. Actively listening to you 24/7 and analyzing the data isn't a use case I'm eager to explore, given the lack of control we have over our own devices.
> Google’s engineers likely gave up on trying to compile custom attention kernels for Apple’s proprietary tensor blocks iirc.
The AI Edge Gallery app on Android (which is the officially recommended way to try out Gemma on phones) uses the GPU (it lacks NPU support) even on first-party Pixel phones. So it's less "they didn't want to interface with Apple's proprietary tensor blocks" and more that they just didn't give a f in general. A truly baffling decision.
Huh, I didn't see those instructions when I tried it last week. I must not have looked closely enough. I do remember it not having NPU support (confirmed by other people) back at the Gemma 3 launch a while ago.
The Edge Gallery app on Android has NPU support, but it requires a beta release of AICore, so I'm sure the devs are working on similar support for Apple devices too.
You're not just delivering expertise; you're stepping into a situation where incentives are already misaligned, expectations are fuzzy, and there's often a cashflow problem hiding somewhere
It doesn't have to be messy. A lot of the messiness is self-inflicted by contractors who are desperate for work and would be better off just getting a regular employee job instead.
Makes total sense. Consumer UX relies on pure determinism. When I click "Save", I know exactly what's going to happen. When I type a prompt into an "AI agent", I'm basically playing roulette every single time. Until we figure out how to wrap these probabilistic models inside rigid, predictable UX patterns, the mainstream crowd is going to keep treating AI like an annoying toy instead of an actual tool
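A toy sketch of what "wrapping a probabilistic model inside rigid, predictable UX patterns" could look like in practice (`flaky_model` is a stub standing in for an LLM; the action schema is made up for illustration):

```python
import json
import random

ALLOWED_ACTIONS = {"save", "delete", "cancel"}

def flaky_model(prompt):
    """Stub LLM: sometimes chatty free text, sometimes clean JSON."""
    if random.random() < 0.5:
        return "Sure! Here's what I'd do: save the file :)"  # unusable output
    return '{"action": "save"}'

def predictable_action(prompt, retries=10):
    """Rigid wrapper: the UI only ever sees a validated action name
    or a deterministic fallback -- never raw model output."""
    for _ in range(retries):
        raw = flaky_model(prompt)
        try:
            action = json.loads(raw)["action"]
        except (json.JSONDecodeError, KeyError, TypeError):
            continue  # reject malformed output and silently re-ask
        if action in ALLOWED_ACTIONS:
            return action
    return "cancel"  # safe default the UX can always count on

print(predictable_action("save my draft"))
```

The roulette still happens, but it happens behind a contract: the "Save" button's behavior is drawn from a closed set, which is the determinism the mainstream crowd is actually asking for.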