Inside the M4 Apple Neural Engine, Part 1: Reverse Engineering
114 points
by zdw
1 day ago
| 14 comments
| maderix.substack.com
msie
1 minute ago
I remember the good old days when Apple was desperate for developers and produced great documentation, and there were a lot of great 3rd-party books too. You can't just give out awards in the hope that someone will make that great app.
reply
giancarlostoro
9 minutes ago
Reverse engineering with AI is only going to get better. I have seen some crazy things friends of mine have done with Claude alone. Let's just say SaaS isn't the only industry that could one day suffer.
reply
LatencyKills
1 hour ago
I worked on the Xcode team for years and know the lengths Apple goes to in order to make this stuff difficult to figure out.

I just wanted to say that you've done an excellent job, and I'm looking forward to the 3rd installment.

reply
mayhemducks
9 minutes ago
I never realized just how much hardware engineering Apple dedicated to enabling people to type faster with their thumbs!
reply
Octoth0rpe
2 hours ago
Part 2 has benchmarks: https://maderix.substack.com/p/inside-the-m4-apple-neural-en...

6.6 FLOPS/W, plus the ability to completely turn off when not in use, so 0W at idle.
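For context on where a figure like that comes from: efficiency is just sustained throughput divided by average power draw. A minimal sketch of the arithmetic, using illustrative placeholder numbers rather than measurements from the article:

```python
# Back-of-the-envelope efficiency calculation: FLOPS per watt is
# sustained floating-point throughput divided by average power draw.
# The inputs below are illustrative placeholders, not measured values.

def flops_per_watt(total_flops: float, elapsed_s: float, avg_power_w: float) -> float:
    """Sustained throughput (FLOP/s) divided by average power (W)."""
    return (total_flops / elapsed_s) / avg_power_w

# Hypothetical workload: 1e12 floating-point operations finishing in
# 10 s while the package draws 15 W on average.
efficiency = flops_per_watt(total_flops=1e12, elapsed_s=10.0, avg_power_w=15.0)
print(f"{efficiency / 1e9:.2f} GFLOPS/W")  # prints "6.67 GFLOPS/W"
```

The "0W at idle" claim then matters because efficiency ratios like this only describe the chip while it's actively running a workload; a fixed idle floor would dominate real-world energy use for bursty inference.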

reply
love2read
2 hours ago
This article was clearly written by a human (and AI) but still has a few "LLMisms" such as:

- The key insight - [CoreML] doesn't XXX. It YYY.

With that being said, this is a highly informative article that I enjoyed thoroughly! :)

The article links to the author's own GitHub repo: https://github.com/maderix/ANE

reply
walthamstow
2 hours ago
We've got about a year before so many people are interacting with LLMs on a daily basis that their style starts to reverse-infect human speech and writing.
reply
pixl97
1 hour ago
That said, there were people who talked like this before LLMs; the style didn't develop out of whole cloth.
reply
DrScientist
48 minutes ago
Exactly. LLMs are mimics.

People seem to be going around pointing out that people talk like parrots, when in reality it's that parrots talk like people.

reply
pixl97
13 minutes ago
I mean, it's both.

Did you develop your own language at any point to describe the entire world? No; you, me, and society mimic what is around us.

Humans have the advantage, at least at this point, of being continuous learning devices, so we adapt and change with the language use around us.

reply
Angostura
2 hours ago
My honest take? You're probably right.
reply
sholladay
56 minutes ago
You are absolutely right.

Here is why you are correct:

- I see what you did there.

- You are always right.

reply
rafram
38 minutes ago
Also the Prior Art section, which has telltale repetition of useless verbs like "documenting," "providing insight into," and "confirming" on each line. This was definitely AI-written, at least in part.
reply
GeekyBear
53 minutes ago
The recent news is that Apple is supposedly replacing the Core ML framework with an updated version that will make it easier to integrate third-party LLMs into your apps.

> the company is also planning a few other software-based AI upgrades, including a new framework called Core AI. The idea is to replace the long-existing Core ML with something a bit more modern.

https://www.bloomberg.com/news/newsletters/2026-03-01/apple-...

reply
behnamoh
49 minutes ago
It's insane that the source code of the ANE is not available even to the MLX team; that's possibly one of the reasons Awni (the MLX project head) left Apple.
reply
mathisfun123
47 minutes ago
Tell me you've never worked at a hardware company without telling me lololol
reply
behnamoh
46 minutes ago
Yes, I haven't worked at a hardware company. Nothing to be ashamed of!
reply
mattlangston
2 hours ago
The future is bright for software engineers.

The big takeaway isn't reverse engineering the ANE per se, but what Manjeet could do with his software engineering skills when accelerated by AI.

This is a good example of the present state of software engineering. Not future state - present state.

reply
daoistmonk
1 hour ago
Tangential: is anyone doing something similar to expand the Linux support matrix on anything newer than the M2?
reply
kamranjon
1 hour ago
I have always wondered if the Neural Engine could be used for training. Pretty excited for part 3 of this to see if the juice is actually worth the squeeze.
reply
FL33TW00D
26 minutes ago
Unreadable Claude slop
reply
poszlem
3 hours ago
Genuine question, not trying to throw shade or anything, but are those cores actually useful with the state of Apple Intelligence being what it is?
reply
rahkiin
3 hours ago
They are also used by ML models that are deeply integrated into macOS and iOS without you knowing, like object and text detection in images.
reply
willis936
1 hour ago
I wish they would hook it up to the iOS keyboard (or wouldn't, if they already are).
reply
geerlingguy
2 hours ago
And help in Photos, Final Cut Pro, and other apps.
reply
dagmx
1 hour ago
If you strip away the branding, Apple has shipped, and continues to ship, a ton of algorithms that likely use the ANE, and end users can use CoreML to do the same.

Just some things people will likely take for granted that, IIRC, Apple has said use the ANE or would at least likely benefit from it: object recognition, subject extraction from images and video, content analysis, ARKit, spam detection, audio transcription.

reply
sroussey
53 minutes ago
Don’t forget Face ID and much of the image manipulation.

And while everyone else went to more powerful giant LLMs, Apple moved most of Siri from the cloud to your device. Though they do use both (which you can see when Siri corrects itself during transcription—you get the local Siri version corrected later by the cloud version).

reply
stetrain
2 hours ago
Apple's OSes run a lot of local ML models for many tasks that aren't branded as Apple Intelligence, and they have done so for many years now.
reply
esafak
2 hours ago
You can convert your own ML models to MLX to use them; Apple Intelligence is not the only application.
reply
nullstyle
2 hours ago
MLX does not run on the NPU AFAIK, just the GPU and CPU. You have to use CoreML to officially run code on the Neural Engine.
reply
mirsadm
2 hours ago
Even then, there is no transparency on how it decides what runs on the ANE/GPU etc.
reply
sroussey
52 minutes ago
Correct. OS-level stuff gets first priority, so you can’t count on using it.
reply
eleventyseven
1 hour ago
> Throughout this series, “we” refers to maderix (human) and Claude Opus 4.6 (by Anthropic) working as a pair. The reverse engineering, benchmarking, and training code were developed collaboratively

Sure, "collaboratively." Why would I ever trust a vibe coded analysis? How do I, a non expert in this niche, know that Opus isn't pulling a fast one on both of us? LLMs write convincing bullshit that even fools experts. Have you manually verified each fact in this piece? I doubt it. Thanks for the disclaimer, it saved me from having to read it.

reply
Anonbrit
56 minutes ago
Humans also write endless amounts of convincing bullshit, and have done so since time immemorial. False papers and faked results were already a growing scourge in academia before LLMs were a thing, and that's just counting the intentional fraud; the reproducibility crisis in science, especially medical and psychological science, affects even the best-designed and most well-intentioned studies.

Humans also make mistakes and assumptions while reverse engineering, so results like these will always need more engineers to go through them and test things.

reply
withinboredom
1 hour ago
Claude likes to hide bad benchmarks from you and only show the ones where you are clearly winning. You can even see some weird benchmarks in the article.
reply