I just wanted to say that you’ve done an excellent job, and I’m looking forward to the 3rd installment.
“Difficult” because of lack of documentation? Or difficult because of purposefully obfuscating things?
Apple are really the only major OS company _without_ widespread use of a first-party obfuscator; Microsoft have WarBird and Google have PairIP.
You might want to look into techniques like control-flow flattening, mixed boolean–arithmetic transformations, opaque predicates, and dead code injection — Apple uses all of these. The absence of a publicly named obfuscator doesn’t mean Apple doesn’t apply these methods (at least during my time there).
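To make that concrete, here's a rough sketch of what a mixed boolean-arithmetic rewrite plus an opaque predicate look like. This is purely illustrative Python (real passes operate on compiler IR, and I'm not describing any specific pass Apple uses):

    # Mixed boolean-arithmetic: replace a plain addition with an equivalent
    # but harder-to-read expression built from bitwise ops.
    def obfuscated_add(x: int, y: int) -> int:
        return (x ^ y) + 2 * (x & y)   # identical to x + y for any integers

    # Opaque predicate: a branch whose outcome is fixed but not obviously so.
    # x * (x + 1) is always even, so the else-arm is injected dead code that a
    # reverse engineer still has to look at.
    def with_opaque_predicate(x: int) -> int:
        if (x * (x + 1)) % 2 == 0:     # always true
            return obfuscated_add(x, 1)
        return x - 42                  # dead code, never executed

    assert all(with_opaque_predicate(n) == n + 1 for n in range(-100, 100))

Control-flow flattening is the same idea taken further: the function body gets rewritten as a big switch inside a loop driven by a state variable, so the original shape of the control flow is no longer visible.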
Ever wonder why Apple stopped shipping system frameworks as individual .dylib files? Here’s a hint: early extraction tools couldn’t preserve selector information when pulling libraries from the shared cache, which made the resulting decompiled pseudocode unreadable.
That's interesting; I suppose I must not have touched the parts of the platform that use them, and I've touched a fair amount of the platform.
Again, I _have_ seen plenty of obfuscation techniques in DRM/FairPlay, but otherwise I have not, and again, I am entirely sure the ANE toolchain from CoreML down through Espresso and into AppleNeuralEngine.framework definitely does not employ anything I would call an obfuscation technique.
> Ever wonder why Apple stopped shipping system frameworks as individual .dylib files?
If the dyld cache was supposed to be an obfuscation tool, shipping the tools for it as open source was certainly... a choice. Also, the reason early tools couldn't preserve selector information was selector uniquing, which was an obvious and dramatic performance improvement and was explained fairly openly, for example: http://www.sealiesoftware.com/blog/archive/2009/09/01/objc_e... . If it was intended as an obfuscation tool it was, again, sort of a baffling one, and I just don't think this is true - everything about the dyld cache looks like a performance optimization and nothing about it looks like an obfuscator.
> I suppose I must not have touched the parts of the platform that use them
It’s understandable not to have direct exposure to every component, given that a complete macOS build and its associated applications encompass tens of millions of lines of code. /s
That said, there’s an important distinction between making systems challenging for casual hackers to analyze and the much harder (if not impossible) goal of preventing skilled researchers from discovering how something works.
> Also, the reason early tools couldn't preserve selector information was selector uniquing
That isn't even remotely how we were making things difficult back then.
I led the SGX team at Intel for a while, working on in-memory, homomorphic encryption. In that case, the encryption couldn’t be broken through software because the keys were physically fused into the CPU. Yet, a company in China ultimately managed to extract the keys by using lasers to remove layers of the CPU die until they could read the fuses directly.
I’ll wrap up by noting that Apple invests extraordinary effort into making the critical components exceptionally difficult to reverse-engineer. With good obfuscation, much like good design or craftsmanship, the best work often goes unnoticed precisely because it’s done so well.
I'm done here - you go on believing whatever it is you believe...
Very much looking forward to more of your insights/comments. Hopefully your NDA has expired on some topics that you can share in detail!
No one ever notices plastic surgery when it is done well. The same can be true for obfuscation. But, as I indicated, no amount of obfuscation is foolproof when dealing with experienced, well-funded attackers. The best you can do is make their task annoying.
I was mostly joking. I am not from the US and not skilled enough to be worth the bother of sponsoring a visa for, when there are thousands of developers in the USA who are a much better fit. But it is neat to see that the requirements are not as intense as I would've expected.
Why did you guys remove the ability to detach the console and move it to another window?
Sure, "collaboratively." Why would I ever trust a vibe-coded analysis? How do I, a non-expert in this niche, know that Opus isn't pulling a fast one on both of us? LLMs write convincing bullshit that fools even experts. Have you manually verified each fact in this piece? I doubt it. Thanks for the disclaimer, it saved me from having to read it.
I don’t understand the mindset, I really don’t. Why are humans held to a much lower standard?
Shoddy research / hallucination, tendency to lose the thread, lack of historical / background context… the failure modes are at least qualitatively similar.
Show me an LLM failure and I’ll show you a high profile journalist busted for the same thing. And those are humans who focus on these things!
AI can trip all the right searches to fool these shortcuts whilst sometimes being entirely full of shit, and it has no resume or credentials to verify should we desire to check.
If you have such credentials and vouch for it, I can consider your trustworthiness rather than its. If you admit you yourself are reliant on it, then this no longer holds.
Humans also make mistakes and assumptions while reverse engineering, so you will always need more engineers to go through the results and test things.
6.6 TOPS/W, plus the ability to completely turn off when not in use, so 0W at idle.
> Apple’s “38 TOPS INT8” is computed as 19 TFLOPS FP16 × 2, following the industry convention of counting INT8 operations as 2× the FP16 rate. But the hardware doesn’t actually execute INT8 operations twice as fast.
Why would Apple follow that convention when the hardware explicitly doesn't? Seems like a more straight-faced lie than I'd expect from Apple.
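To spell out the arithmetic being objected to (numbers straight from the quoted claim, nothing else assumed):

    # Marketing math per the quoted claim: INT8 "TOPS" is just the FP16 rate doubled.
    fp16_tflops = 19
    int8_tops = fp16_tflops * 2    # "industry convention": count INT8 as 2x FP16
    print(int8_tops)               # -> 38, even though the hardware doesn't run INT8 2x faster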
(This was a while ago. I see the M4 is at 28 B)
Which is why I'm all the more surprised that Apple would claim 2x more ANE TOPS than it can really deliver.
thanks
> the company is also planning a few other software-based AI upgrades, including a new framework called Core AI. The idea is to replace the long-existing Core ML with something a bit more modern.
https://www.bloomberg.com/news/newsletters/2026-03-01/apple-...
I wonder to what extent this is a branding exercise; the framework that will replace Core ML could have just as easily been called "Core ML", except the current hotness is "AI" and not "ML".
* They haven’t said the source isn’t available to them, just that the closed nature of the ANE means they can’t use it in OSS.
* They’ve repeated constantly that it can’t do backprop and isn’t useful for most MLX use cases.
And really, ANE isn’t even that interesting for MLX; it’s a limited-resource, power-efficient inference engine for smallish edge models. If you want to use it you can use the Apple APIs, which, while limited, are generally “shaped” like what you’d want to do anyway. Almost every “biggish” CPU has one of these now and Apple don’t want to give away the specifics of theirs (even though it’s been pretty thoroughly RE’d by real REs and re-summarized by Claude, like this article).
I typically use python ML libraries like lightgbm, sklearn, xgboost etc.
I also use numpy for large correlation matrices, covariance etc.
Are these operations accelerated? Is there a simple way to benchmark?
I see a lot of benchmarks on what look like C functions, but today in my job I rely on higher-level libraries. I don't know if they perform any better on Apple HW, and unless they have a flag like use_ane I'm inclined to think they don't.
Of course chatgpt suggested I benchmark an Intel Mac vs. newer apple silicon. Thanks chatgpt, there's a reason people still hate AI.
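For concreteness, this is roughly what I mean by "a simple way to benchmark" - just timing the same op on different machines (illustrative sketch, nothing ANE-specific; on Apple silicon a numpy call like this typically goes to a CPU BLAS such as Accelerate or OpenBLAS, not the ANE):

    import time
    import numpy as np

    # ~20k samples x 500 features; corrcoef gives a 500 x 500 correlation matrix
    X = np.random.rand(20_000, 500)

    start = time.perf_counter()
    C = np.corrcoef(X, rowvar=False)
    elapsed = time.perf_counter() - start

    print(f"corrcoef on {X.shape}: {elapsed:.3f}s, result {C.shape}")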
It mostly doesn't because NPUs are bespoke and vendor-specific (which incents neglect by software devs working on open source numerics and ML/AI infrastructure), and the Apple ANE is no exception. Part of this effort is most likely about fixing that for the specific case of the Apple ANE.
I just think: great, it seems like I'm paying for a hardware accelerator that makes Siri go faster. And I've used Siri on my laptop exactly 0 times in the last infinite years.
I'm a dev, not a creative, unfortunately. I don't use other people's software, I generally write my own (or used to before Claude took over my world).
So fundamentally, it still comes down to CPUs + RAM.
https://opensource.apple.com/projects/mlx/ is needed to do this?
- The key insight - [CoreML] doesn't XXX. It YYY.
With that being said, this is a highly informative article that I enjoyed thoroughly! :)
The article links to their own Github repo: https://github.com/maderix/ANE
People seem to be going around pointing out that people talk like parrots, when in reality it's parrots that talk like people.
Did you develop your own whole language at any point to describe the entire world? No, you, me, and society mimic what is around us.
Humans have the advantage, at least at this point, of being continuous-learning devices, so we adapt and change with the language used around us.
It's not my subject, but it reads as a list of things. There's little exposition.
Here is why you are correct:
- I see what you did there.
- You are always right.
Is it like one of those “Morning” nods, where two people cross paths and acknowledge that it is in fact morning? Or is there an unstated preference being communicated?
Is there any real concern behind LLMs writing a piece, or is the concern that the human didn’t actually guide it? In other words, is the spirit of such comments really about LLM writing, or is it about human diligence?
That begs another question: does LLM writing expose anything about the diligence of the human, outside of when it’s plainly incorrect? If an LLM generates a boringly correct report - what does that tell us about the human behind that LLM?
> hollance/neural-engine — Matthijs Hollemans’ comprehensive community documentation of ANE behavior, performance characteristics, and supported operations. The single best existing resource on ANE.
> mdaiter/ane — Early reverse engineering with working Python and Objective-C samples, documenting the ANECompiler framework and IOKit dispatch.
> eiln/ane — A reverse-engineered Linux driver for ANE (Asahi Linux project), providing insight into the kernel-level interface.
> apple/ml-ane-transformers — Apple’s own reference implementation of transformers optimized for ANE, confirming design patterns like channel-first layout and 1×1 conv preference.
"Here’s the fascinating part:", "And one delightful discovery: "
Personally I find the AI-isms take away from the voice of the author. What does the author find interesting? What was their motivation? It's all lost in a sea of hubris and platitudes.
There's almost certainly a positive side - technical people who aren't so good at communication can now write punchy deep-tech blogs. But what's lost is the unique human voice that is normally in every piece of writing. It's like every blog is rewritten by a committee of copywriters before it's published. Bleurgh.
- "- Name - {Noun with modifiers} {comma} {verb-ing with modifiers}."
- "- Name - {Noun with modifiers} {comma} {verb-ing with modifiers}."
The phrasing is the same, which I notice sometimes happens in my own notes, but it's most noticeable when an LLM is asked to summarize items. An LLM written job description (without major prompting) for a resume comes out the same way, in my experience. It's the simplest full-sentence grammar for describing what something is, and then what something does.
If we used the developer's descriptions (from the github repo) to populate the info, it would look like this:
- hollance/neural-engine - Everything we actually know about the Apple Neural Engine (ANE)
- mdaiter/ane - Reverse engineered the Apple Neural Engine, with working Python and Objective C samples
- eiln/ane - Reverse engineered Linux driver for the Apple Neural Engine (ANE).
- apple/ml-ane-transformers - Reference implementation of the Transformer architecture optimized for Apple Neural Engine (ANE)
IMO it may not be as information-packed as the LLM list, but it is more interesting to read. I can tell, or at least think I can tell, that different individuals wrote each description, and it's what they wanted me to know most about their project.
If I were making a list of software during research (one that would eventually turn into a report), the particular details I write down in the moment would be different, depending on the solution I'm looking for, or the features it has or doesn't have, will add or won't add. I don't try to summarize "the Whole Project" in one clean bullet point; I (or my readers) can re-read the repo for that, or glean it from surrounding context (presuming enough surrounding context was written). But unless I made an effort later to normalize the list, the grammar, length, and subpoints would vary from the form-identifiable "LLM Concise Summary." It's more work for me to write to a standard, and even more work to consciously pick one.
EDIT: Upon re-reading the article, I noticed the "Prior Art" section is written in past tense, as I would expect. But the list is in present tense. I feel like it jumps from "narrative" to "technical details list" and back to "narrative". And the list is 70% of the section! I wouldn't mind reading a whole paragraph describing each project, what worked, what didn't, what they could use and what they couldn't, in the past tense, if it were interestingly written. Something that tells me the author dove into the previous projects, experimented with them, or interacted with the developers. Or something interesting the author noticed while surveying the "prior art". But "interestingly written" isn't really the LLM's goal, nor its ability. It's maximal information transfer with minimal word count. So the result is a list that smells like the author merely read the repo readme and wrote a summary for the masses in a technical report.
tl;dr The list is just "a list", and that makes it not interesting to read. If it wasn't interesting to read, it probably wasn't interesting to write, which I take as a sign an LLM wrote it.
What actual benefits do they get?
I guess they can have their own models run faster than the competition on their hardware? But as far as I can tell they don't even really have anything that consumers use on the ANE, and local LLMs are taking off on Macs and could really benefit from this.
The big takeaway isn't reverse engineering the ANE per se, but what Manjeet could do with his software engineering skills when accelerated by AI.
This is a good example of the present state of software engineering. Not future state - present state.
Most TPU designs have been based around systolic arrays, which for matrix ops have a quadratic speedup. A typical design is a 128x128 array of MAC units. You shift weights along one dimension, parameters along the other. It takes 128 cycles to shift a full matrix input in, then 128 cycles to shift the answer back out, but during those 256 cycles you got 16,384 MAC operations done, for a factor of 64 speedup.
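If it helps to see the dataflow concretely, here's a toy cycle-by-cycle simulation. It's an output-stationary variant (inputs stream in from two edges and each cell accumulates its own output), so it's not exactly the weight-stationary scheme described above, and the names are purely illustrative:

    import numpy as np

    def systolic_matmul(A, B):
        """Simulate an N x N output-stationary systolic array computing A @ B."""
        N = A.shape[0]
        C = np.zeros((N, N))          # each PE (i, j) accumulates C[i, j]
        a_reg = np.zeros((N, N))      # A values currently held in each PE
        b_reg = np.zeros((N, N))      # B values currently held in each PE
        for t in range(3 * N - 2):    # time for everything to flow through
            # Shift: A values move one PE to the right, B values one PE down.
            a_reg[:, 1:] = a_reg[:, :-1].copy()
            b_reg[1:, :] = b_reg[:-1, :].copy()
            # Feed skewed inputs at the left and top edges (row/column i delayed by i cycles).
            for i in range(N):
                k = t - i
                a_reg[i, 0] = A[i, k] if 0 <= k < N else 0.0
                b_reg[0, i] = B[k, i] if 0 <= k < N else 0.0
            # Every active PE does one multiply-accumulate this cycle, in parallel.
            C += a_reg * b_reg
        return C

    A, B = np.random.rand(8, 8), np.random.rand(8, 8)
    assert np.allclose(systolic_matmul(A, B), A @ B)

The appeal is visible even in the toy version: the only data movement is a one-step shift per cycle, and all the arithmetic happens in parallel across the grid.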
The other big appeal of this design is that it's way simpler than a GPU. The memory access patterns are predictable, there are no threads or thread divergence, etc. So it can be way more efficient in silicon, not just in area but especially in power efficiency.
There are other ideas for architectures besides this basic systolic array design. If you want to learn about them, a good place would be the HotChips presentations of the last few years: https://hc2025.hotchips.org and similar domain names for prior years.
There's far more microarchitectural complexity in GPUs that actually isn't efficient for NN structures.
"Systolic array" actually means something more specific than "repeated structures on a die."
Again, I'd suggest referencing the various HotChips presentations. It's a really interesting topic area. Or the original TPU v1 paper for the basics.
AMD originally went all in on what you'd call a GPU. It was great for gaming. Not as much for inference.
Nvidia, whilst still making GPUs, tuned the architecture for AI workloads. Gaming hasn’t improved as much lately.
Efficiency is the question.
This, a thousand times this.
For me, what AI brings is augmented humans. Just as we don't calculate on paper anymore, what is the reason to do things by hand when a machine is X times better?
Want to code by hand, as artisans of old? Suit yourself.
I, for one, love the smell of burning chrome.
Just some things that people will likely take for granted which, IIRC, Apple have said use the ANE or would at least likely benefit from it: object recognition, subject extraction from images and video, content analysis, ARKit, spam detection, audio transcription.
And while everyone else went to more powerful giant LLMs, Apple moved most of Siri from the cloud to your device. Though they do use both (which you can see when Siri corrects itself during transcription—you get the local Siri version corrected later by the cloud version).