Hey HN, Henry here from Cactus. We open-sourced Needle, a 26M parameter function-calling (tool use) model. It runs at 6000 tok/s prefill and 1200 tok/s decode on consumer devices.

We were always frustrated by how little effort goes into agentic models that run on budget phones, so we dug in and arrived at an observation: agentic experiences are built on tool calling, and massive models are overkill for it. Tool calling is fundamentally retrieval-and-assembly (match the query to a tool name, extract argument values, emit JSON), not reasoning. Cross-attention is the right primitive for this, and FFN parameters are wasted at this scale.
Needle uses what we call Simple Attention Networks: the entire model is just attention and gating, with no MLPs anywhere. It's an experimental run at single-shot function calling for consumer devices (phones, watches, glasses...).
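To make the "attention and gating, no MLPs" idea concrete, here is a minimal sketch of what such a decoder block could look like; the layer names and the gating form are illustrative assumptions, not Needle's actual implementation.

```python
# Sketch of a transformer block with no FFN/MLP sublayer: only attention
# plus a learned gate on the residual path. Illustrative only; not the
# actual Needle architecture.
import torch
import torch.nn as nn

class AttentionOnlyBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(d_model, d_model)  # gating instead of an MLP stack

    def forward(self, x: torch.Tensor, attn_mask=None) -> torch.Tensor:
        h = self.norm(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        # Gate the attention output and add the residual; there is no
        # position-wise feed-forward sublayer anywhere in the block.
        return x + torch.sigmoid(self.gate(h)) * attn_out
```

A full model of this kind is just a stack of such blocks plus embeddings and an output head.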
Training:
- Pretrained on 200B tokens across 16 TPU v6e (27 hours)
- Post-trained on 2B tokens of synthesized function-calling data (45 minutes)
- Dataset synthesized via Gemini with 15 tool categories (timers, messaging, navigation, smart home, etc.)
You can test it right now and finetune on your Mac/PC: https://github.com/cactus-compute/needle
The full writeup on the architecture is here: https://github.com/cactus-compute/needle/blob/main/docs/simp...
We found that the "no FFN" result generalizes beyond function calling to any task where the model has access to an external structured knowledge source (retrieval-augmented generation, tool use). The model doesn't need to memorize facts in FFN weights if the facts are provided in the input. Experimental results to be published.
While it beats FunctionGemma-270M, Qwen-0.6B, Granite-350M, LFM2.5-350M on single-shot function calling, those models have more scope/capacity and excel in conversational settings. We encourage you to test on your own tools via the playground and finetune accordingly.
This is part of our broader work on Cactus (https://github.com/cactus-compute/cactus), an inference engine built from scratch for mobile, wearables and custom hardware. We wrote about Cactus here previously: https://news.ycombinator.com/item?id=44524544
Everything is MIT licensed. Weights: https://huggingface.co/Cactus-Compute/needle
GitHub: https://github.com/cactus-compute/needle
▲Do you have any examples or data on the discriminatory power of the model for tool use?
The examples are things like "What is the weather in San Francisco", where you are only passed a tool like
tools='[{"name":"get_weather","parameters":{"location":"string"}}]'.
I had a thing[1] over 10 years ago that could handle this kind of problem using SPARQL and knowledge graphs.
My question is how effective it is at handling ambiguity.
Can I send it something like a text message "lets catch up at coffee tomorrow 10:00" and a command like "save this" and have it choose an "add appointment" action from hundreds (or even tens) of possible tools?
[1] https://github.com/nlothian/Acuitra/wiki/About
reply▲michelsedgh32 minutes ago
[-] Thanks to a Hugging Face link below, I tested it and I'm not impressed. Prompt: "i need to contact my boss i will be late". Result: 20mins [{"name":"set_timer","arguments":{"time_human":"20 minutes"}}]. It didn't use the email tool, and I tried 2-3 different ways of asking it.
reply▲Did you give it an email tool? It uses the tool it's given. The HF example only has a timer tool.
reply▲Hmm.. this might make it feasible to build something like a command line program where you can optionally just specify the arguments in natural language. Although I know people will object to including an extra 14 MB and the computation for "parsing" and it could be pretty bad if everyone started doing that.
But it's really interesting to me that that may be possible now. You can include a fine-tuned model that understands how to use your program.
E.g. `> toolcli what can you do` runs `toolcli --help summary`, `toolcli add tom to teamfutz group` = `toolcli --gadd teamfutz tom`
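For what it's worth, the wrapper could be as thin as the sketch below. The tool specs, the `toolcli` flags, and the `run_needle()` helper are all invented for illustration; the model call is left as a placeholder for whatever local runtime you use.

```python
# Hypothetical natural-language front-end for a CLI. The tool specs, the
# `toolcli` flags, and run_needle() are made up for illustration only.
import subprocess
import sys

TOOLS = [
    {"name": "help_summary", "parameters": {}},
    {"name": "group_add", "parameters": {"group": "string", "user": "string"}},
]

def run_needle(prompt: str, tools: list) -> dict:
    """Placeholder: invoke the local model and parse its JSON tool call."""
    raise NotImplementedError("wire this up to your local Needle runtime")

def main() -> None:
    query = " ".join(sys.argv[1:])   # e.g. "add tom to teamfutz group"
    call = run_needle(query, TOOLS)  # -> {"name": ..., "arguments": {...}}
    if call["name"] == "help_summary":
        subprocess.run(["toolcli", "--help", "summary"])
    elif call["name"] == "group_add":
        args = call["arguments"]
        subprocess.run(["toolcli", "--gadd", args["group"], args["user"]])

if __name__ == "__main__":
    main()
```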
reply▲HenryNdubuaku7 hours ago
[-] So Needle is trained for INT4; what you see in the playground is INT4, only 14 MB. Same challenge though.
reply▲Oh gotcha. Fixed my comment.
reply▲Suggestion: publish a live demo of the "needle playground". It's small enough that it should be pretty cheap to run this on a little VPS somewhere!
reply▲quantumleaper7 hours ago
[-] Should be quick and easy with WebGPU, too.
reply▲That's an even better idea, I bet this could run in Transformers.js.
reply▲Good idea. Could you make that.
reply▲Good idea. Could you ask a Claude Code to make that.
Today is 2026 after all
reply▲I'll put this on chonklm.com!
reply▲HenryNdubuaku7 hours ago
[-] Thanks, yeah, the problem is just handling scale; we don't have the infra ready to go, but anyone can do that. It's easy for people to run on their laptops straight up. Will try the VPS route.
reply▲giancarlostoro7 hours ago
[-] Alternatively, record a video that showcases it.
reply▲HenryNdubuaku7 hours ago
[-] Ok, will do that now!
reply▲giancarlostoro5 hours ago
[-] I know we all think of bad things when we hear "short form video", but short demos can do a LOT for any project: they show the user how it's used, what it looks like, what it solves, etc., all in anywhere from 15 seconds to a couple of minutes. It doesn't need to be ultra fancy, a screen recording is fine. :)
reply▲Since there is no GUI here, I feel like a simple plaintext chat transcript would be both 100x smaller and 100x easier to read. (Not to mention accessible.)
reply▲giancarlostoro5 hours ago
[-] Sure, and we've seen those terminal screen recorders that give you back a replayable demo, that could work too.
reply▲kristopolous6 hours ago
[-] That M versus B is way too subtle. 0.026B is my suggestion
reply▲I was so confused by many comments in this post but thanks to you I realized that some people are apparently reading it as 26B and that's why their comments make no sense.
reply▲The "M" nomenclature has been around since at least BERT and T5/FLAN. It's valid to use it even if today's LLM devs are more familiar with billion-scale models.
reply▲HenryNdubuaku6 hours ago
[-] Haha, we were trying not to be too hand-wavy :)
reply▲kristopolous12 minutes ago
[-] Oh hey it's Henry. I met you a couple weeks ago at an event in SF. Nice to see you on here.
reply▲[flagged]
reply▲I’d edit it if I could, but it seems to be past the timeout.
As the other poster noted, the post wasn’t meant to be read as a personal attack.
reply▲kristopolous3 hours ago
[-] Pardon me, do I know you?
Why are you attacking me?
reply▲I don't think they're attacking you, but suggesting you read more carefully. The information provided is correct and clear, but you need to let go of your own biases when consuming it.
I personally prefer the M to the B. I guess as an engineer, noticing the units comes pretty naturally.
reply▲kristopolous1 hour ago
[-] 25-35 billion is expected these days; there are many models of this size, it's very common. (Gemma 4 31B, Qwen 3.6 25B & 35B, JT 35B, EXAONE 35B, Nemotron 30B, GLM 4.7-flash 30B, Servam 30B, LFM2 24B, Granite 4.1 30B...)
Announcing something that's 1/1000th the size is significant and remarkable! Hiding it in a single letter is burying the lede.
reply▲I read it as 26B as well.
reply▲Lovely to see the push for tiny models.
I have been building for small (20B or less) models for quite a while. Highly focused/constrained agents, many of them running together in some kind of task orchestration mode to achieve what feels like one "agent".
I build (privacy first) desktop apps this way and I want to get into mobile apps with similar ideas but tiny models.
reply▲Awesome! I just tried to set an alarm and add some groceries to the shopping list, and it outperformed Siri.
reply▲>Experiments at Cactus showed that MLPs can be completely dropped from transformer networks, as long as the model relies on external knowledge source.
Heh, what a coincidence: just today one of my students presented research results that also confirmed this. He removed the MLPs from Qwen and the model could still do transformation tasks on its input, but lost knowledge.
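For anyone who wants to reproduce that kind of ablation, a rough sketch follows. The attribute names (`model.model.layers[i].mlp`) assume the usual Hugging Face Qwen/Llama module layout and the checkpoint is just an example; this is not the student's actual code.

```python
# Ablation sketch: zero out every MLP sublayer of a pretrained decoder and
# see which abilities survive. Layer attribute names assume the standard
# Hugging Face Qwen/Llama module layout.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

class ZeroMLP(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.zeros_like(x)  # the MLP contributes nothing to the residual stream

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # example checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

for layer in model.model.layers:
    layer.mlp = ZeroMLP()  # drop the FFN, keep attention and norms

prompt = "Rewrite in uppercase: hello world"
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```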
reply▲Dumb questions, from someone not in the field...
What is a distilled model?
Why doesn't Google do this (to make their models smaller)?
Seems like you could make a competitor to Gemini?
reply▲HenryNdubuaku4 hours ago
[-] No question is stupid!
1. Distilled means taking the intelligence of a big model and compacting it into a tiny model.
2. Google already does this with FunctionGemma, but Needle argues that better performance can be achieved with a 10x smaller model using our techniques.
reply▲Model distillation is lossy compression of big model to produce a smaller model.
Smaller model requires less space on disk, less video memory, and less compute (cheaper hardware).
Downside is that distilled model performs worse on the same benchmarks compared to original model.
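The classic recipe, for the curious, trains the small student to match the big teacher's output distribution; a minimal sketch of that loss is below. (Needle, as described in the post, was trained on a Gemini-synthesized dataset rather than on teacher logits, so this is the textbook formulation, not their exact setup.)

```python
# Textbook knowledge-distillation loss: a KL term pulling the student's
# softened distribution toward the teacher's, mixed with the usual
# cross-entropy against the true next tokens.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    return alpha * soft + (1 - alpha) * hard
```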
reply▲No FFN is blowing my mind. This is pretty much "Attention Is ACTUALLY All You Need". Reminds me of BERT Q&A which would return indices into the input context, but even that had a FFN. Really exciting work.
reply▲I guess this had always been bugging me. I get while you need activation/non-linearities, but do you really need the FFN in Transformers? People say that without it you can't do "knowledge/fact" lookups, but you still have the Value part of the attention, and if your question is "what is the capital of france" the LLM could presumably extract out "paris" from the value vector during attention computation instead of needing the FFN for that. Deleting the FFN is probably way worse in terms of scaling laws or storing information, but is it an actual architectural dead-end (in the way that deleting activation layer clearly would be since it'd collapse everythig to a linear function).
reply▲This is pretty much exactly what I want for Home Assistant. I yell out, "Computer! Lights!" and it toggles the lamp in the room on or off. (I mean I can do that now, I think, but probably with a much larger model.)
I haven't played with it yet, but does it ever return anything other than a tool call? What are the failure modes? What if it doesn't understand the request? Does it ever say it can't find a tool? Does it get confused if there are two similar (but different) tools? Can it chain tools together (e.g. one tool to look up an address and another to get directions to the address)?
I mean, I plan on downloading the model later tonight and finding out for myself, but since I'm stuck at work right now, I figured I'd ask anyway...
reply▲From all the models that do toolcalls the only thing I am confused is why did you pick the worst? Or maybe they are only bad in agentic work it fine for one shot toolcalls?
reply▲HenryNdubuaku4 hours ago
[-] Gemini is pretty solid for 1-shot tool calls and affordable as well.
reply▲Hi, would love to know where you get that impression on 1 shot tool calling, was there concrete evaluation carried out? pretty new to this and was a bit lost when trying to compare models on different capabilities.
reply▲Sounds interesting.
Got a bunch of errors trying to run it on CPU though. Very likely connected to me running this in a container (unpriv LXC), but figured for 26M CPU would suffice.
https://pastebin.com/PYZJKTNk
reply▲It better, considering its purpose is to run on devices with no GPU.
reply▲Can it summarize text it fetches?
Come to think of it, this could be a nice model to have as the first pass in a more complex agent system, where Needle hands off the results of a tool call to a larger model.
I will defiantly play around with this!
reply▲> I will defiantly play around with this!
Are you Calvin or Hobbes?
reply▲I don't really understand what this is for... there is a lot of ML-researcher talk on the GH page about the model architecture, but how should I use it?
Is it a replacement for Kimi 2.7, Claude Haiku, or Gemini Flash 3.1 lite as a conversational LLM for situations that are mostly tool-calling, like coding and conversational AI?
reply▲HenryNdubuaku4 hours ago
[-] It is for building agentic capabilities into very small devices like phones, glasses, watches and more. Does that make sense?
reply▲I'm having trouble understanding why someone would want that? Like, what are the product use-cases of such a thing? I understand why people want that for coding agents--although the jury is still very much out on whether those are terribly useful--but I cannot fathom what someone might want an agent to
do on a cell phone? Is there some user-facing activity on a phone that's similar to coding with a tight, objectively measurable feedback loop (analogous to dev/compile/test)?
EDIT: more of you cretins have downvoted than have replied.. so.. show your cards.
reply▲jasonjmcghee1 hour ago
[-] Throwing a few things out - HN has changed over the years, but people make stuff to make stuff. There don't need to be product use cases. The tone of the comment goes against the spirit of HN - likely the reason for downvotes.
That aside: a very small model that takes text and outputs structured JSON according to a spec is nice. It lets you turn natural language into a user action. For example, command palettes could benefit from this.
If you can do a tiny bit of planning (todo) and chain actions, it seems reasonable that you could traverse a rich state space to achieve some goal on behalf of a user.
Games could use something like it for free-form dialog while still enforcing predefined narrative graphs, etc.
I'm sure you could come up with more. It's a fuzzy function.
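As a toy illustration of the command-palette idea, the model's JSON output can be validated against a palette spec and dispatched to an app action; everything named below (palette entries, handlers) is invented for the example.

```python
# Hypothetical command-palette dispatch: validate a tool call emitted by the
# model against a palette spec, then run the matching action. All names here
# are invented for illustration.
import json

PALETTE = {
    "open_file":    {"params": {"path"}},
    "toggle_theme": {"params": set()},
    "find_symbol":  {"params": {"query"}},
}

ACTIONS = {
    "open_file":    lambda path: print(f"opening {path}"),
    "toggle_theme": lambda: print("toggling theme"),
    "find_symbol":  lambda query: print(f"searching for {query}"),
}

def dispatch(model_output: str) -> None:
    call = json.loads(model_output)
    spec = PALETTE.get(call.get("name"))
    args = call.get("arguments", {})
    if spec is None or set(args) != spec["params"]:
        print("unrecognised or malformed call; fall back to plain fuzzy search")
        return
    ACTIONS[call["name"]](**args)

# e.g. the user types "dark mode please" and the model emits:
dispatch('{"name":"toggle_theme","arguments":{}}')
```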
reply▲> people make stuff to make stuff. There don't need to be product use cases.
OK. Great! So it doesn't need to be a commercial product. But does it do something (anything?) interesting? I'm interested in your games example, I'd love to see it done in real life. IIUC, game AIs are actually much more constrained and predictable for play-ability reasons. If you let it go all free form a plurality of players have a "WTF??!?" experience which is super Not Good.
reply▲digdugdirk45 minutes ago
[-] It doesn't have to do anything interesting; it's completely fascinating all on its own. If you understand anything about the math and science behind LLMs, you'll understand that this is an achievement worth sharing with a community like HN.
That being said, small models like these have plenty of use cases. They allow for extra "slack" to be introduced into a programmatic workflow in a compute constrained environment. Something like this could help enable the "ever present" phone assistant, without scraping all your personal data and sending it off to Google/OpenAI/etc. Imagine if keywords in a chat would then trigger searches on your local data to bring up relevant notes/emails/documents into a cache, and then this cache directly powers your autocomplete (or just a sidebar that pops up with the most relevant information). Having flexible function calling in that loop is key for fault tolerance and adaptability to new content and contexts.
It's cool. Enjoy it.
reply▲jcgrillo36 minutes ago
[-] > Something like this could help enable the "ever present" phone assistant, without scraping all your personal data and sending it off to Google/OpenAI/etc
OK so show me what that's for. Show me something useful you can do with that ability.
> Imagine if keywords in a chat would then trigger searches on your local data to bring up relevant notes/emails/documents into a cache, and then this cache directly powers your autocomplete (or just a sidebar that pops up with the most relevant information).
I'm really trying but.. idgi? I truly cannot imagine how this would improve my life in any way...
> Its cool. Enjoy it.
No. It sounds like a useless complication on my watch. I don't fucking care if it can tell me the phase of the moon. I can look up at the sky and see the moon and know what phase it is.
EDIT: You say:
> If you understand anything about the math and science behind LLMs, you'll understand that this is an achievement worthy of sharing to a community like HN.
OK. So educate me. Tell me what I'm missing.
reply▲HenryNdubuaku3 hours ago
[-] You can think of “phone use” for instance, what Siri is supposed to be.
reply▲I mean.. Siri basically works? When I'm driving I say "Hey Siri, find me a gas station along my route", and it does. Or I say "Hey Siri, call Joe Bob mobile" and it does. Or I say "Hey Siri, play me a podcast". This is kind of a solved problem already? When I'm driving this is literally as complicated of a distraction as I want--I'm not going to be dictating emails or texts. When I'm not driving, the touchscreen keyboard (as shitty an interface as that is) is 100x better than voiced natural language commands.
reply▲It does just barely work now after they spent billions, and they may still fall back to cloud LLMs for a significant number of things. This is a way that everyone can get that on the actual Apple Watch or local phone for any application they build.
reply▲I get that, but I still can't imagine what it might be
for. TBH I don't have a smart watch, because I can't think of anything I'd want one for--my mechanical watch keeps time to within a few seconds per month and the lume lasts all night. I don't know what making it "smarter" would do for me, it does an A+ job of being a watch. What are the things that "everyone" can build with this that actually matter? Like, what is the differentiator?
EDIT: To be clear, the monoculture of phone operating systems sucks. If this somehow enables more entrants into that space then I'm all for it. However, I don't see this in particular being the deciding factor... For example, the reason I don't run a 3rd party operating system on my phone isn't because it's lacking Siri or "OK Google" (if these things went away tomorrow I'd barely notice), it's because it would be a pain in the ass to make it be a phone.
reply▲Can this be a Siri-like core? Set me a timer, tell me what’s the weather, etc. Here is transcribed text and available list of tools for the model to call, and voice the output.
reply▲Is the idea here to add function calling to models that don't have it, or even improve function calling (qwen quirks)?
reply▲HenryNdubuaku5 hours ago
[-] So it’s a tiny model capable of function calling that could run locally on cheap devices.
reply▲This would be amazing for home assistant.
reply▲synesthesiam3 hours ago
[-] On my list to check out tomorrow :D
reply▲Wow can’t believe the voice engineer lead for Nabu Casa is here! Super excited to see if this works for HA!
reply▲Can this be converted to onnx or otherwise be used in a browser?
reply▲Does the model have capacity for in context learning ?, if we give it examples of patterns can it follow them ?.
reply▲dangoodmanUT5 hours ago
[-] Why pick Gemini? It's probably the worst tool calling model of the major labs.
reply▲This is some excellent work Henry! Very excited to try it out.
reply▲cmrdporcupine7 hours ago
[-] reply▲Man, I love that there are still people writing new MOO servers in 2026. Any game out there already running on mooR?
reply▲cmrdporcupine5 hours ago
[-] Many people tease that they will, and start... but then kinda stop. But mostly I've just been building my own bespoke thing on my own bespoke platform, and kinda running out of steam because I need to make $$ instead.
reply▲This is really cool. Any plans to release the dataset?
reply▲HenryNdubuaku6 hours ago
[-] We include the dataset pipeline in the codebase so far; we might release the dataset.
reply▲hey nice work, is it possible to release the datasets?
reply▲What is the use case for this?
reply▲HenryNdubuaku4 hours ago
[-] Deploying AI on tiny devices like watches, earphones, glasses etc.
reply▲Ok, but why? What is the use case?
reply▲chris_money2023 hours ago
[-] I don't think the limit is just tiny devices. It can also be used in apps on generic computers, because it's so small that anything can run it reasonably quickly.
For example, I'm thinking this could be helpful if, say, you have a complicated build and test infrastructure: fine-tune this model on that infrastructure, and then people can say more generic things like "build and run this library's tests" rather than issuing the exact commands or going to Claude, GHCP, etc.
reply▲BoredPositron5 hours ago
[-] I source old, defective high-end radios with timeless designs from brands like Grundig or Braun, and replace the original hardware with a Raspberry Pi while using the original audio parts to build custom smart speakers. Reliable hotword detection and voice command recognition have been a persistent challenge over the years, but whisper and other small models have helped enormously. At the moment I have ollama running on my server with qwen 9b which works fine but a 26M that could be deployed on the pi itself would be amazing.
reply▲FYI, distilling Gemini is explicitly against the ToS:
"You may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio). You also may not attempt to reverse engineer, extract or replicate any component of the Services, including the underlying data or models (e.g., parameter weights)."
reply▲Yeah I think Google should shove that somewhere. They effectively distilled all the internet's knowledge into these models...without asking & without permission
reply▲HenryNdubuaku6 hours ago
[-] Thanks, Needle doesn’t compete with those tools though and the distillation process did not access the weights.
reply▲I think GLM 5.1 or Kimi 2.6 could substitute for this type of purpose.
reply▲FYI, Gemini was developed using stolen copyrighted works without author consent. The double standard is striking.
reply▲Oh no! They stole the model weights!
Distillation "attacks" is such bullshit
reply▲This is being downvoted but it's worth noting if only for the "be careful" aspect.
That said, we need more people distilling models IMO, just be ready for a C&D and a ban
reply