However I do want to mention that the “recommended” flow these days isn’t to separate out the tool request the way you have. E.g., instead of asking an LLM to route to a tool, extracting that, running the tool, passing the output back to the LLM, etc., you simply pass the tool definitions, the prompt, and your structured-output expectations, and let the LLM (and your client library) manage the tool-use loop.
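For concreteness, a minimal sketch of that loop, assuming an OpenAI-style chat completions client (the toy tool, the model name, and the `run_tool` dispatcher are placeholders, not anything from your post):
```
import json
from openai import OpenAI

client = OpenAI()

# one toy tool, defined via JSON schema and dispatched locally
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def run_tool(name, args):
    if name == "get_weather":
        return {"city": args["city"], "temp_c": 21}  # stub
    raise ValueError(name)

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
while True:
    resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    msg = resp.choices[0].message
    if not msg.tool_calls:      # no more tool requests -> final answer
        break
    messages.append(msg)
    for tc in msg.tool_calls:   # execute each requested tool, feed the result back in
        out = run_tool(tc.function.name, json.loads(tc.function.arguments))
        messages.append({"role": "tool", "tool_call_id": tc.id,
                         "content": json.dumps(out)})
print(msg.content)
```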
That’s how modern LLMs are trained in post-training, so I suspect you’ll get different (and potentially worse?) results by trying to subvert this with a small, local model.
Letting the LLM do this comes with all the downsides you mentioned, but it’s also more likely to be in-distribution, and it makes composing multiple tool calls easier.
Anyway, thanks for sharing! I’d love to see evals on a task that compare the results when an LLM is involved in tool selection versus when it’s only handed tool output - if I’m wrong about the quality degradation, there’s a lot to like about your local tool routing.
quick note: it doesn’t have to be an rnn. i’ve got a follow-up example coming that uses a transformer-style ToolController with self-attention, more expressive routing, etc.
but here’s the thing: when you rely on few-shot bootstrapping the LLM, you never end up updating the model's priors. even after 100k tool calls, you’re still stuck in the same polluted context window and it’s all stateless.
this gets worse fast with more than 3–4 tool calls, especially when there’s branching logic (e.g., if api1 > 5, go left, else right).
what this approach offers is: backprop through tool calls. you can tune prompts and update priors across the full workflow, end to end. trying to develop this intuition a bit more, and would love feedback.
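a minimal sketch of what i mean by that, assuming a pytorch-style router trained on (encoded context, correct tool) pairs; all the names and dimensions here are illustrative:
```
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToolRouter(nn.Module):
    # scores a fixed label set of tools given an encoded context
    def __init__(self, ctx_dim=128, hidden=64, n_tools=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(ctx_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_tools))

    def forward(self, ctx):
        return self.net(ctx)                     # logits over the tool set

router = ToolRouter()
opt = torch.optim.Adam(router.parameters(), lr=1e-3)

# toy supervision: encoded contexts -> index of the tool that should run
ctx = torch.randn(32, 128)
target_tool = torch.randint(0, 4, (32,))

logits = router(ctx)
loss = F.cross_entropy(logits, target_tool)      # gradient flows through the routing decision
loss.backward()
opt.step()
```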
thanks for the suggestion on the eval — will post that comparison soon.
Speaking generically -- any place in your workflow where you feel the task is not hard, you can use a smaller and cheaper LM.
Smaller LMs come with an accuracy reduction, particularly in tail cases. So in the real world this doesn't work out.
Also, is the Gumbel-softmax usage intentional? It looks like a straightforward classifier that just needs a regular softmax.
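For reference, the distinction I mean, in a toy PyTorch snippet: Gumbel-softmax adds noise and a temperature so you can sample a near-discrete choice and still backpropagate through it, while a plain softmax plus cross-entropy is enough if all you need is a classifier.
```
import torch
import torch.nn.functional as F

logits = torch.randn(1, 4, requires_grad=True)

probs = F.softmax(logits, dim=-1)                       # plain classifier: cross-entropy at train time, argmax at inference
sample = F.gumbel_softmax(logits, tau=0.5, hard=True)   # noisy, near one-hot sample that still carries gradients

# gumbel-softmax only buys you something if you need stochastic discrete
# selection *inside* a larger differentiable pipeline; otherwise the
# softmax path is sufficient
```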
For complex real-world agent flows though, tool use is often the only thing that the LLM is expected to do. Like in a coding agent:
```
User: Develop a program to ...
Agent: Bash("touch main.py") > 0, ""
Agent: Edit("main.py", initial_patch) > 0, ""
Agent: Bash("python main.py") > 1, "SyntaxError: ..."
Agent: Edit("main.py", fix_patch) > 0, ""
Agent: Bash("python main.py") > 0, "OK"
Agent: FINISH
```
Here, tool selection (+ writing the arguments) is actually the whole job. It's also easy to see that if you omit even one of the tool use records in the middle, the agent wouldn't work at all.
You'd still need to figure out what payload to give to the tool based on your context.
But I guess depending on your business case it might be worth it. It's not something I'd do from the beginning, though.
in my mind the biggest difference is between llms that are invoked during a workflow, and llms that are invoked when _creating_ code (codegen).
for the former, tools can stay well defined as long as they're small in number, but at some point the system needs to examine a library of tools, understand how to call and integrate them, and at its peak even create new tools to talk to systems not already present in that library (codegen).
this gets worse when you’re chaining 3–4+ tools. context gets noisy, priors stay frozen, and there's prompt soup.
my intuition here is: you can learn the tool routing and the llm prompts before and after the call. (can always swap out the rnn for a more expressive encoder model and backprop through the whole thing).
super useful when you’re building complex workflows -- it gives you a way to learn the full pipeline, not just guess and hope.
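one toy version of "learn the routing and the prompts around the call", under the assumption that the learned prompts are soft prompts (trainable embedding prefixes) rather than literal text; this is just my sketch of the idea, not code from the gist:
```
import torch
import torch.nn as nn
import torch.nn.functional as F

class RoutedPipeline(nn.Module):
    # a trainable prompt prefix + a gru encoder + a routing head, learned jointly
    def __init__(self, d=128, n_tools=4, prompt_len=8):
        super().__init__()
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, d) * 0.02)
        self.encoder = nn.GRU(d, d, batch_first=True)
        self.route = nn.Linear(d, n_tools)

    def forward(self, ctx):                      # ctx: (batch, seq, d) embedded context
        prefix = self.soft_prompt.expand(ctx.size(0), -1, -1)
        _, h = self.encoder(torch.cat([prefix, ctx], dim=1))
        return self.route(h[-1])                 # logits over the tool set

pipe = RoutedPipeline()
logits = pipe(torch.randn(16, 32, 128))
loss = F.cross_entropy(logits, torch.randint(0, 4, (16,)))
loss.backward()                                  # gradients hit the prompt, the encoder, and the router together
```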
If LLMs could handle determinism better, I’d say having a single chat-based entrypoint into a plethora of services would make sense. But as they stand, it doesn’t. Simpler control flow, and constraining the number and type of downstream services that sit behind a single interface, is the way to go, I think.
That said, I agree we should keep the ambition of moving to a one-size-fits-all approach.
I think of an llm as a differentiable interpreter of a program. it should do decision making (tool selection, argument routing) and branching logic via weights + gates, etc.
so as a differentiable state machine:
- each state == a stage in your workflow
- transitions == tool calls
- encode this as an rnn or a graph
and learn transitions and actions via supervision or RL
https://gist.github.com/viksit/c67d1d960c4cec89488290496defb...
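to make that shape concrete, here's a toy pytorch version (just a sketch of the idea, not the gist above): a gru hidden state carries the workflow stage, each step emits a distribution over tools, and you supervise on gold transition sequences.
```
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffStateMachine(nn.Module):
    # hidden state == workflow stage; per-step output == which tool to call next
    def __init__(self, obs_dim=64, state_dim=128, n_tools=5):
        super().__init__()
        self.cell = nn.GRUCell(obs_dim, state_dim)
        self.transition = nn.Linear(state_dim, n_tools)

    def forward(self, obs_seq):                  # obs_seq: (steps, batch, obs_dim) encoded tool outputs
        h = torch.zeros(obs_seq.size(1), self.cell.hidden_size)
        logits = []
        for obs in obs_seq:                      # one transition per tool observation
            h = self.cell(obs, h)
            logits.append(self.transition(h))
        return torch.stack(logits)               # (steps, batch, n_tools)

dsm = DiffStateMachine()
obs = torch.randn(6, 8, 64)                      # a 6-step workflow, batch of 8 traces
gold = torch.randint(0, 5, (6, 8))               # supervision: the correct tool at each step
loss = F.cross_entropy(dsm(obs).reshape(-1, 5), gold.reshape(-1))
loss.backward()                                  # credit assignment flows across the whole chain
```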
Right tool for the step to the right extent.
Feels like soft skills for software development.
re: different tools (apis vs mcps). in my mind there should be no real difference in what kind of tool is called at a given moment, since I model this as a softmax over a label set of tools.
that said, an idea I want to investigate is whether tools can live in a learned embedding space, where selection isn’t a softmax over discrete labels but a nearest-neighbor or attention mechanism over continuous vectors.
this is the intuition I'm developing as we speak and in some of my other comments on this thread (see differentiable state machine comment).
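rough sketch of the embedding-space version, with toy numbers: hard nearest-neighbor on cosine similarity, plus the softer attention-weighted variant.
```
import torch
import torch.nn.functional as F

n_tools, d = 50, 256
tool_emb = torch.randn(n_tools, d)          # learned tool vectors (e.g. encoded from their descriptions)
query = torch.randn(1, d)                   # encoding of the current context / intent

sim = F.cosine_similarity(query, tool_emb)  # (n_tools,) similarity to every tool
picked = sim.argmax()                       # hard nearest-neighbor selection

attn = F.softmax(sim / 0.1, dim=-1)         # or: attention over the whole tool library
mixed = attn @ tool_emb                     # differentiable, soft "which tool(s)" representation
```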
I guess that applies when you're not able to fine-tune the LLM you're using. Presumably Anthropic has a lot of data too.