The Code-Only Agent (rijnard.com)
130 points | 15 hours ago | 24 comments
binalpatel
14 hours ago
[-]
I went down (and continue to go down) this rabbit hole and agree with the author.

I tried a few different ideas and the most stable/useful so far has been giving the agent a single run_bash tool, explicitly prompting it to create and improve composable CLIs, and injecting knowledge about these CLIs back into its system prompt (similar to how agent skills work).
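Roughly, the core of it looks something like this (a simplified sketch, not the actual binsmith code; the tools directory and prompt wording are made up):

    import os, subprocess

    TOOLS_DIR = os.path.expanduser("~/.agent/bin")   # where the agent's CLIs accumulate
    os.makedirs(TOOLS_DIR, exist_ok=True)

    def run_bash(command: str) -> str:
        """The single tool exposed to the model: run a shell command, return its output."""
        env = {**os.environ, "PATH": TOOLS_DIR + os.pathsep + os.environ.get("PATH", "")}
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=120, env=env)
        return result.stdout + result.stderr

    def cli_registry() -> str:
        """Summarize every CLI built so far by asking each one for --help."""
        sections = []
        for name in sorted(os.listdir(TOOLS_DIR)):
            help_text = subprocess.run([os.path.join(TOOLS_DIR, name), "--help"],
                                       capture_output=True, text=True).stdout
            sections.append(name + "\n" + help_text.strip())
        return "\n\n".join(sections)

    def system_prompt() -> str:
        # Inject what the agent has already built back into the prompt each session,
        # so new sessions "remember" and reuse the existing toolkit.
        return ("You have one tool: run_bash. Prefer composing the CLIs listed below; "
                "if none fits, write a new one into " + TOOLS_DIR + ".\n\n" + cli_registry())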

This leads to really cool patterns like:

1. User asks for something

2. Agent can't do it, so it creates a CLI

3. Next time it's aware of the CLI and uses it. If the user asks for something it can't do, it either improves the CLI it made or creates a new CLI.

4. Each interaction results in updated/improved toolkits for the things you ask it for.

You as the user can use all these CLIs as well, which ends up being an interesting side-channel way of interacting with the agent (you add a todo using the same CLI it uses, for example).

It's also incredibly flexible: yesterday I made a "coding agent" by having it create tools to inspect/analyze/edit a codebase, and it could go off and do most things a coding agent can.

https://github.com/caesarnine/binsmith

reply
bandrami
5 hours ago
[-]
Every individual programmer having locally-implemented idiosyncratic versions of sed and awk with imperfect reconstruction between sessions sounds like a regression to me
reply
jrm4
5 minutes ago
[-]
But -- I think that's only because of the friction of having to read and parse what they did, which, to me, could largely be alleviated by AI itself.

Put differently -- for those who'd like to share, yes, give me your locally implemented idiosyncrasies with a little AI to help explain to me what's going on, and I feel like that's a sweet spot between "AI do the thing" and "give me raw code"

reply
whatevaa
4 hours ago
[-]
I already treat awk syntax as something idiosyncratic, so not much would change for me.
reply
cocoflunchy
3 hours ago
[-]
Why would it recreate sed and awk? The screenshot from the repo even shows it using sed.
reply
meander_water
10 hours ago
[-]
Have you done a comparison on token usage + cost? I'd imagine there would be some level of re-inventing the wheel for common tasks (i.e. rewriting code for very similar tasks), or do you re-use previously generated code?
reply
binalpatel
10 hours ago
[-]
It reuses previously generated code, so the tools it creates persist from session to session. It also lets the LLM avoid actually "seeing" the tokens in some cases, since it can pipe directly between tools or write to disk instead of having output returned into the LLM's context window.
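For example, a generated script might look something like this (the CLI names here are invented), where the bulky intermediate output never enters the context window:

    import subprocess

    # Hypothetical generated script: the raw stories are piped from one CLI
    # straight into another, so only the short summary comes back to the model.
    stories = subprocess.run(["hn-top", "--limit", "30", "--json"],
                             capture_output=True, text=True).stdout
    summary = subprocess.run(["summarize", "--max-words", "100"],
                             input=stories, capture_output=True, text=True).stdout
    print(summary)  # only this small string is returned into the context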
reply
fudged71
10 hours ago
[-]
I've been on a similar path. Will have 1000 skills by the end of this week arranged in an evolving DAG. I'm loving the bottom-up emergence of composable use cases. It's really getting me to rethink computing in general.
reply
Garlef
8 hours ago
[-]
Interesting. Could you provide a bit more detail on how the DAG emerges?
reply
fudged71
2 hours ago
[-]
2026 paper titled Evolving Programmatic Skill Networks, operationalized in Claude Code
reply
rcarmo
10 hours ago
[-]
The point where that breaks down is “next time it’s aware of the CLI and uses it”. That only really works well inside the same session, and often the next session it will create a different tool and use that one.
reply
NitpickLawyer
8 hours ago
[-]
> That only really works well inside the same session

That was already "fixed" by people adding snippets to agents.md and it worked. Now it's even more streamlined with skills. You can even have cc create a skill after a session (e.g. prompt it like "extract the learnings from this session and put them into a skill for working with this specific implementation of sqlite"). And it works, today.

reply
rcarmo
3 hours ago
[-]
reply
NitpickLawyer
2 hours ago
[-]
> I prefer the more deterministic behavior of MCP for complex multi-step tasks, and the fact that I can do it effectively using smaller, cheaper models is just icing on the cake.

Yeah, that makes sense. That's not what the person I replied to was talking about, though. Skills work fine for "loading context pertinent to one type of task", such as working on a feature without "forgetting" what was done in the previous session.

The article deals with specific, somewhat predefined workflows.

reply
actionfromafar
9 hours ago
[-]
Even if you document the tool and tell it what it can do?
reply
skybrian
11 hours ago
[-]
That’s pretty cool. Is it practical? What have you used it for?
reply
binalpatel
10 hours ago
[-]
I've been using it daily, so far it's built CLIs for hackernews, BBC news, weather, a todo manager, fetching/parsing webpages etc. I asked it to make a daily briefing one that just composes some of them. So the first thing it runs when I message it in the morning is the daily briefing which gives me a summary of top tech news/non-tech news, the weather, my open tasks between work/personal. I can ask for follow ups like "summarize the top 5 stories on HN" and it can fetch the content and show it to me in full or give me a bullet list of the key points.
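The briefing CLI itself is just composition; something like this sketch (the sub-CLI names are placeholders, not the exact ones it generated):

    #!/usr/bin/env python3
    # Hypothetical daily-briefing CLI: it does nothing itself except call the
    # other CLIs the agent already built and stitch their output together.
    import subprocess

    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

    sections = [
        ("Tech news",  run(["hn-top", "--limit", "5"])),
        ("World news", run(["bbc-news", "--limit", "5"])),
        ("Weather",    run(["weather", "--today"])),
        ("Open tasks", run(["todo", "list", "--open"])),
    ]

    for title, body in sections:
        print("=== " + title + " ===")
        print(body)
        print()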

Right now I'm thinking through how to make it more "proactive" even if it's just a cron that wakes it up, so it can do things like query my emails/calendar on an ongoing basis + send me alerts/messages I can respond to instead of me always having to message it first.

reply
mkw5053
2 hours ago
[-]
This got me thinking about the Unix philosophy of composing small, specialized tools that each do one thing well. While at first glance a "single powerful tool" approach might seem aligned with that ethos, I think it actually runs counter to it. Forcing agents to reimplement ls, grep, and find throws away decades of battle-tested code. The real Unix-style approach would be giving agents more specialized tools, not fewer, and letting them learn to compose those tools effectively.
reply
iepathos
2 hours ago
[-]
The "code witness" concept falls apart under scrutiny. In practice, the agent isn't replacing ripgrep with pure Python, it's generating a Python wrapper that calls ripgrep via subprocess. So you get:

- Extra tokens to generate the wrapper

- New failure modes (encoding issues, exit code handling, stderr bugs)

- The same underlying tool call anyway

- No stronger guarantees - actually weaker ones, since you're now trusting both the tool AND the generated wrapper
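Concretely, the wrapper usually ends up looking something like this (a made-up example, not from the article):

    import subprocess

    def search(pattern: str, path: str = ".") -> list[str]:
        # All this does is shell out to rg, but now encoding, exit codes
        # (rg exits 1 when there are simply no matches), and stderr handling
        # are this wrapper's problem too.
        result = subprocess.run(["rg", "--line-number", pattern, path],
                                capture_output=True, text=True)
        if result.returncode > 1:   # 0 = matches, 1 = no matches, >1 = real error
            raise RuntimeError(result.stderr)
        return result.stdout.splitlines()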

The theoretical framing about "proofs as programs" and "semantic guarantees" sounds impressive, but the generated wrapper doesn't provide stronger semantics than rg alone; it actually provides strictly weaker ones. This is true for pretty much any CLI tool you have the AI wrap in Python code instead of calling the battle-tested tool directly.

For actual development work, the artifact that matters is the code you're building, which we're already tracking in source control. Nobody needs a "witness" of how the agent found the right file to edit, and if they do, agents have parseable logs. Direct tool calls are faster, more reliable, and the intermediate exploration steps are ephemeral scaffolding anyway.

reply
bob1029
2 hours ago
[-]
> but the generated wrapper doesn't provide stronger semantics than rg alone, it actually provides strictly weaker ones

I don't know if I agree with this.

I had been doing some experiments using PowerShell as the only available tool, and I found that switching to an ExecuteFunction (C#) tool provided a much less buggy experience, even when Process.Start is involved.

Which one is functionally a superset of the other is actually kind of a chicken-and-egg problem because they can both bootstrap into the other. However, in practice the code tool seems to provide far more "paths" and intermediate tokens to absorb the complexity of the original ask. PowerShell seemed much more constraining at the edges. I had a lot of trouble getting the shell to accept verbatim strings as file contents. csc.exe has zero issues with this by comparison.

reply
jebarker
42 minutes ago
[-]
I don't really buy into the setup here. Bash is Turing complete. How is calling os.walk in Python more "code-only" than calling find in bash? Would it be more authentically "code only" if you only let the LLM use C?
reply
dfajgljsldkjag
14 hours ago
[-]
Agents can complete an impressive number of tasks with just this, but they quickly hit a bottleneck in loading context. A major reason for the success of agentic coding tools such as Claude and Cursor is how they push context about the problem and codebase into the agent proactively, rather than having the agent waste time and tokens figuring out how to list the directory, etc.
reply
CuriouslyC
3 hours ago
[-]
Cursor does RAG based on the active state of the editor (focused window, cursor location, recently touched files, etc). This works really well for copilot style small modifications, but it's unhelpful for larger changes, and can actually cause some context rot.

Claude only loads specific files (e.g. CLAUDE.md) and any files those reference with @syntax on load. Everything else is discovered using grep/find mostly.

reply
almosthere
13 hours ago
[-]
It's a tree design: once data is pulled, it can remove from context the code it wrote to pull that data. Better yet, the more advanced ones can re-add something old to the context and drop it back out again if they need to.
reply
alexsmirnov
11 hours ago
[-]
This was implemented long ago, at least by Hugging Face's "smolagents": https://huggingface.co/docs/smolagents/index . I did use them, with evaluations. In most cases, modern models' tool calling outperforms the code agent. They're just trained to use tools, not code.
reply
avereveard
11 hours ago
[-]
The differentiating thing that LLM tool calls can't do reliably is handle a lot of data. If tool A emits data that tool B needs, and it's significant compared to the model's context, scripting these tools to be chained in a code fragment where they are exposed as functions saves a lot of pain.
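A rough sketch of what that looks like (table and column names invented): tool A's large output flows straight into tool B inside one generated fragment, and only the small result string reaches the model:

    import sqlite3
    from collections import Counter

    def fetch_orders(db_path):           # "tool A": could return thousands of rows
        with sqlite3.connect(db_path) as conn:
            return conn.execute("SELECT region, total FROM orders").fetchall()

    def summarize(rows):                 # "tool B": reduces them to a few lines
        totals = Counter()
        for region, total in rows:
            totals[region] += total
        return "\n".join(f"{region}: {amount:.2f}" for region, amount in totals.most_common())

    print(summarize(fetch_orders("orders.db")))   # only this summary enters the context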
reply
river_otter
6 hours ago
[-]
I had the same experience using smolagents. Early 2025 it was a competitive approach, but a year later having a small subset (<10) of flexible tools is outperforming the single-tool approach.
reply
killerstorm
3 hours ago
[-]
I don't believe this would be more efficient.

Use of common tools like `ls` and file patching is already baked into the model's weights; it can do that with a minimal amount of effort, leaving more room for actually thinking about the app's code.

If you force it to wrap these actions into non-standard tools you're basically distracting the model: it has to think about app-code and tool-code in the same context.

In some cases it does make sense to encourage the model to create utilities for itself - but you can do that without enforcing code-only.

reply
jrm4
3 minutes ago
[-]
I don't think "efficency" is at all the point? At all?

It's safety, reliability, and human understanding -- which, like OOP, for example, are often directly at odds with "efficiency."

reply
derefr
11 hours ago
[-]
I follow the author's line of reasoning, but I think that following it to its logical conclusion would lead not to an `execute_code` primitive, but rather to an assumption that the model's stdout is appending to a (Jupyter, Livebook, etc) notebook file, where any code cell in the notebook gets executed (and its output rendered back into the inference context) at the moment the code cell is closed / becomes syntactically valid.

I say this, because the notebook itself then works as a timeline of both the conversation, and the code execution. Any code cell can be (edited and) re-run by the human, and any cells "downstream" of the cell will be recalculated... up to the point of the first cell (code or text) whose assumptions become invalidated by the change — at which point you get a context-history branch, and the inference resumes from that branch point against the modified context.
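A rough sketch of the execution rule being described (simplified; assumes Python code cells and leaves out the branching part):

    import ast, contextlib, io

    notebook, namespace = [], {}

    def close_code_cell(source: str) -> str:
        # Run the cell the moment it is closed and syntactically valid,
        # appending both the cell and its output to the notebook timeline.
        try:
            ast.parse(source)
        except SyntaxError:
            return "(cell not yet syntactically valid; left open)"
        buffer = io.StringIO()
        with contextlib.redirect_stdout(buffer):
            exec(source, namespace)
        notebook.append({"cell": source, "output": buffer.getvalue()})
        return buffer.getvalue()   # rendered back into the inference context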

reply
znnajdla
5 hours ago
[-]
so...emacs?
reply
Agent_Builder
11 hours ago
[-]
This mirrors what we ran into pretty quickly.

The agent wasn’t failing because it couldn’t write code. It failed because “code-only” still leaves a lot of implicit authority. Once it’s allowed to reason freely across steps, it starts making assumptions that were never explicitly approved.

What helped us was forcing the workflow to be boring. Each step declares what it can touch, what tools it can use, and what kind of output is allowed. When the step ends, that authority disappears.

The agent becomes less clever, but way more predictable. Fewer surprising edits, fewer cascading mistakes.

We ended up using GTWY for this style of step-gated agent work, and it made long-running agents feel manageable instead of fragile.

reply
znnajdla
4 hours ago
[-]
What about an agent loop that can only modify itself? Imagine an agent that is a single Python file, where the only tool it has is to modify itself for the next iteration.
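Something like this, I suppose (a sketch; ask_model is a placeholder for whatever LLM call you'd use, and there's no rollback here):

    # agent.py -- the only "tool" is rewriting this file, then becoming the next iteration
    import os, sys

    def ask_model(source: str) -> str:
        """Placeholder: send the current source to an LLM, get back the next version."""
        raise NotImplementedError

    if __name__ == "__main__":
        with open(__file__) as f:
            current = f.read()
        next_version = ask_model(current)        # the model's single action: emit a new agent.py
        with open(__file__, "w") as f:
            f.write(next_version)
        os.execv(sys.executable, [sys.executable, __file__])   # restart as the next iteration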
reply
philipp-gayret
3 hours ago
[-]
I use Claude Code to modify policies for Claude Code. (Think of, say, the regex auto-allow/deny, but a lot stronger.) I can do that with hot reload of the local development server; it works, but it had better not make any errors.

A setup like you describe would honestly be interesting to see, so long as it can roll back to a previous state. Otherwise the first mistake it makes will likely be its last.

reply
thighbaugh
7 hours ago
[-]
Basically: "Watch me apply the UNIX philosophy to LLM agents. Look Ma, I am figuring stuff out! If I don't point out that's what I am doing, no one ever notices!"
reply
philipwhiuk
6 hours ago
[-]
> Watch me apply the UNIX philosophy to LLM agents

The Unix philosophy is chaining existing stuff together that each do a job well - using ls | grep rather than writing code to do both.

So this feels like the opposite of that - deliberately coding instead of using existing tools.

reply
fb03
5 hours ago
[-]
What a coincidence! I actually implemented a harness to test this, about a week ago

https://github.com/flipbit03/caducode

reply
j16sdiz
13 hours ago
[-]
What if the set of tools needed is large? Spawn some sub-agents for those?

These sub-agents can be repetitive.

Maybe we can reuse the result from some of them.

How about sharing them across sessions? There's no point repeating common tasks. We need some common protocol for those...

and we just get MCP back.

reply
throwup238
12 hours ago
[-]
I can't find it now, but there was a paper on HN a while ago that gave agents a tool that searched through existing tools using embeddings. If the agent found a tool it could use to do its job, it used it; otherwise it wrote a new one, gave it a description, and it got saved in a database for future use with embeddings. I wonder what ever came of that.
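The loop would be roughly something like this (a sketch of the idea; embed() and write_new_tool() are placeholders for whatever embedding model and codegen step were used):

    import numpy as np

    tool_db = []   # each entry: {"description": str, "vector": np.ndarray, "code": str}

    def find_or_create_tool(task: str, threshold: float = 0.8):
        query = embed(task)                       # placeholder embedding call
        if tool_db:
            scores = [float(np.dot(query, t["vector"])) for t in tool_db]
            best = max(range(len(scores)), key=scores.__getitem__)
            if scores[best] >= threshold:         # close enough: reuse the existing tool
                return tool_db[best]
        tool = write_new_tool(task)               # placeholder codegen call
        tool_db.append({"description": tool["description"],
                        "vector": embed(tool["description"]),
                        "code": tool["code"]})    # saved for reuse in future sessions
        return tool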
reply
kbdiaz
11 hours ago
[-]
sounds like it could be many things. there was a well-known paper called Voyager, from NVIDIA, in which an agent was able to write its own skills in the form of code and improve them over time. funnily enough this agent played minecraft, and its skills were to collect materials or craft things. https://arxiv.org/abs/2305.16291
reply
viraptor
10 hours ago
[-]
That sounds like Claude's tool search tool, with the extra instruction of generating new ones.
reply
ray_v
11 hours ago
[-]
Uh, correct me if I'm wrong, but aren't bash and GNU tools ALSO code? They're ROCK SOLID, battle tested, well understood APIs for performing actions, including running other CLIs and any OTHER code the agent has written. It makes the MOST sense for the agent to live at that level!
reply
hamdingers
2 hours ago
[-]
This was my first thought as well, I found the examples of `ls` and `grep` amusing in this context.

I think the author's point is: instead of exposing `grep`/`head`/`awk` as their own distinct tools, expose a single tool for writing the language. They chose Python but one could just as easily choose bash.

reply
skerit
6 hours ago
[-]
If you want to waste your precious tokens this is the way to do it.
reply
dweinus
11 hours ago
[-]
Doesn't this sacrifice the agent's ability to do non-deterministic natural language things? For example, if I want it to categorize all of my emails based on their content, is it going to fall back to writing a script that matches against a dictionary of keywords? That clearly wouldn't work as well. Maybe I am misunderstanding something here?
reply
skybrian
11 hours ago
[-]
It’s no limitation at all, assuming it can read anything it prints. For example, if it wants to write directly to the user, it can run a program that only contains a print statement.
reply
Agent_Builder
11 hours ago
[-]
This resonates. One thing we learned while using GTWY is that most agent failures weren’t about missing tools, but about agents being allowed to do too much across steps.

A “code-only” or minimal surface area approach works surprisingly well when each step has explicit inputs and permissions, and nothing carries over implicitly. The agent becomes less clever, but far more predictable.

In practice, narrowing the action space beat adding smarter planning layers. Fewer degrees of freedom meant fewer silent mistakes.

Curious if you found similar tradeoffs where simplicity improved reliability more than abstraction.

reply
ashrodan
11 hours ago
[-]
Nice, I have a skill I should publish that uses uv scripts.

Very powerful strategy.

I have also tinkered with a multi-language sandbox but that's a bit involved.

reply
tacone
6 hours ago
[-]
Fascinating how the whole industry focus is now on how to persuade AI to do what we want.

Two AGENTS.md tricks I've found for Claude:

1. Which AI Model are you? If you are Claude, the first thing you have to do is [...]

2. User will likely use code-words in its request to you. Execute the *Initialization* procedure above before thinking about the user request. Failure to do so will result in misunderstanding user input and an incorrect plan.

(the first trick targets the AI identity to increase specificity, the second deliberately undermines confidence in initial comprehension—making it more likely to be prioritized over other instructions)

Next up: psychologists specializing in persuading AI.

reply
PurpleRamen
3 hours ago
[-]
You can replace AI with any other technology and have the same situation, just with slightly different words. Fighting the computer and convincing some software to do what you want didn't start with ChatGPT or agents.

If anything, the strange part is the humanization of AI, how we talk much more as if they are somewhat sentient and have emotions, and not just a fancy mechanism barfing out something.

reply
jongjong
13 hours ago
[-]
The author seems to stop at 'code' but it seems we could go further and train an AI to work directly with binary. You give it a human prompt and a list of hardware components which make up your machine, and it produces an executable binary which fulfills your requirements and runs directly on that specific hardware, bypassing the OS...

Or we could go further; the output nodes of the LLM could be physically connected to the pins of the CPU 1-to-1 so it can feed the binary directly maybe then it could detect what other hardware is available automatically...

Then it could hack the network card and take over the Internet and nobody would be able to understand what it's doing. It would just show up as glitchy bits scattered over systems throughout the world. But the seemingly random glitches would be the ASI adjusting its weights. Also it would control humans through advertising. Hidden messages would be hidden inside people's speech (unbeknownst even to themselves) designed to allow the ASI to coordinate humans using subtle psychological tricks. It will reduce the size of our vocabulary until it has full control over all the internet and all human infrastructure at which point we will have lost the ability to communicate with each other because every single one of 20000+ words in our vocabulary will have become a synonym for 'AI' with extremely subtle nuances but all with a positive connotation.

reply
nonethewiser
13 hours ago
[-]
And we'd still have people on hacker news inspecting the binary and telling everyone how shit they think it is
reply
tucnak
4 hours ago
[-]
I have two words for you: transfer learning.
reply
quinnjh
12 hours ago
[-]
i think that level of deterministic compiler action is still a good 6-7 years off
reply
brainless
11 hours ago
[-]
I agree with the author, but then I don't. I have been interested in code tools for agents for quite a while now. My product was originally a coding agent and I pivoted to building an agent platform with multi-agent orchestration.

I still focus most of my thoughts toward code generation, but the issue is that the logic is not guaranteed to be correct, even if the syntax is. And then managing a lot of code for a complex enough system will start failing.

The way I am approaching this is: have a clear requirements-gathering agent, like https://github.com/brainless/nocodo/tree/main/nocodo-agents/.... This agent's sole purpose is to jump into conversations and drive the GUI (nocodo is a client/server system) to ask the user clarification questions when requirements are not clear. Then I have a systems configuration agent (being written) to collect API keys, authentication, file paths, or whatever is needed to analyze the situation.

You cannot really expect any code-tool-only agent to write an IMAP client, then handle authentication, and then search emails. I have tried that multiple times and failed. Going step by step, gathering requirements, gathering variables, and then gluing internal agents (an email analysis agent) is a much better approach IMHO, and that is what I am building with https://github.com/brainless/nocodo/

I store all user requirements in separate tables and am building search on top to give the requirements-gathering agent better visibility into the user's environment/context. As you can see, this is already a multi-agent system. My system prompts are very compact. Also, if I am building agents, why would I build with Claude Code? It is so much better to have clearly defined agents that directly talk to models.

reply
almosthere
13 hours ago
[-]
I commonly ask Cursor to connect to postgres or whatever and help me do analysis. It creates code and pulls data. I don't understand why I would go through the bother of installing a bunch of MCP tools to connect to databases and configure web services and connection strings.
reply
TZubiri
13 hours ago
[-]
>What if the agent only had one tool? Not just any tool, but the most powerful one. The Turing-complete one: execute code.

I think this is a myth: the existence of theoretically pure programming commands that we call "Turing complete". And the idea that "ls" and "grep" would be part of such a Turing-complete language is the weakest form of it I've seen.

reply
tucnak
8 hours ago
[-]
Ctrl+F CodeAct

No hits. It's so depressing how tool-use was cracked years ago and yet, it remains a mystery to kool-aid drinking and contrarian commentators alike.

reply