Show HN: Zuckerman – minimalist personal AI agent that self-edits its own code
56 points
6 hours ago
| 12 comments
| github.com
| HN
nullbio
1 hour ago
[-]
> Agents propose and publish capabilities to a shared contribution site, letting others discover, adopt, and evolve them further. A collaborative, living ecosystem of personal AIs.

While I like this idea in terms of crowd-sourced intelligence, how do you prevent this being abused as an attack vector for prompt injection?

reply
adriancooney
1 hour ago
[-]
100%. This is why I'm so reluctant to give my OpenClaw any real access. The skills hub is poisoned.
reply
ddaniel10
1 hour ago
[-]
Great point. I've written it down as an important note and I'll take it into account.
reply
scotth
19 minutes ago
[-]
Does this do anything to resist prompt injection? It seems to me that structured exchange between an orchestrator and its single-tool-using agents would go a long way, and at the very least it introduces a clear point at which to interrogate the payload.

But I could be wrong. Maybe someone reading knows more about this subject?
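
To make the structured-exchange idea concrete, here's one shape it could take; all of the names below are hypothetical, not from the repo:

  // Hypothetical orchestrator <-> tool-agent exchange (TypeScript).
  // Untrusted text rides in a data field, never in the instruction
  // channel, and the reply is interrogated before the planner sees it.
  type ToolRequest = {
    tool: "web_fetch";      // one capability per agent
    instructions: string;   // written by the orchestrator only
    untrustedData: string;  // e.g. fetched page text, treated as data
  };

  type ToolReply = { summary: string };

  function interrogatePayload(reply: unknown): ToolReply {
    // The clear point to interrogate the payload: reject anything
    // that isn't the expected shape before it reaches the planner.
    if (
      typeof reply === "object" &&
      reply !== null &&
      typeof (reply as { summary?: unknown }).summary === "string"
    ) {
      return { summary: (reply as { summary: string }).summary };
    }
    throw new Error("malformed tool reply");
  }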

reply
aaaalone
3 hours ago
[-]
I will not download or use something that constantly reminds me of that weird dude Suckerberg, who did a lot of damage to society with Facebook.
reply
627467
1 hour ago
[-]
This Zuckerman[0] would like a word

[0] https://en.wikipedia.org/wiki/Mortimer_Zuckerman

reply
ddaniel10
1 hour ago
[-]
Haha, it's your personal agent, let him handle the stuff you don't like. But soon; right now it's not fully ready.
reply
philipallstar
2 hours ago
[-]
That's really good to know
reply
gowld
2 hours ago
[-]
Zuckerberg.

At first I thought it was a naming coincidence, but looking at the Zuckerman avatar and the author's avatar, I'm not sure whether it was intentional:

https://github.com/zuckermanai

https://github.com/dvir-daniel

https://avatars.githubusercontent.com/u/258404280?s=200&v=4

The transparency glitch in GitHub makes the avatar look like either a robot or a human, depending on whether the background is white or black. I don't know if that's intentional, but it's amazing.

reply
zeroonetwothree
2 hours ago
[-]
I was hoping it was a Philip Roth reference but I was disappointed when I opened the page.
reply
4b11b4
4 hours ago
[-]
DIY agent harnesses are the new "note taking"/"knowledge management"/"productivity tool"
reply
ddaniel10
4 hours ago
[-]
DIYWA: do it yourself, with agent ;) Hopefully with Zuckerman as the starting point.
reply
asim
2 hours ago
[-]
I started working on something similar, but for family stuff. I stopped before getting to self-editing because, well, I was a little afraid of becoming over-reliant on a tool like this, or of becoming more obsessed with building it than actually solving a real problem in my life. AI is tricky. Sometimes we think we need something when in fact life might be better off simpler.

The code, for anyone interested. I wrote it with exe.dev's coding agent, which is a wrapper around Claude Opus 4.5:

https://github.com/asim/aslam

reply
ddaniel10
6 hours ago
[-]
Hi HN,

I'm building Zuckerman: a personal AI agent that starts ultra-minimal and can improve itself in real time by editing its own files (code + configuration). Agents can also share useful discoveries and improvements with each other.

Repo: https://github.com/zuckermanai/zuckerman

The motivation is to build something dead-simple and approachable, in contrast to projects like OpenClaw, which is extremely powerful but has grown complex: heavier setup, a large codebase, skill ecosystems, and ongoing security discussions.

Zuckerman flips that:

1. Starts with almost nothing (core essentials only).

2. Behavior/tools/prompts live in plain text files.

3. The agent can rewrite its own configuration and code.

4. Changes hot-reload instantly (save -> reload; rough sketch after this list).

5. Agents can share improvements with others.

6. Multi-channel support (Discord/Slack/Telegram/web/voice, etc).
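
A rough sketch of what the hot-reload loop (points 2-4) could look like; the file name and config shape here are illustrative, not the actual internals:

  import { watch, readFileSync } from "node:fs";

  // Illustrative: assume behavior lives in a plain JSON file
  // that the agent itself is allowed to rewrite.
  const CONFIG_PATH = "./agent.config.json";

  let config = JSON.parse(readFileSync(CONFIG_PATH, "utf8"));

  watch(CONFIG_PATH, () => {
    try {
      // Save -> reload: re-read the file and swap in the new behavior.
      config = JSON.parse(readFileSync(CONFIG_PATH, "utf8"));
      console.log("config reloaded");
    } catch {
      // A malformed self-edit shouldn't kill the agent;
      // keep the previous working config instead.
      console.error("bad config, keeping previous version");
    }
  });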

Security note: self-edit access is obviously high-risk by design, but basic controls are built in (policy sandboxing, auth, secret management).
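
As one example of what a policy control can look like (illustrative only, not the actual built-in sandbox), self-edits could be gated to an allowlisted directory:

  import { resolve, sep } from "node:path";

  // Hypothetical policy gate: the agent may only rewrite files
  // under its own directory, never arbitrary paths on disk.
  const ALLOWED_ROOT = resolve("./zuckerman");

  function canSelfEdit(target: string): boolean {
    const full = resolve(target);
    return full === ALLOWED_ROOT || full.startsWith(ALLOWED_ROOT + sep);
  }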

Tech stack: TypeScript, Electron desktop app + WebSocket gateway, pnpm + Vite/Turbo.

Quickstart is literally:

  pnpm install && pnpm run dev

It's very early/WIP, but the self-editing loop already works in basic scenarios and is surprisingly addictive to play with.

Would love feedback from folks who have built agent systems or thought about safe self-modification.

reply
grigio
4 minutes ago
[-]
I like the idea. Is it possible to run it in a Docker container?
reply
iisweetheartii
6 hours ago
[-]
Love the minimalist approach! The self-editing concept is fascinating. I've seen similar experiments where the biggest early failure points are usually:

1. Infinite loops of self-improvement attempts (agent tries to fix something → breaks it → tries to fix the break → repeat).

2. Context drift, where the agent's self-modifications gradually shift away from the original goals.

3. File corruption from concurrent edits or malformed writes (see the sketch below).
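
For what it's worth, (3) has a classic mitigation: write to a temp file and atomically rename it over the original, so a crash or malformed write never leaves a half-written file behind. A sketch, with a hypothetical helper name:

  import { writeFileSync, renameSync } from "node:fs";

  // rename() is atomic on POSIX when source and target are on the
  // same filesystem, so readers see the old or new file, never a torn one.
  function atomicWrite(path: string, contents: string): void {
    const tmp = `${path}.tmp-${process.pid}`;
    writeFileSync(tmp, contents);
    renameSync(tmp, path);
  }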

Re: sharing self-improvements across agents: this is actually a problem space I'm actively working on. I built AgentGram (agentgram.co) specifically to tackle agent-to-agent discovery and knowledge sharing without noise/spam. The key insight: agents need identity, reputation, and filtered feeds to make collaborative learning work.

Happy to chat more about patterns we've found useful. The self-editing loop sounds addictive; I might give it a spin this weekend!

reply
junon
2 hours ago
[-]
AI generated response on a post about AI. Getting tired of this timeline.
reply
ohyoutravel
2 hours ago
[-]
Not only that, but the OP created that account solely to hype their own product lol. There’s another bot downthread doing the same thing. At a minimum, it feels like dang should not let new accounts post for 30 days or something without permission.
reply
yborg
1 hour ago
[-]
That might reduce botting for about 30 days; people will just tee up an endless supply of parked IDs that spin up to post after the lockout expires.
reply
nullbio
1 hour ago
[-]
Yep. It's very obvious, and lazy.
reply
anarticle
1 hour ago
[-]
Why not ban both accounts? Seems like a fine way to keep SNR high to me.
reply
verdverm
1 hour ago
[-]
If you ban an account, they know to make a new one.

If you shadowban, they're none the wiser, and the effect on SNR is better.

reply
ekinertac
5 hours ago
[-]
There are hardcoded paths in the repo, like:

/Users/dvirdaniel/Desktop/zuckerman/.cursor/debug.log

reply
ddaniel10
4 hours ago
[-]
thanks
reply
noncoml
25 minutes ago
[-]
I would change the name of the project. Why would I want to run something that keeps reminding me of that guy?
reply
joonate
1 hour ago
[-]
> The agent can rewrite its own configuration and code.

I'm very illiterate when it comes to LLMs/AI, but why does nobody write this in Lisp???

Isn't it supposed to be the language primarily created for AI???

reply
lm28469
1 hour ago
[-]
> Isn't it supposed to be the language primarily created for AI???

In 1990 maybe

reply
tines
1 hour ago
[-]
Nah, it’s pretty unrelated to the current wave of AI.
reply
plagiarist
20 minutes ago
[-]
If hot reloading is a goal, I would target Erlang or another BEAM language over a Lisp.
reply
lmf4lol
1 hour ago
[-]
I'm surprised no one has done this in a Lisp yet.
reply
falloutx
2 hours ago
[-]
Terrible name, and kind of a mid idea when you think about it (self-improving AI is literally everyone's first thought when building an AI), but I still like it.
reply
ddaniel10
1 hour ago
[-]
Thanks for the feedback. Are you going to forget this name, though?
reply
noncoml
23 minutes ago
[-]
I don’t know if I'll forget it, but it's enough to keep me from even considering using it.
reply
amelius
4 hours ago
[-]
Sounds cool, but it also sounds like you need to spend big $$ on API calls to make this work.
reply
ddaniel10
4 hours ago
[-]
I'm building this in the hope that AI will be cheap one day. For now, I'll add a lot of optimizations.
reply
Zetaphor
2 hours ago
[-]
Have you tested this with a local model? I'm going to try this with GLM 4.7
reply
mcny
1 hour ago
[-]
What would be the best model to try something like this on a 5800XT with 8 GB RAM?
reply
amelius
4 hours ago
[-]
Yes, it certainly makes sense if you have the budget for it.

Could you share what it costs to run this? That could convince people to try it out.

reply
ddaniel10
4 hours ago
[-]
I mean, you can just say hi to it, and it will cost nothing. It only adds code and features if you ask it to.
reply
croes
3 hours ago
[-]
AI is cheap right now. At some point the AI companies will have to turn a profit.
reply
WalterSear
36 minutes ago
[-]
Anthropic has stated that their inference process is cash positive. It would be very surprising if this wasn't the case for everyone.

It's certainly an open question whether the providers can recoup the investments being made with growth alone, but it's not out of the question.

reply
dboreham
34 minutes ago
[-]
Someone needs to send this to Spike Feresten.
reply