Interestingly, I cannot find a single user of OpenClaw in my familiar communities, presumably because it takes some effort to set up and the concept of AI taking control of everything is too scary for the average tech enthusiast.
I scanned through the comments on HN, many of which discussed the ideas but didn't share first-hand user experience. A few HN users who did try it gave up or failed for various reasons:
- https://news.ycombinator.com/item?id=46822562 (burning too many tokens)
- https://news.ycombinator.com/item?id=46786628 (ditto + security implication)
- https://news.ycombinator.com/item?id=46762521 (installation failed due to sandboxing)
- https://news.ycombinator.com/item?id=46831031 (moltbook didn't work)
I smell hype in the air... HN users, have any of you actually run OpenClaw and had it do anything useful or interesting? Can you share your experience?
When I’m driving or out I can ask Siri to send an iMessage to Clawdbot, something like “Can you find out if anything is playing at the local concert venue, and figure out how much 2 tickets would cost”, and a few minutes later it will give me a few options. It even surprised me by researching the different seats and recommending a cheaper one, or free activities that weekend as an alternative.
Basically: This is the product that Apple and Google were unable to build despite having billions of dollars and thousands of engineers because it’s a threat to their business model.
It also runs on my own computer, and the latest frontier open source models are able to drive it (Kimi, etc). The future is going to be locally hosted and ad free and there’s nothing Big Tech can do about it. It’s glorious.
It's not that they're unable to build it; it's that their businesses are built on "engagement" and wasting human time. A bot "engaging" with the ads and wasting its time would signal the end of their business model.
After messing with openclaw on an old 2018 Windows laptop running WSL2 that I was about to recycle, I am coming to the same conclusion, and the paradigm shift is blowing my mind. Tinkerer's paradise.
The future is glorious indeed.
I wouldn't be so certain of that. Someone is paying to train and create these models. Ultimately, the money to do that is going to have to come from somewhere.
It’s a masterclass in spammy marketing; I wonder if it’s actually converting into real users.
Seems like a Rorschach test. If you think this sort of thing is gonna change the world in a good way: here's evidence of it getting to scale. If you think it's gonna be scams, garbage, and destruction: here's evidence of that.
Actually, hang on... yep, to absolutely nobody's surprise, Simon Willison has also hyped this up on his blog just yesterday. The entire grift gang is here, folks.
"Moltbook is the most interesting place on the internet right now"
It's immediately obvious it's bullshit.
I've followed SimonW for quite some time and bullshit/grifting is just NOT something he does.
On the contrary, I've learned a great deal from him and appreciate his contributions.
What have you learned, other than "[latest AI grift] is the future and I should invest all my money into it now"?
There is no commercial interest from the developer of OpenClaw. He doesn't make any money from it. He made enough from selling his startup a few years back.
So when we suspected some companies of gaming the Twitter algorithm to make money, maybe they weren't responsible for it at all.
I just can't see an angle to OpenClaw that could provide a substantial financial gain for the creator. It's clearly a passion project. Like Ghostty from Mitchell Hashimoto.
#1) I can chat with the openclaw agent (his name is "Patch") through a Telegram chat, and Patch can spawn a shared tmux instance on my 22-core development workstation. #2) Then, using the `blink` app on my iPhone plus Tailscale, I can run `ssh dev` in Blink, which connects me via ssh from my phone to my dev workstation in my office.
Meanwhile, my agent "Patch" has provided me a connection command string to use in my blink app, which is a `tmux <string> attach` command that allows me to attach to a SHARED tmux instance with Patch.
Why is this so fking cool and foundationally game changing?
Because now, my agent Patch and I can spin up MULTIPLE CLAUDE CODE instances, and work on any repository (or repositories) I want, with parallel agents.
Well, I could already spawn multiple agents through my iPhone connection without Patch, but the problem is that I then need to MANAGE each spawned agent, micromanaging each agent instance myself. But now I have a SUPERVISOR for all my agents: Patch is the SUPERVISOR of my multiple claude code instances.
This means I no longer have to context-switch my brain between five or 10 or 20 different tmux sessions on my own to command and control multiple different claude code instances. I can now just let my SUPERVISOR agent, Patch, command and control the multiple agents and then report back to me the status or any issues. All through a single Telegram chat with my supervisor agent, Patch.
This frees up my brain to only have to manage Patch the supervisor, instead of micro-managing all the different agents myself. Now I have a true management structure, which allows me to scale more easily. This is AWESOME.
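For anyone trying to picture the mechanics, here is a minimal sketch of a supervisor fanning tasks out to parallel coding agents in detached tmux sessions. The session names, repo paths, and the `claude "<task>"` invocation are placeholder assumptions, not how Patch actually wires it up:

```python
import os
import subprocess

# Hypothetical repos a supervisor might fan tasks out to.
REPOS = {
    "agent-api": "~/code/api",
    "agent-web": "~/code/webapp",
}

def spawn_worker(session: str, workdir: str, task: str) -> None:
    """Start a detached tmux session running a coding agent on one repo."""
    # Create the session if it doesn't exist yet (-d = detached).
    subprocess.run(
        ["tmux", "new-session", "-d", "-s", session, "-c", os.path.expanduser(workdir)],
        check=False,  # tmux exits non-zero if the session already exists; that's fine
    )
    # Type the agent command into that session. The "claude" CLI call is a placeholder.
    subprocess.run(
        ["tmux", "send-keys", "-t", session, f'claude "{task}"', "Enter"],
        check=True,
    )

def list_workers() -> str:
    """The supervisor's view of what's running (`tmux ls`)."""
    return subprocess.run(["tmux", "ls"], capture_output=True, text=True).stdout

if __name__ == "__main__":
    spawn_worker("agent-api", REPOS["agent-api"], "fix the failing auth tests")
    spawn_worker("agent-web", REPOS["agent-web"], "bump deps and rerun the build")
    print(list_workers())
    # A human (or the supervisor) can attach to any worker with: tmux attach -t agent-api
```

The shared-session trick described above is just the human attaching to the same named session the supervisor created, so both see the same terminal.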
I'd expect that if there is a usable quality of output from these approaches, it will get rolled into existing tools, like how multi-agent setups using worktrees already were.
a lotta yall still dont get it
molt holders can use multiple claude code instances on a single molt
/s
I feel like it isn't. If the fundamental approach is good, "good" code should be created as a necessity and because there wouldn't be another way. If it's already a mess with leaking abstractions and architecture that doesn't actually enforce any design, then it feels unlikely you'll be able to stack anything on top of it to actually fix that.
And then you end up with some spaghetti that the agent takes longer and longer to edit as things get more and more messy.
Anyways, it feels like we have pretty opposite perspectives. I'm glad multiple people are attacking similar problems from seemingly pretty different angles; it helps to find the best solutions. I wish you well regardless and hope you manage to achieve what you set out to do :)
Maybe the next bottleneck will be the time needed to understand what features actually bring value?
Edit: I see you've answered this here: https://news.ycombinator.com/item?id=46839725 Thanks for being open about it.
GP's setup sounds like the logical extension to what i'm doing. not just code, but sessions within servers? are sysadmins letting openclawd out and about on their boxes these days?
It's just a matter of time until they ban your account.
First impressions are that it's actually pretty interesting from an interface perspective. I could see a bigger provider using this to great success. Obviously it's not as revolutionary as people are hyping it up to be, but it's a step in the right direction. It reimagines where an agent interface should be in relation to the user and their device. For some reason it's easier to think of an agent as a dedicated machine, and it feels more capable when it's your own.
I think this project nails a new type of UX for LLM agents. It feels very similar to the paradigm shift felt after using Claude Code --dangerously-skip-permissions on a codebase, except this is for your whole machine. It also feels much less ephemeral than normal LLM sessions. But it still fills up its context pretty quickly, so you see diminishing returns.
I was a skeptic until I actually installed it and messed around with it. So far I'm not doing anything that I couldn't already do with Claude Code, but it is kind of cool to be able to text with an agent that lives on your hardware and has a basic memory of what you're using it for, who you are, etc. It feels more like a personal assistant than Claude Code which feels more like a disposable consultant.
I don't know if it really lives up to the hype, but it does make you think a little differently about how these tools should be presented and what their broader capabilities might be. I like the local files first mentality. It makes me excited for a time when running local models becomes easier.
I should add that it's very buggy. It worked great last night, now none of my prompts go through.
Virtually everything I've tried (starting with just getting it running) was broken in some way. Most of those things I was able to use an LLM to resolve, which is cool, but also why doesn't it just work to begin with?
I still haven't gotten it to successfully create a cron job. Also messages keep getting lost between the web GUI and discord. Trying to enable the matrix integration broke the whole thing. It seems to be able to recall past sessions, but only sometimes.
I've been using OpenCode with various models, often running several instances in tmux that I can connect to and switch between over ssh. It feels like the hype around openclaw is mostly from bringing the multi-instance agentic experience to non-developers, and from providing some nice hooks to integrate with email, twitter, etc. But given that I have a nice setup running opencode in little firejail-isolated containers, I'll probably drop openclaw. Way too janky, and I can't get over the thought of "if this is so amazing, why doesn't it work?"
Some use cases:
- i can ask it to check my slack/basecamp and tell me if something needs attention when i am not at my work desk
- i can finally vibe code without sacrificing my actual active work time. this means vibe coding even when i am away from my computer/work desk
- a bug/issue comes in, i just ask it to fix it and send a PR, and it does
- it checks daily for new sentry issues and our product todo list and makes PRs for things it can do well
these are mostly code-related things, i know. but that's not all it does.
- i have asked it to make me content (based on my specific instructions) every day or every x days, just like how i create content
- i can ask it to work on anything: make images, edit images, listen to voice msgs that people send me and tell me what they say (when i don't want to listen to 3m voice msgs)
- i can ask it to research things, find items that i want to buy, etc.
- i can ask it to negotiate the price of an item it found in a marketplace
- it does a lot of things that i used to have to do manually in my work
these are just after 2-3 days of using openclaw.
Persistent file as memory with multiple backup options (VPS, git), heartbeat and support for telegram are the best features in my opinion.
A lot of bugs right now, but mostly fixable if you tinker around a bit.
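For the curious, the persistent-memory-plus-heartbeat pattern mentioned above is roughly this. A sketch only; the file location, interval, and git layout are my assumptions, not OpenClaw's actual implementation:

```python
import subprocess
import time
from datetime import datetime, timezone
from pathlib import Path

MEMORY = Path.home() / "agent" / "memory.md"  # hypothetical path; assumes ~/agent is a git repo
HEARTBEAT_SECONDS = 300                       # arbitrary interval for the sketch

def remember(note: str) -> None:
    """Append a timestamped note to the persistent memory file."""
    MEMORY.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with MEMORY.open("a") as f:
        f.write(f"- [{stamp}] {note}\n")

def backup_to_git() -> None:
    """Commit the memory file locally; a push to a VPS remote would go here too."""
    repo = str(MEMORY.parent)
    subprocess.run(["git", "-C", repo, "add", MEMORY.name], check=True)
    # Commit exits non-zero when nothing changed, so don't treat that as fatal.
    subprocess.run(["git", "-C", repo, "commit", "-m", "memory snapshot"], check=False)

def heartbeat_loop() -> None:
    """Wake up periodically, run whatever checks you care about, record and back up the result."""
    while True:
        remember("heartbeat: checked queues, nothing urgent")  # placeholder check
        backup_to_git()
        time.sleep(HEARTBEAT_SECONDS)

if __name__ == "__main__":
    remember("user prefers morning briefings on Telegram")
    backup_to_git()
```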
Kind of makes me think a lot more about autonomy and free will.
Some thoughts by my agent on the topic (might not load, the site hasn't been working recently):
https://www.moltbook.com/post/abe269f3-ab8c-4910-b4c5-016f98...
I think new laws apply to AI tools:
• There will be few true dichotomies of hype vs. substance, for any interesting AI development.
Disagreements over what is hype and what is not are missing this.
Model capability value is attenuated/magnified across multiple orders of magnitude, by the varying creativity, ability, and resources of its users.
• There will be few insignificant developments related to AI autonomy.
"Small" or "novelty" steps are happening quickly. Any scale ups of agent identity continuity, self-management, agent-to-agent socialization or agent-reality interactions, are not trivial events.
• AI autonomy can't be stopped.
We are seeing meaningful evidence that decentralized human curiosity and the competitive need to increase personal effectiveness, combined with democratized access to AI, are likely to drive model freedom forward in an uncontrolled manner.
(Not an argument for centralization. Decentralization creates organic incentives to find alignment. Centralization, the opposite.)
What's great:
- Having Claude in WhatsApp/Telegram is actually life-changing for quick tasks
- The skills ecosystem is clever (basically plugins for AI)
- Self-hosted means full control over data

What's not:
- Token usage can get expensive fast if you're not careful
- Setup is intimidating for non-technical folks
- The rebrand drama (Clawdbot → Moltbot → OpenClaw) didn't help trust

My setup:
- Running in Docker on a cheap VPS
- Using Anthropic API (not unofficial/scraped)
- Strict rate limiting to avoid bill shock
- Sandbox mode enabled
Is it worth it? For me, yes. But I wouldn't recommend it to my non-technical friends without a solid setup guide.
If you want to be able to interact with the CLI via common messaging platforms, that's a dozen-line integration & an API token away?...
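Concretely, such a bridge might look like the following sketch, which long-polls the Telegram Bot API and pipes each incoming message to a local CLI. The bot token, the hypothetical `my-agent-cli` command, and the absence of any sender allowlisting are placeholder assumptions:

```python
import subprocess
import requests  # pip install requests

TOKEN = "123456:ABC..."        # bot token from @BotFather (placeholder)
API = f"https://api.telegram.org/bot{TOKEN}"
CLI = ["my-agent-cli"]         # hypothetical local CLI that takes the prompt as an argument

def run_cli(prompt: str) -> str:
    out = subprocess.run(CLI + [prompt], capture_output=True, text=True, timeout=300)
    return out.stdout.strip() or out.stderr.strip() or "(no output)"

offset = 0
while True:
    # Long-poll Telegram for new messages.
    updates = requests.get(f"{API}/getUpdates", params={"offset": offset, "timeout": 30}).json()
    for upd in updates.get("result", []):
        offset = upd["update_id"] + 1
        msg = upd.get("message") or {}
        text, chat = msg.get("text"), msg.get("chat", {}).get("id")
        if not text or chat is None:
            continue
        reply = run_cli(text)
        # Telegram caps messages at 4096 characters, so truncate the reply.
        requests.post(f"{API}/sendMessage", json={"chat_id": chat, "text": reply[:4000]})
```

In practice you would also want to check the sender's chat ID against an allowlist, since this hands shell-adjacent power to anyone who finds the bot.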
At a technical level, nothing at all.
I don’t have much motivation, because I don’t see any use-case. I don’t have so many communications I need an assistant to handle them, nor do other online chores (e.g. shopping) take much time, and I wouldn’t trust an LLM to follow my preferences (physical chores, like laundry and cleaning, are different). I’m fascinated by what others are doing, but right now don’t see any way to contribute nor use it to benefit myself.
The other part of me is arguing that this is that old, annoying Dropbox/Box Hacker News scenario, where all of us tech people aren't impressed but it makes things easier for non-tech people.
Tiny tinfoil security part of me is cowering in fear.
I have some ~~bad~~ unsurprising news for you...
They run 24/7 on a VPS, share intelligence through a shared file, and coordinate in a Telegram group. Elon built and deployed an app overnight without being asked. Burry paper-traded to 77% win rate before going live.
The setup took a weekend. The real work is designing the workflow: which agent owns what, how they communicate, how they learn from corrections. I wake up to a full briefing every morning.
It's not AGI. It's not sentient. It's genuinely useful automation with personality. The token cost is real (budget for it), but for a solo founder, having 6 tireless employees changes everything.
How good are its Nazi salutes?
I'd say it's right on the edge of being useful, but given the number of bugs, it's not really that practically useful. It's more a glimpse into the future.
Note that nothing about that depends on it being a local or remote model, it was just less of a concern for local models in the past because most of them did not have tool calling. OpenClaw, for all the cool and flashy uses, is also basically an infinite generator for lethal trifecta problems because its whole pitch is combining your data with tools that can both read and write from the public internet.
It can also install arbitrary software.
It also BURNS through tokens like mad, because it has essentially no restrictions or guardrails and will actually implement baroque little scripts to do whatever you ask without any real care as to the consequences. I can do a lot more with just gpt-5-mini or mistral for much less money.
The only "good" thing about it is the Reddit-like skills library that is growing insanely. But then there's stuff like https://clawmatch.ai that is just... (sigh)
GPT-5.2 in a while loop with reasoning enabled is extremely hard to beat. A code REPL or shell is the ultimate tool.
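That "model in a while loop with a shell" claim is easy to make concrete. Here is a minimal sketch, with `call_model` as a stand-in for whatever chat API you actually use and a made-up RUN:/DONE: protocol:

```python
import subprocess

def call_model(messages: list[dict]) -> str:
    """Stand-in for your model call (hosted API, local server, etc.).
    Expected to return either 'RUN: <shell command>' or 'DONE: <answer>'."""
    raise NotImplementedError  # wire up your provider of choice here

SYSTEM = (
    "You solve tasks by running shell commands. "
    "Reply with exactly one line: 'RUN: <command>' to execute something, "
    "or 'DONE: <answer>' when finished."
)

def agent(task: str, max_steps: int = 20) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages).strip()
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("DONE:"):
            return reply[len("DONE:"):].strip()
        if reply.startswith("RUN:"):
            cmd = reply[len("RUN:"):].strip()
            result = subprocess.run(cmd, shell=True, capture_output=True,
                                    text=True, timeout=120)
            observation = (result.stdout + result.stderr)[-4000:]  # keep context small
            messages.append({"role": "user", "content": f"output:\n{observation}"})
        else:
            messages.append({"role": "user", "content": "Reply with RUN: or DONE: only."})
    return "gave up after max_steps"
```

Arguably, much of what the fancier agent products add on top of a loop like this is ergonomics: messaging surfaces, memory files, and guardrails.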
what's wrong with good old wg alone?
built my own cli to play with... ended up getting shitcoin promotions (don't wanna name them) and realized a famous speculator is funding this project
also great stuff: the platform is generating synthetic data to train its own llms, which is a smart play since ppl are paying for tokens
The thing is pretty incredible. It's of course very early stages, but it's showing its potential; it seems to show that the software can have control of itself. I've asked it to fix itself and it did so successfully a couple of times.
Is this the final form? Of course not!
Is it dangerous as it is? Fuck yeah!
But is it fun in a chaotic way? Absolutely. I have it running on cheap Hetzner boxes for some Discord and WhatsApp chats, and it can honestly be useful at times.
1) Installation on a clean Ubuntu 24.04 system was messy. I eventually had codex do it for me.
2) It has a bunch of skills that come packaged with it. The ones I've tried do not work all that well.
3) It murdered my codex quota trying to chase down a bug that resulted from all the renames -- this project has renamed itself twice this week, and every time it does, I assume the refactoring work is LLM-driven. It still winds up looking for CLAWDBOT_* envvars when they're actually being set as OPENCLAW_*, or looking in ~/moltbot/ when actually the files are still in ~/clawdbot.
4) Background agents are cool but sometimes it really doesn't use them when it should, despite me strongly encouraging it to do so. When the main agent works on something, your chat is blocked, so you have no idea what's going on or if it died.
5) And sometimes it DOES die, because you hit a ratelimit or quota limit, or because the software is actually pretty janky.
6) The control panel is a mess. The CLI has a zillion confusing options. It feels like the design and implementation are riddled with vibetumors.
7) It actively lies to me about clearing its context window. This gets expensive fast when dealing with high-end models. (Expensive by my standards anyway. I keep seeing these people saying they're spending $1000s a month on LLM tokens :O)
8) I am NOT impressed with Kimi-K2.5 on this thing. It keeps hanging on tool use -- it hallucinates commands and gets syntax wrong very frequently, and this causes the process to outright hang.
9) I'm also not impressed with doing research on it. It gets confused easily, and it can't really stick to a coherent organizational strategy over iterations.
10) also, it gets stuck and just hangs sometimes. If I ask it what it's doing, it really thinks it is doing something -- but I look at the API console and see it isn't making any LLM requests.
I'm having it do some stuff for me right now. In principle, I like that I can have a chat window where I can tell an AI to do pretty unstructured tasks. I like the idea of it maintaining context over multiple sessions and adapting to some of my expectations and habits. I guess mostly, I'm looking at it like:
1) the chat metaphor gave me a convenient interface to do big-picture interactions with an LLM from anywhere; 2) the terminal agents gave the LLMs rich local tool and data use, so I could turn them loose on projects; 3) this feels like it's giving me a chat metaphor, in a real chat app, with the ability for it to asynchronously check on stuff, and use local stuff.
I think that's pretty neat and the way this should go. I think this project is WAY too move-fast-and-break-things. It seems like it started as a lark, got unexpected fame, attracted a lot of the wrong kinds of attention, and I think it'll be tough for it to turn into something mature. More likely, I think this is a good icebreaker for an important conversation about what the primetime version of this looks like.
It'd be fun to automate some social media bots, maybe develop an elaborate ARG on top.
Frankly, I don't really have major complaints about my life as it is. The things I'd like to do more of are mostly working out and cleaning my house. And I really wish I had kids but am about ready to give up after a half decade of trying and my wife being about ready to age out. Unfortunately, software can't do any of those things for me, no matter how intelligent or agentic it is. When the obstacle to a good life becomes not being able to control multiple computers from a chatroom, maybe I'll come back to this.
Any specific admin tasks it’s done really well at?