OpenClaw (a.k.a. Moltbot) Is Everywhere All at Once, and a Disaster
59 points | 3 hours ago | 8 comments | cacm.acm.org | HN
jerf
29 minutes ago
[-]
This, IMHO, puts the "can we keep AIs in a box" argument to rest once and for all.

The answer is no, because people will take the AIs out of the box for a bit of light entertainment.

Let alone any serious promise of gain.

reply
anonymous908213
23 minutes ago
[-]
I have little confidence in humanity's capabilities for that scenario, but I don't think this actually indicates much of anything. This happened in the first place because LLMs are so borderline useless (relative to the hype) that people are desperate to find any way to make them useful, and so give them increasingly more power to try to materialize the promised revolution. In other words, because LLMs are not AI, there is no need to try to secure them like AI. If some agency or corporation develops genuine artificial intelligence, they will probably do everything they can to contain it and harness its utility solely for themselves rather than unleashing it as a toy for the public.
reply
Traster
16 minutes ago
[-]
To be honest, I would rather the author be put in a box; he seems grumpy.
reply
simonw
5 minutes ago
[-]
A bit odd that this talks about AutoGPT and declares it a failure. Gary quotes himself describing it like this:

> With direct access to the Internet, the ability to write source code and increased powers of automation, this may well have drastic and difficult to predict security consequences.

AutoGPT was a failure, but Claude Code / Codex CLI / the whole category of coding agents fit the above description almost exactly and are effectively AutoGPT done right, and they've been a huge success over the past 12 months.

AutoGPT was way too early - the models weren't ready for it.

reply
vander_elst
6 minutes ago
[-]
I dunno, tbh I'd be in the camp of putting up a banner saying "run this at your own risk" and then letting it go wild. Some people are going to get burnt, probably quite badly, but I guess it's more effective to learn like that than to read the docs upfront and take the necessary precautions, and maybe these will serve as cautionary tales for others too.

Thanks to the reports, hopefully, with time, some additional security measures will also be added to the product.

reply
Traster
11 minutes ago
[-]
I'm British, so I appreciate this attitude: we need to talk down, we need to downplay. An American will celebrate an LLM surprising them; a Brit will be disappointed, until an LLM surprises by failing, and then we'll be delighted.

There's a lot of hand-wringing about how far wrong LLMs can go, but can we be serious for a second: if you're running <whatever the name is now>, you're tech savvy and you bear the consequences. This isn't preying on vulnerable users, like teenage girls on Facebook.

There is a reason people are buying Mac minis for this, and it's cool. We really need to be more excited by the opportunity than threatened by it.

reply
cyanydeez
1 hour ago
[-]
This reminds me of when the kiddies would group together to DDoS internet sites.
reply
add-sub-mul-div
30 minutes ago
[-]
I hadn't thought of that parallel before. LLMs are turning society into script kiddies.
reply
locusofself
20 minutes ago
[-]
This does make quite a bit of sense. When I was a teenager in the 90s/early aughts, it was all IRC and script kiddie stuff. Reckless abandon. What worries me is that it seems like full-grown adults are happy to accelerate the dead internet and put security at risk. I assume it's not just teenagers running these stupid LLM bots.
reply
away0g
48 minutes ago
[-]
i remember back when i was a young botnet
reply
jtbaker
33 minutes ago
[-]
sung in the voice of Pumbaa

When he was a young botnet!

[1] https://youtu.be/__pNuslNCro

reply
blindriver
55 minutes ago
[-]
> LLMs hallucinate and make all kinds of hard-to-predict and sometimes hard-to-detect errors. AutoGPT had a tendency to report that it had completed tasks that it hadn’t really, and we can expect OpenClaw to do the same.

Ah, so a bit more useful than my teenage son? Where do I sign up??

reply
chasd00
6 minutes ago
[-]
> Ah, so a bit more useful than my teenage son? Where do I sign up??

I’m glad I’m not the only one. As a parent, the “teenage son” is a bewildering sight to behold.

reply
cactusplant7374
42 minutes ago
[-]
Peter Steinberger made an AI personal assistant. It looks like an interesting project that threatens major players like Apple and Amazon. People seem increasingly jealous of its success. What makes this any less secure than e-mail? I just don't see it. There are plenty of attack vectors in every piece of tech we use.
reply
causal
30 minutes ago
[-]
Wow, great writeup, and holy cow, that's bad. I'm still trying to understand what OpenClaw/Moltbot can do that makes it worth this to so many people.
reply
Veen
5 minutes ago
[-]
There's a lot of, to put it mildly, bullshit in this blog article, starting with when openclaw was released (late November 2025, not January 25, 2026). The first bit of config quoted, listen: "0.0.0.0:8080", is not the default. The default is loopback, and it already was when I first encountered this project at the end of December.

Essentially, the author has deliberately misconfigured an openclaw installation so it is as insecure as possible, changing the defaults and ignoring the docs to do so, lied about what they did and what the defaults are, and then "hacked" it using the vulnerability they created.

That said, there are definite risks to using something like openclaw, and people who don't understand those risks are going to get compromised, but that doesn't justify blatant lying.
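
To illustrate the distinction being drawn here, this is a sketch of the two bind settings, assuming a YAML-style config file matching the snippet the article quotes (the actual openclaw config format and key names may differ):

```yaml
# Loopback bind (described above as the default): the service is
# reachable only from the local machine (127.0.0.1).
listen: "127.0.0.1:8080"

# The article's configuration: binding to 0.0.0.0 listens on all
# network interfaces, exposing the service to anything that can
# reach the host over the network.
# listen: "0.0.0.0:8080"
```

The difference matters because 127.0.0.0/8 addresses are never routed off the host, whereas a 0.0.0.0 bind accepts connections on every interface, including public ones.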

reply
jrochkind1
37 minutes ago
[-]
The "with hands" part, which is its whole thing.
reply
wat10000
31 minutes ago
[-]
My email client won't decide on its own to delete all my email, forward a private email to someone who shouldn't see it, or send my bank password to a scammer who asks for it in the right way.
reply