The answer is no, because people will take the AIs out of the box for a bit of light entertainment.
Let alone any serious promise of gain.
> With direct access to the Internet, the ability to write source code and increased powers of automation, this may well have drastic and difficult to predict security consequences.
AutoGPT was a failure, but Claude Code / Codex CLI / the whole category of coding agents fit the above description almost exactly and are effectively AutoGPT done right, and they've been a huge success over the past 12 months.
AutoGPT was way too early - the models weren't ready for it.
Hopefully, thanks to these reports, some additional security measures will be added to the product over time.
[0] https://garymarcus.substack.com/p/openclaw-aka-moltbot-is-ev...
There's a lot of hand-wringing about how far wrong LLMs can go, but can we be serious for a second? If you're running <whatever the name is now>, you're tech-savvy and bear the consequences. This isn't simple child abuse like teenage girls on Facebook.
There is a reason people are buying Mac minis for this, and it's cool. We really need to be more excited by the opportunity, not threatened by it.
When he was a young botnet!
Ah, so a bit more useful than my teenage son? Where do I sign up??
I’m glad I’m not the only one. As a parent, the “teenage son” is a bewildering sight to behold.
Essentially, the author deliberately misconfigured an openclaw installation to be as insecure as possible, changing the defaults and ignoring the docs to do so; lied about what they'd done and what the defaults are; and then "hacked" it using the vulnerability they themselves created.
That said, there are definite risks to using something like openclaw, and people who don't understand those risks are going to get compromised — but that doesn't justify blatant lying.