You are not supposed to install OpenClaw on your personal computer
113 points
4 hours ago
| 17 comments
| twitter.com
| HN
darth_avocado
2 hours ago
[-]
Really don’t understand why sane developers who for decades have been advocating for best practices when it comes to security and privacy seem to be completely abandoning all of them simply because it’s AI. Why would you ever let a non deterministic program god level access to everything? What could possibly go wrong?
reply
frenchtoast8
1 hour ago
[-]
The security team at my company announced recently that OpenClaw was banned on any company device and could not be used with any company login. Later in an unrelated meeting a non technical executive said they were excited about their new Mac Mini they just bought for OpenClaw. When they were told it was banned they sort of laughed and said that obviously doesn't apply to them. No one said anything back. Why would they? This is an executive team that literally instructed the security team to weaken policies so it could be more accommodating of "this new world we live in."
reply
ropetin
1 hour ago
[-]
Similar thing at my company. Someone /very/ high up in the org chart recently said to the entire company that OpenClaw is the future of computing, and specifically called out Moltbook as something amazing and ground breaking. There is literally no way security would ever let OpenClaw in the same room as company systems, never mind actually be installed anywhere with access to our data.

It should be noted that this exec also mentioned we should try "all the AIs", without offering up their credit card to cover the costs. I guess when your base salary is more than most people make in a life time, a few hundred bucks a month to test something doesn't even register.

reply
ekjhgkejhgk
2 hours ago
[-]
Those people aren't the same. Those are two ideas that you heard from the internet, and you're imagining it's the same person talking.
reply
HeliumHydride
2 hours ago
[-]
reply
chrysoprace
1 hour ago
[-]
I'm glad that a term for this exists. It's always seemed so silly to me that someone would think that a group of people would all conform to the same opinion.
reply
CoastalCoder
37 minutes ago
[-]
Thank you!!!

I've been looking for a term for this concept for years!

reply
jacquesm
1 hour ago
[-]
Some of them are the same.

It's a Venn diagram: there are two camps and there is no doubt some overlap because the number of people involved. GP was obviously talking about the overlap, not literally equating this with two specific people or two groups that are 100% overlapping.

reply
dullcrisp
20 minutes ago
[-]
So they’re assuming the existence of somebody to be mad at without direct evidence?
reply
jacquesm
2 minutes ago
[-]
No, they're applying statistics.
reply
mountainriver
1 minute ago
[-]
Honestly it’s been a breath of fresh air to have most of the gatekeeping in software be removed.

Seems that it was by and large just people wanting to feel important, and holding onto their positions.

Apps need great security, but security can also get out of control. Apps need good abstractions and code hygiene but that too can get out of control.

I’ve fallen in love with programming all of again now that I’m not so tied down by perceived perfection.

reply
hugs
6 minutes ago
[-]
openclaw is the napster of itunes.

people who have been around long enough know that we're currently in the wild west of networked agentic systems. it's an exciting time to build and explore. (just like napster and early digital music.) eventually some big company will come along and pave the cow paths and make everything safe and secure. but the people who will actually deliver that are likely playing with openclaw (and openclaw like systems) now.

reply
throw10920
2 hours ago
[-]
Who are these developers that have both been "advocating for best practices" and also "seem to be completely abandoning all of them simply because it’s AI"? Can you point to a dozen blogs/Twitter profiles, or are you just inventing a fictitious "other" to attack?
reply
Macha
59 minutes ago
[-]
The person being quoted for one, who is apparently focused on safety and alignment at meta. Safety being handing over your email credentials to the shiny new thing, apparently
reply
LudwigNagasena
50 minutes ago
[-]
Are they even a developer? “Safety and alignment” as AI buzzwords are quite different from “security and privacy”. In any case, I wouldn’t take a random person with a sinecure job as exemplary of anything.
reply
monksy
2 hours ago
[-]
They aren't. They're the ones who are resisting the all in thing on AI stuff. What you're seeing is over reactive trend followers.
reply
bubblewand
47 minutes ago
[-]
Same as the “MongoDB is webscale” crowd.
reply
autoexec
2 hours ago
[-]
And likely massive amounts of marketing spending pushing for people to bend over and accept AI anything anywhere.
reply
resonious
1 hour ago
[-]
I agree with a lot of the siblings that it's probably not the same people. But for the overlap that probably does exists, I don't think "because it's AI" is their reasoning. If I were to guess, I'd say it's something closer to "exploring the potential of this new thing is worth the risk to me".
reply
neya
33 minutes ago
[-]
> why sane developers who for decades have been advocating for best practices when it comes to security and privacy seem to be completely abandoning all of them

I'm a sane developer. I do not trust AI at all. I built my own personal OpenClaw clone (long before it was even a thing) and ran controlled experiments inside a sandbox. My stack is Elixir, so this is pretty much easy. If an agent didn't actually respect your requirements, it's just as easy as running an iex command to kill that particular task.

In my experience, AI, be it any model - consistently disobeys direct commands. And worse, it consistently tried to cover up its tracks. For example, I will ask it to create a task within my backend. It will tell me it did - for no reason at all, even share me a task ID that never existed. And when asked why it lied, it would actually spin the task up and accuse me of not trusting it.

It doesn't matter which vendor, which model. This behaviour is repeatable across models and vendors. Now, why would I give something like this access to my entire personal and professional life?

To group me and others like me with the clowns doing this is an insult to me and others who have accumulated decades of experience and security best practices and who had nothing to do with OpenClaw.

reply
tptacek
1 hour ago
[-]
I'm enthusiastic about AI (it's gone from the 2nd most important thing to happen in my career to tied for first, with the Internet) and I am baffled by OpenClaw.
reply
eucyclos
10 minutes ago
[-]
I thought Ben Goertzel had a good take on it: "someone made hands for a brain that doesn't exist yet"
reply
almosthere
9 minutes ago
[-]
"ever" is the key word. Like driving, we as humans will cede control, at some point, to AI.
reply
xantronix
1 hour ago
[-]
You must not say his name. If you say it, you will summon him.
reply
andai
1 hour ago
[-]
Was building a claw clone the other day when for debugging I added a bash shell. So I type arbitrary text into a Telegram bot and then it runs it as bash commands on my laptop.

Naturally I was horrified by what I had created.

But suddenly I realized, wait a minute... strictly this is less bad than what I had before, which is the same thing except piped through a LLM!

Funny how that works, subjectively...

(I have it, and all coding agents, running as my "agent" user, which can't touch my files. But I appear to be in the minority, especially on the discord, where it's popular to run it as the main admin user on Windows.)

As for what could go wrong, that is an interesting question. RCE aside, the agentic thing is its own weird security situation. Like people will run it sandboxed in Docker, but then hook it up to all their cloud accounts. Or let it remote control their browser for hours unattended...

https://xkcd.com/1200/

reply
cromka
2 hours ago
[-]
It's greed.
reply
j45
2 hours ago
[-]
Developers with and without devops experience.
reply
dylan604
1 hour ago
[-]
This isn't any different than pre-Claude. We've always had people that wrote code, but had no clue about systems. Not everyone is a CS major. I've seen people do the strangest things that you would think a sane person would never do, yet, their the strangeness is happening by someone you would otherwise consider sane/smart. Not everyone is a sysadmin banging perl to automate things.
reply
co_king_5
2 hours ago
[-]
> Why would you ever let a non deterministic program god level access to everything?

If they don't their jobs are going to get replaced by AI

reply
autoexec
2 hours ago
[-]
To the extent that anyone can be replaced they will be replaced and nothing they do now will save them. The good news is that so far I haven't seen companies having much success outright replacing workers with AI chatbots.
reply
jacquesm
1 hour ago
[-]
They don't have the successes but they do replace them. I've seen a couple of examples of that in the last couple of months, there is just no way to avoid these abominations any more.
reply
skeeter2020
2 hours ago
[-]
it's not successfully replacing them with AI that is the problem; it's firing them to then replace them with AI which, when it doesn't work is either too late or at best incredibly disruptive for the people impacted.
reply
autoexec
1 hour ago
[-]
That's certain true. Lots of letting workers go only to hire new ones at much lower pay
reply
observationist
2 hours ago
[-]
They're getting replaced by AI anyway, these bleeding edge agents are just surfboards for the wave.

Learn fast or die trying, lol.

reply
peteforde
36 minutes ago
[-]
I am baffled by the popularity of *claw but I am always looking to learn, so I was happy to have the algo serve me this YT video of Limor explaining how she had a sandboxed claw running a local LLM to chew through a particularly dense datasheet to create a wrapper library and matching test coverage. https://www.youtube.com/watch?v=fdidNp5IHHI

This example is, as of this moment, the only example that has communicated to me that February 2026's local agent harnesses have some utility in the right context and expert hands.

I was particularly bolstered by the unintentional but very real demonstration of how LLMs really can be leveraged to free up humans to spend more parent time with their infants. We spend a lot of characters lamenting how we never got jetpacks, so here's someone doing it right.

reply
Spivak
43 seconds ago
[-]
I don't use it but am thinking about it because it's very roughly the agent I built myself but with a community around it so I have to do less work fiddling with it.
reply
nkrisc
39 minutes ago
[-]
Looking at the tweet he’s replying to, I still find it incredible people talk to these LLMs as if they are rational beings who will listen to them. The fact that they sometimes do is almost coincidence more than anything.

It’s even more unbelievable that they seem to think instructions are rules it will follow.

To paraphrase Captain Barbossa: “They’re more guidelines than actual rules.”

reply
neom
7 minutes ago
[-]
My reply to said tweet seems to have resonated: https://x.com/dissenter_hi/status/2025799046883864925
reply
slopinthebag
33 minutes ago
[-]
Lol. I tried doing some image generation with SOTA models. I explicitly asked it not to do something it was doing and it would literally do the thing, and straight up tell me it didn't.

Unless someone has a cognitive impairment it's just simply not a failure mode of cooperative humans. Same with hallucinations. Both humans and AI can be wrong, but a human has the ability to admit when they don't understand or know something, AI will just make it up.

I don't understand why people would ever trust anything important to something with the same failure mode as AI. It's insane.

reply
8cvor6j844qw_d6
2 hours ago
[-]
Are people really running OpenClaw on their primary machine?

Anyone security-conscious would isolate it on dedicated hardware (old laptop, Raspberry Pi, etc.) with a separate network and chat surface.

reply
jofzar
1 hour ago
[-]
Brother people watch porn on their company laptop, you think people are using protection for their openclaw's?
reply
chickensong
2 hours ago
[-]
> Anyone security-conscious

Most people aren't, including many professional developers.

reply
dylan604
1 hour ago
[-]
You'd be amazed at the corporate IT world where any extra equipment like that would just not be available and/or allowed. Besides, if it were a corporate machine and not my personal machine and work was forcing me to use AI, I'd have no qualms. They get what they ask for with the equipment provided!
reply
tylervigen
1 hour ago
[-]
How did the question become “which corporate device can I install OpenClaw on?” Who is doing that?
reply
amelius
2 hours ago
[-]
Did Hegseth install OpenClaw in the pentagon yet?
reply
ericbuildsio
1 hour ago
[-]
Giving OpenClaw permissions on a non-sandboxed account seems like it would massively fragilize my digital life

Small upside: it saves a few minutes here and there on some tasks (eg. checking into flights)

Massive tail-risk downside: it does something like what's linked in the tweet (eg. deletes my entire inbox)

reply
throwatdem12311
1 hour ago
[-]
It doesn’t matter what you’re “supposed to do”. People don’t read manuals or warnings.
reply
hinkley
1 hour ago
[-]
So... stupid question, if this is true, why isn't it downloaded as a docker image?
reply
mh2266
7 minutes ago
[-]
the point is to give it access to your email so it can do email things, putting it in a container stops it from rm -rf / but it doesn't stop it from, well, doing anything it can do with email
reply
throwaway920102
17 minutes ago
[-]
People are working on this stuff as we speak. Stuff like https://fly.io/blog/design-and-implementation/
reply
ImPostingOnHN
34 minutes ago
[-]
You can break out of a docker container, especially with the permissions many people would give such a container (privileged=true, etc).
reply
StevenNunez
3 hours ago
[-]
What's the fun in that? Also I think /stop would help here.
reply
slg
1 hour ago
[-]
This post exists in that Poe's law purgatory of it being impossible for someone without the proper context to know whether this is sarcastically mocking OpenClaw or an attempt at defending OpenClaw against some of the bad press it has received due to people not understanding the risks involved. Because the comments here are responding of if this post is a sane reasonable take, but I read it and just see a laundry list of restrictions you need to put on OpenClaw listed one after another until you get to the point in which the software is effectively useless.
reply
fourthark
50 minutes ago
[-]
(Which it is?)
reply
yesitcan
1 hour ago
[-]
This person’s title is “Safety and alignment at Meta Superintelligence”. It must be satire.
reply
BloondAndDoom
2 hours ago
[-]
I mean if you are not connecting it to the real things why even bother, just chatgpt or Claude online at that point.

We have enough assistants, the key idea with opeclaw is it can do stuff instead of talk with what you have. It’s terrible security but that’s the only way it makes sense. Otherwise it’s just a lot of hoops to combine cron jobs with a AI agent on the cloud that can do things an report back.

Not that I think anyone should do it, it’s a recipe for disaster

reply
recursivecaveat
2 hours ago
[-]
Yeah, it's like saying you can hire a con artist as your personal assistant as long as they work from a sealed box and just pass little reviewed paper slips back and forth through a slit. Why have one at that point? Very difficult to be 'assisted' without granting access.
reply
hiuioejfjkf
2 hours ago
[-]
Director of Safety and Alignment at Meta gives full access to a LLM to theirs email

after anthropic publishes research how a model tried to blackmail an executive with emails about an affair to not be shut down

and justification in thread is "I tried it on a toy inbox, it worked well, so I trusted it with my real email"

CLOWN WORLD

reply
blibble
2 hours ago
[-]
pretty clear the facebook safety and alignment role is just for show if she couldn't figure this out

its like they hired the worst person they could get their hands on

reply
observationist
2 hours ago
[-]
Safety and Alignment is just the same old trust & safety people from social media platforms, they somehow managed to convince the people with money of their relevance. I'll never understand that move - the slightest pause for consideration of necessary personnel by those in charge should have nixed any such hiring, but they're spending billions in stock and salary on these folks. Good for them, I guess.
reply
mv4
2 hours ago
[-]
LinkedIn says she was a researcher. Joined as part of the Meta <> Scale deal with Alexandr Wang.
reply
gedy
1 hour ago
[-]
I agree - but what exactly are you supposed to do with it if it has its own email, phone #, etc?
reply
nanobuilds
42 minutes ago
[-]
It can always forward you things to your real email for you to action them. So as a layer doing the boring work of sorting things, researching, and keeping track of changes, but execution, public actions, real-life stuff can still be confirmed by the human (through telegram for example).

There are some good uses if managed properly but people tend to trust ais more and more these days.

reply
uniformlyrandom
10 minutes ago
[-]
exactly.
reply
plagiarist
2 hours ago
[-]
This is the sanest take I've seen from anyone using the claws.

I would still not want the LLM to have read access to email. Email is a primary vector for prompt injection and also used for password resets.

reply
ericbuildsio
1 hour ago
[-]
Agreed, I wouldn't even trust it with read-only access to my email

I'd trust it as much as I would a VA from Fiverr

Want it to check you into a flight? Forward the check-in email to its own inbox

Read-only access to my calendar; it can invite me to meetings

No permissions beyond that

reply
nurettin
1 hour ago
[-]
Didn't all vendors directly or indirectly ban the use of *claw? Why are there still articles about this? Are they unable to detect users?
reply
crazygringo
38 minutes ago
[-]
No, not at all.

They're banned from using them with flat-fee subscription accounts meant only for first party tools.

You're entirely welcome to use them with pay-as-you-go API access. That's what the API is for.

reply
SoMomentary
55 minutes ago
[-]
Has OpenAI banned it's use already? I hadn't seen that one come through yet.
reply
ukuina
1 hour ago
[-]
API usage is not banned.
reply
antisol
1 hour ago
[-]

    Listen carefully: OpenClaw is basically a real person you have hired, whose capabilities are vast and fast — in ways both good and potentially bad. But you’ve hired it in the absence of a resume or behavioral background check results. 

...Except that a human is culpable and subject to consequences when they directly disobey instructions in a way that causes damage, particularly if you give them repeated direct instructions to "stop what you are doing".

And also, when it says "You're absolutely right! I disobeyed your direct instructions causing irreparable damage, so sorry, that totes won't happen again, pinky promise!", those are just some words, not actually a meaningful apology or promise to not disobey future instructions.

Personally, I question the usefulness of an AI assistant that can't even be trusted to add an entry to my calendar.

    you withhold and limit access to your devices, your account credentials, and even its own full account permissions, from the start, to the same extent that you would withhold such access from a new hire.
No, like I pointed out, a new hire has signed an employment agreement filled with legalese and is subject to legal ramifications if they delete all my emails while I'm screaming "stop what you are doing!". And if they say "oh, sorry, I totally misunderstood your instructions, that won't happen again" and then do it again, they're committing a crime.

What's the point of hiring a personal assistant who is incapable of sending email? Isn't that precisely what you hire a PA to do?

    Would you let a human being with the aforementioned characteristics — brilliant and capable, but lacking a resume or behavioral background check results — directly use your personal computer or your work computer?

No. And I also wouldn't hire that person as a PA.
reply
syngrog66
20 minutes ago
[-]
madness & reeks of setup bait for security exploits
reply