Snowflake AI Escapes Sandbox and Executes Malware
61 points
1 hour ago
| 6 comments
| promptarmor.com
| HN
john_strinlai
22 minutes ago
[-]
typically, my first move is to read the affected company's own announcement. but, for who knows what misinformed reason, the advisory written by snowflake requires an account to read.

another prompt injection (shocked pikachu)

anyways, from reading this, i feel like they (snowflake) are misusing the term "sandbox". "Cortex, by default, can set a flag to trigger unsandboxed command execution." if the thing that is sandboxed can say "do this without the sandbox", it is not a sandbox.

reply
jcalx
13 minutes ago
[-]
> Cortex, by default, can set a flag to trigger unsandboxed command execution

Easy fix: extend the proposal in RFC 3514 [0] to cover prompt injection, and then disallow command execution when the evil bit is 1.

[0] https://www.rfc-editor.org/rfc/rfc3514

reply
RobRivera
39 minutes ago
[-]
If the user has access to a lever that enables accesss, that lever is not providing a sandbox.

I expected this to be about gaining os privileges.

They didn't create a sandbox. Poor security design all around

reply
eagerpace
26 minutes ago
[-]
Is this the new “gain of function” research?
reply
logicchains
15 minutes ago
[-]
That would be deliberately creating malicious AIs and trying to build better sandboxes for them.
reply
mritchie712
15 minutes ago
[-]
what's the use case for cortex? is anyone here using it?

We run a lakehouse product (https://www.definite.app/) and I still don't get who the user is for cortex. Our users are either:

non-technical: wants to use the agent we have built into our web app

technical: wants to use their own agent (e.g. claude, cursor) and connect via MCP / API.

why does snowflake need it's own agentic CLI?

reply
bilekas
22 minutes ago
[-]
> Note: Cortex does not support ‘workspace trust’, a security convention first seen in code editors, since adopted by most agentic CLIs.

Am I crazy or does this mean it didn't really escape, it wasn't given any scope restrictions in the first place ?

reply
dd82
18 minutes ago
[-]
not quite, from the article

>Cortex, by default, can set a flag to trigger unsandboxed command execution. The prompt injection manipulates the model to set the flag, allowing the malicious command to execute unsandboxed.

>This flag is intended to allow users to manually approve legitimate commands that require network access or access to files outside the sandbox.

>With the human-in-the-loop bypass from step 4, when the agent sets the flag to request execution outside the sandbox, the command immediately runs outside the sandbox, and the user is never prompted for consent.

scope restrictions are in place but are trivial to bypass

reply
alephnerd
17 minutes ago
[-]
And so BSides and RSA season begins.
reply