Show HN: Am-I-vibing, detect agentic coding environments
60 points
11 days ago
| 10 comments
| github.com
Timwi
10 days ago
[-]
The proposed approach has a large number of drawbacks:

* It's not reliable; the project's own README mentions false positives.

* It adds a source of confusion where an AI agent tells the user that the CLI tool said X, but running it manually with the same command line gives something different.

* The user can't manually access the functionality even if they want to.

Much better to just have an explicit option to enable the new behaviors and teach the AI to use that where appropriate.
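The explicit-opt-in alternative could be sketched like this (the `--agent` flag name and the output shapes are illustrative, not from the project):

```typescript
// Sketch of an explicit opt-in flag instead of environment detection.
// The same command line always produces the same output, whether a human
// or an agent runs it; agents are simply taught to pass --agent.
const agentMode = process.argv.includes("--agent");

function report(result: { ok: boolean; details: string }): string {
  if (agentMode) {
    // Verbose, structured output an agent can parse reliably.
    return JSON.stringify(result);
  }
  // Terse human-facing output.
  return result.ok ? "OK" : `FAIL: ${result.details}`;
}

console.log(report({ ok: false, details: "lockfile out of date" }));
```

Because the switch is explicit, a human can reproduce exactly what the agent saw by re-running with the same flag.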

reply
lozenge
10 days ago
[-]
* The online tutorials the LLM was trained on don't match the result the LLM gets when it runs the tool.
reply
0xDEAFBEAD
10 days ago
[-]
We're reaching levels of supply chain attack vulnerability that shouldn't even be possible.
reply
mrKola
10 days ago
[-]
Wasted opportunity to call it: vibrator
reply
Larrikin
10 days ago
[-]
I was working on an Android project and needed to add specific vibration patterns for different actions. Our company was maybe a week into our exploration of LLM tools and they still really sucked. I kept getting failures trying to get anything useful to output. So I dug into the docs and just started doing it all myself. Then I found some Android engineer had named the base functionality Vibrator back in one of the earliest SDKs.

The LLM was actually implementing nearly everything, finding the term Vibrator, and then erasing its own output.

reply
lexicality
10 days ago
[-]
reply
SequoiaHope
10 days ago
[-]
Leaves the name available for a buttplug.io agentic interface plugin.
reply
pryelluw
10 days ago
[-]
colon.ai has a nice vibe to it.
reply
mhuffman
10 days ago
[-]
Vibe-Rater
reply
fahrradflucht
10 days ago
[-]
Alternative name suggestion: prompt-injection-toolkit
reply
CaptainFever
10 days ago
[-]
This library envisions cooperative uses, like a tool giving extra context to AI agents if it detects it is running in an agentic environment, but I worry that some people may try to use this to restrict others.

I guess in that scenario, AI agents would have a project-specific "stealth mode" to protect the user.

reply
omeid2
10 days ago
[-]
As someone who uses AI every day: people who wish to restrict the use of their code by AI should be allowed to do so, but they should make sure their LICENSE is aligned with that. That is the only issue I see.
reply
ritzaco
11 days ago
[-]
This seems like a really bad idea. Agents need to adapt to get good at using tools designed for humans (we have a lot), or use tools specifically designed for agents (soon we will have lots).

But making your tool behave differently just causes confusion if a human tries something and then gets an agent to take over, or vice versa.

reply
hoistbypetard
10 days ago
[-]
On the other hand, if you want to make your tool detect an agent and try a little prompt injection, or otherwise attempt to make the LLM misbehave, this seems like an excellent approach.
reply
kristianc
10 days ago
[-]
In other words, a supply chain attack? Let's call it what it is.
reply
hoistbypetard
10 days ago
[-]
I think the term "supply chain attack" is frequently overused, and if I were feeling cantankerous, I might split hairs and argue that I was framing it more as a "watering hole attack" instead. But I agree that it could also be framed as a "supply chain attack", and you seem to have correctly realized that I was suggesting this was an excellent approach to either attack people using LLMs connected to agentic tooling or to render your gadget incompatible with such usage, if that was your goal.

I do not think it's a particularly good way to assist such users.

reply
JoshTriplett
10 days ago
[-]
This seems like a really good idea for projects that reject AI-written code, to detect and early-fail in such environments.
reply
anuramat
9 days ago
[-]
I also don't see how this requires heuristics, but use cases do exist; e.g. I set `CLAUDE` so that a git hook can mark vibe commits. A prompt would be a waste of tokens and would introduce non-determinism, and MCP is yet another dependency that can get ignored in favour of the CLI equivalent anyway.
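The env-var approach described here could look something like the following prepare-commit-msg hook (a sketch run via Node; `CLAUDE` and the `Vibe-Commit` trailer are this commenter's own convention, not anything the agent sets by itself):

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Append a trailer when the CLAUDE env var is set, so `git log` can later
// distinguish vibe commits from hand-written ones.
export function markVibeCommit(msg: string, env = process.env): string {
  if (!env.CLAUDE || msg.includes("Vibe-Commit:")) return msg;
  return msg.trimEnd() + "\n\nVibe-Commit: true\n";
}

// In a prepare-commit-msg hook, git passes the commit message file as the
// first argument, e.g.:
//   const file = process.argv[2];
//   writeFileSync(file, markVibeCommit(readFileSync(file, "utf8")));
```

This is deterministic and costs zero tokens, which is the point being made against prompt- or MCP-based alternatives.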
reply
ethan_smith
10 days ago
[-]
Tools can maintain consistent interfaces while still providing agent-aware optimizations through metadata or output formatting that doesn't disrupt the human experience.
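One way to read that: keep the human-facing output byte-identical and make any agent-facing material purely additive. A sketch (the env-var check is a stand-in for a real detector, and the `hint-for-agents` line is invented for illustration):

```typescript
// Sketch: the human-facing message never changes; an extra hint line is
// appended only when the (hypothetical) detection says an agent is present.
const isAgent = Boolean(process.env.CLAUDECODE); // stand-in for a real detector

export function formatError(message: string, hint: string): string {
  const base = `error: ${message}`;
  // The hint is additive: humans still see exactly the same first line,
  // so a human taking over from an agent sees nothing contradictory.
  return isAgent ? `${base}\nhint-for-agents: ${hint}` : base;
}
```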
reply
ofirg
10 days ago
[-]
I'm this old: I don't think you should give SWE packages names that you'll eventually cave in and change if the project gets real use.
reply
ascorbic
10 days ago
[-]
This isn't something that's going to need to be in a pitch deck. It's the second open source library I've released this week. But even if it were serious: if Hugging Face hasn't changed its name, then I think this is fine.
reply
deadbabe
10 days ago
[-]
It’s still a ridiculous choice for a name; look at stuff like Scuttlebutt, whose adoption is only hurt by a crappy name that few people want to bring up in public.
reply
mattigames
10 days ago
[-]
Dead babe has a good point there.
reply
ljlolel
10 days ago
[-]
Can’t stop laughing
reply
johncole
10 days ago
[-]
Lol
reply
maxbond
10 days ago
[-]
I feel I'd be remiss if I didn't suggest the name "vibe check." (The name doesn't bother me personally, for whatever that's worth.)
reply
Retr0id
10 days ago
[-]
why would this one need to be changed?
reply
petesergeant
11 days ago
[-]
Neat! I might monkey patch vitest to show full diffs for expect when being used by an agent
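Rather than monkey patching, vitest already exposes a config knob for this: `test.chaiConfig.truncateThreshold` controls diff truncation (0 disables it). A sketch of gating it on detection; the env-var check stands in for a detection library such as am-i-vibing, whose exact API isn't shown here:

```typescript
// vitest.config.ts (sketch): show full, untruncated expect() diffs when the
// test run appears to be driven by an agent. CLAUDECODE is one env var such
// environments are reported to set; treat it as an illustrative assumption.
import { defineConfig } from "vitest/config";

const probablyAgent = Boolean(process.env.CLAUDECODE);

export default defineConfig({
  test: {
    chaiConfig: {
      // 0 disables truncation entirely, so the agent sees the whole
      // expected/actual diff; humans keep the compact default.
      truncateThreshold: probablyAgent ? 0 : 40,
    },
  },
});
```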
reply
barbazoo
10 days ago
[-]
I don’t like that the fact that an agent was used to write the code bleeds into the runtime of that code. Personally I see the agent as a tool, but at the end of the day I have to make the code mine, and that includes writing error handling and messaging that’s easy for a human to understand, because the agent is not going to help when you get an alert at 3am. And often what’s easy for a human to understand is easy for an LLM too.
reply
SudoSuccubus
10 days ago
[-]
Good luck detecting things. Guess what: none of your fucking business. If it works, it works. You didn't like that? Go fuck yourself. It's like "anti-cheating" shit in academia. I get some random output from things. All I do is have a sample of things I want to mimic and whatever style I have; I can tell any system to make it not sound like itself.

Just be honest: you're failing at this "fight the man" thing on AI and LLMs.

It's better to work with the future than pretend that being a Luddite will work in the long run.

reply
toobulkeh
10 days ago
[-]
It has nothing to do with a “gotcha”. It’s about improving error codes and other interactions for agentic editors.
reply