Here's why: the slot machine can drop any hard requirement that you specify in your AGENTS.md, memory.md, or your dozens of skill markdowns. Pretty much guaranteed.
These harness approaches pretend that LLMs are strict, perfect rule followers, and that the only problem is not being able to specify enough rules clearly enough. That's a fundamental misreading of how LLMs operate.
That leaves only one option, not reliable but more reliable nevertheless: human review and oversight. Possibly two of them, one after the other.
Everything else is snake oil. But at that point you also realize the promised productivity gains are snake oil too, because reading code and building a mental model is way harder than having a mental model and writing it into code.
I don't think people are wasting too much time. Although I do agree most of these posts are just BS, including this one. But AI-assisted development has been a thing across a lot of companies in the world.
AI is a powerful tool. Depending on what I need, I use ChatGPT, in-IDE agents, or a platform like Devin.ai.
I use it when it helps me advance my goals. I don't when it doesn't. Sometimes it misses the mark, so I scale back, have it do a specific piece, and do the rest myself.
Sometimes I use it to analyze the code base in seconds instead of minutes. Sometimes I use it to pinpoint a bug fast.
I've solved customer issues in seconds or minutes with it instead of hours.
I don't think agentic workflows are there yet, but implementing skills to call manually while working side by side with an AI is definitely nice. Our company is focused a lot on sandboxing right now and on having safe skills.
I don't think we've gotten feature development right yet, but the review skills and Grafana skills they wrote have been pretty solid.
Agents are unbelievably useful at helping take over and refactor messy codebases, though. I just started taking over a monstrous nightmare of a codebase, truly ancient code, the bulk of it written over 10+ years ago in PHP. With Claude / Codex I was able to port over the vast majority of the existing legacy storefront and lay the groundwork for migrating the 10-20k LOC mega-controller logic into reusable repository/service patterns.
Just shit that would've taken years previously is achievable in under a month.
Everything needs an element of human touch; I would only run vanilla things unattended. But if, let's say, I'm creating backup scripts, I meticulously outline the plan.
Not that these or any "skills" will do that, but just in principle. This is like alienation from labor at scale.
Humans have been minimizing how much work is needed to get a certain level of output for as long as we can track. It's civilization. Should we go back to farming by hand with hoes, to maximize labor used? Go back to streetlights that are individually lit? The society that falls behind on automation becomes poorer, and eventually just dies, as even the people born there tend to choose to leave for higher-productivity places. It happened to Eastern Europe, it happens to the Amish, and to any poor society that sees emigration. Doing more with less has always been exciting.
Right now it's not clear in which direction everything is evolving, and that's why people experiment with handing all their data to random agents, figuring out how to store and access context, reusing prompts, and other attempts to harness this tech. Most of these will maybe be useless in a year, as they might be deeply integrated into the next wave of models, but staying on top of the development has always been part of the fun of working in this field.
A worker is just the sum total of all work-related context. To collate, verify, and organize this context is just asking to be replaced.
If Addy reads this, how do you pitch this vs. Superpowers? https://github.com/obra/superpowers
I showed up on the agentic dev scene prior to superpowers, and I am getting concerned that >50% of my self-rolled processes are now covered by superpowers.
I no longer trust gh stars, can anyone chime in? Is superpowers now truly adopted?
If it is truly valuable, why hasn't Boris integrated the concepts yet?
I also found that I have different skills for different tasks; at work security is a huge concern and I over-emphasise security in the skills. At play I'm less bothered about security and so the skills I've written to help me build stupid one-shot exploratory websites are less about security and more about refactoring and exploring concepts.
People were hyping up Oh My Opencode. When they realized it didn't lead to any significant gains in performance they hopped on the next thing.
And when the same thing happens to Superpowers, it'll be something else they'll cling to because "this time it's different".
To give back as much as I can, I use the two built-in CC review processes when appropriate. But those only answer "is this PR good code?"
Far too late, I finally rolled my own custom review skill that tests: "does this PR accomplish what the specs required?"
If I could ask for one more vanilla CC skill, it might be that. However, maybe rolling your own repo-aware skill via prompt is better?
I used Superpowers, but it burns way more tokens for basically the same outcome as a single line that states:
"Please do planning and ask any required questions before implementing.
[my prompt]"
On the latest models and with a decent harness, the planning modes are quite good, and the single sentence telling it to ask you questions lets the model pick the right thing to ask about, instead of wasting a bunch of time/tokens on predefined skills that try to force basically the same result.
It does introduce a second set of required interactions, but you can have another agent be your "questions answerer" if you need it (result quality goes down a bit vs answering myself, but still quite good, especially if you spend a bit of time on the answerer prompt)
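That single-line approach is also trivial to wire up programmatically. A minimal sketch; the `wrap_with_planning` helper is made up for illustration, not part of any harness API:

```python
# Minimal sketch: prepend the one-line planning instruction to any task prompt.
# wrap_with_planning is a hypothetical helper, not a real harness function.

PLANNING_PREAMBLE = "Please do planning and ask any required questions before implementing.\n"

def wrap_with_planning(task_prompt: str) -> str:
    """Return the task prompt with the planning instruction prepended."""
    return PLANNING_PREAMBLE + task_prompt

print(wrap_with_planning("Refactor the payment controller into services."))
```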
Basically - things are moving fast enough I'm not convinced buying into superpowers/agentskills/[daily prompt magic beans]/etc tooling really makes sense.
I'd stick to the defaults in the harness for most cases, and then work on being clear with the ask.
"If you think there is even a 1% chance a skill might apply to what you are doing, you ABSOLUTELY MUST invoke the skill."

It shouldn't be your default, but it should absolutely be tried when your skill/agent test suite shows evidence that the skill isn't being reliably invoked without it.
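To gather that evidence, you can count how often the skill actually fires across repeated trials. A toy sketch, assuming you can capture run transcripts and that skill calls show up as `Skill(name)` markers (both are assumptions, not a real log format):

```python
# Toy sketch: estimate how reliably a skill fires across repeated trial runs.
# "Skill(name)" markers in transcripts are an invented stand-in for real logs.

def invocation_rate(transcripts: list[str], skill_name: str) -> float:
    """Fraction of transcripts in which the skill was invoked."""
    if not transcripts:
        return 0.0
    hits = sum(1 for t in transcripts if f"Skill({skill_name})" in t)
    return hits / len(transcripts)

trials = [
    "... Skill(pr-review) invoked, reading diff ...",
    "... went straight to editing files ...",
    "... Skill(pr-review) invoked, reading diff ...",
]
print(f"invocation rate: {invocation_rate(trials, 'pr-review'):.0%}")
```

A low rate over, say, twenty trials is the signal that justifies adding the forceful trigger line.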
Good test cases.
Clear and concise documentation.
CI/CD.
Best practices and onboarding docs.
Managing LLMs is becoming more and more similar to managing teams of people.
If the LLM fails, either you didn't describe your outcome sufficiently, or it misinterpreted what you said, or it couldn't do it (rare).
Common errors should be encoded as context for future similar tasks; don't bloat skills with stuff that isn't shown to be necessary.
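A hedged sketch of what "encoding common errors as context" could look like mechanically: append each lesson learned to a notes file that gets pulled in for similar future tasks. The file name and entry format below are invented for illustration:

```python
# Sketch: append observed failure modes to a notes file the agent can read
# when tackling similar tasks later. Names and format are made up.
import tempfile
from pathlib import Path

def record_common_error(notes_path: Path, task_kind: str, lesson: str) -> None:
    """Append a one-line lesson under the given task kind."""
    with notes_path.open("a", encoding="utf-8") as f:
        f.write(f"- [{task_kind}] {lesson}\n")

with tempfile.TemporaryDirectory() as d:
    notes = Path(d) / "lessons.md"
    record_common_error(notes, "db-migrations", "Run migrations in a transaction.")
    content = notes.read_text(encoding="utf-8")
    print(content, end="")
```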
This is not true for anything complex. They’re instruction followers, of which task completion is just one facet.
They’re also extremely eager to complete tasks without enough information, and do it wrongly. In the case of just describing task completion, despite your best efforts, there are always some oversights or things you didn’t even realize were underspecified.
So it helps a lot to add some process around it, e.g. "look up relevant project conventions and information, think through how to complete the task, ask me clarifying questions to resolve ambiguities", and so on. This type of prompt will also help the new Opus 4.7 adaptive thinking ensure it thinks through the task properly.
Yes, not everything I use LLMs for is going to have the same level of ambiguity or complex requirements. Optimizing by choosing to skip over parts of the process is exactly what Addy is talking about in this article.
I prefer the start small and iterate approach to arrive at a result.
Then I ask it to summarize. Sometimes after that I ask it to generalize.
Curious how normal that is; it would only take a couple of these to really fill the context a lot.
I have been successful with short and focused skills so far. I treat them as a reusable snippet of context, but small ones. For example a couple of paragraphs at most about how to use Python in my project and how to run unit tests. I also have several short "info" skills that don't actually provide the agent instructions, they merely contain useful contextual information that the agent can choose to pull in if needed.
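For illustration, a short "info-style" skill in this spirit might be nothing more than front-matter plus a couple of sentences. Everything below is invented, not from any real project:

```markdown
---
name: run-python-tests
description: How to run this project's unit tests. Use when asked to verify changes.
---

Run tests with `pytest tests/ -x` from the repo root; they must pass before any commit.
The project targets Python 3.12. Don't add dependencies outside pyproject.toml.
```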
Even having too many skills can be an issue because the list of skill names and their descriptions all end up in the context at some point.
Only the skill front-matter (name, description, triggers, etc.) is loaded into context by default, so this isn't likely to happen without thousands of skills.
805 lines
660 lines
511 lines
Maybe I am _too_ conservative here. Lots to explore.

Yep: benchmarks, comparisons with/without, samples of generated code with/without. This kind of stuff matters, and you may be making your agent stupider or getting worse results without real analysis.
Also this prose reads like the author has drunk the Google kool-aid and not much else.
This (sdlc == working backwards & bar raiser) is so horribly wrong, that I hope this was an LLM hallucination.
In general, I'm starting to see these agent scaffolding systems as an anti-pattern: people obsess over systems for guiding agents and construct elaborate Rube Goldberg machines, and then others cargo-cult them wholesale, in an effort to optimize and control a random process and minimize human involvement.
But I don't expect anyone to ever use my stuff. It's complicated as hell. But it's for me, and it works without me having to remotely think about the complexity.
I love that.
Very grateful for this repository and everyone who contributed to it!
That being said, this post is full of reasonable assertions, so I'm looking forward to experimenting with this... whatever it is.
I only make it for me, so it's a bit complex and targeted towards me, and what I do, but it's pretty easy to adjust things.
https://github.com/notque/vexjoy-agent
I'm working through reading Agent Skills; it seems we've converged on a lot of the same points. I'd never seen it before, so I'm trying to get an understanding of it.
Edit 1: I don't like all the commands. I just rely on a single router to automatically decide what I want, and that feels like the most reasonable way to me to communicate with it.
I don't want to remember things. And that's the way for me to scale the number of skills and activities. I don't have to think about them.
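As a toy illustration of the single-router idea (in reality the model does the classifying; the keyword matching below is only a stand-in, and the skill names are made up):

```python
# Toy sketch of a "single router": one entry point picks a skill for the
# request, so the user never has to remember skill names. Keyword matching
# stands in for the model's actual routing decision.

SKILL_KEYWORDS = {
    "review": ["review", "pr", "diff"],
    "refactor": ["refactor", "cleanup", "rewrite"],
    "debug": ["bug", "error", "crash"],
}

def route(request: str) -> str:
    """Return the first skill whose keywords appear in the request, else 'general'."""
    lowered = request.lower()
    for skill, words in SKILL_KEYWORDS.items():
        if any(w in lowered for w in words):
            return skill
    return "general"

print(route("Please review this PR"))
```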
Edit 2: We have very different routers.
https://github.com/addyosmani/agent-skills/blob/f504276d8e07...
vs
https://github.com/notque/vexjoy-agent/blob/main/skills/do/S...
I personally wouldn't call theirs an intelligent router. They are dancing between a few different skills. We have extremely different setups there.
But of course, I'm using way more context to get it done. I'm even sending it out to Haiku to build the route choices.
I choose to use tokens to make things better for myself, not everyone would make the same choice, so I certainly see why they are using a few skills, and composing them.
Edit 3: This is much easier for a user to wrap their head around because there's much less.
I am only focused on the best improvements I can make that show value for my use cases. This is straightforward to reason about.
This seems like a nice way to get the best concepts for people trying to understand them. I commend them for a clean, simple approach.
Edit 4: Yeah, I think there are some things I can learn from them which is always good.
I especially like simple decisions like collapsing the install details for each harness in the readme.
I'm going to read over the entire thing and look for opportunities to improve my stuff.
We are all working together, learning, testing, building, trying to find the best way to implement things.