However, Steve Yegge's recent credulous foray into promoting a crypto coin (which, IMO, was transparently a pump-and-dump scheme leveraging his audience and buzz, with him as an unwitting collaborator) makes me think all is not necessarily well in Yegge land.
I think Steve needs to take a step back from his amazing productivity machine and have another look at that code, and consider if it's really production quality.
He's obviously a smart guy, so he definitely should've known better. It's weird how these AI evangelists use AI for everything, yet somehow he didn't ask ChatGPT what all of this means and whether it might cause reputational damage. I just asked whether I should claim these trading fees, and it said:
Claiming could be interpreted as:
* Endorsing the token
* Being complicit if others get rugged later
* This matters if your X account has real followers.
and in the end it told me NOT to claim these fees unless I'm OK with being associated with that token.

There's another thing. A certain type of engineer seems to get sucked into Amazon's pressure culture. They either are, or end up, a bit manic: laid back and relaxed one day (especially after holidays), but wound up and under a lot of internal pressure to produce the next, with a lot more of the latter. Something like Gas Town must be a crazy fix when you're feeling that pain. Combined with the conviction that if you don't keep up, you're unemployed/unemployable in 12 to 24 months, you might feel you have no choice but to spend every waking minute at it.
It's a bit (more than a bit) rude to analyse someone at a distance. And to be honest, I think something like Gas Town is probably one of the possible shapes of things to come. I don't think what I can observe looks super healthy, is all.
So true. beads[0] is such a mess. Keeps breaking with each release. Can't understand how people can rely on it for their day-to-day work.
That would assume he's even looked at the code in the first place - I think his whole thesis is based on you never looking at the code.
I didn't notice that. Can you give me a source?
Upshot: Steve thinks he’s built a quality task tracker/work system (beads), is iterating on architectures, and has become convinced that an architecture-builder is going to make sense.
Meanwhile, work output is going to improve independently. The bet is that leverage on the top side is going to be the key factor.
To co-believe this with Steve, you have to believe that workers can self-stabilize (e.g. that with something like the Wiggum loop you can get some actual quality out of them, unsupervised by a human), and that their coordinators can self-stabilize too.
If you believe those to be true, then you’re going to be eyeing 100-1000x productivity just because you get to multiply 10 coordinators by 10 workers.
I’ll say that I’m generally bought into this math. Anecdotally, over the last two months I’ve spent about half my coding-agent time asking for easy in-roads into what’s been done; a year ago, I spent 10% specifying and 90% complaining about bugs.
Example: I just pulled up an old project and asked for a status report; I got one based on the existing beads. I asked it to verify, and it ran the program and reported a fairly high-quality status. I then asked it to read the output (a PDF); it read the PDF, noticed my main complaints, and issued 20 or so beads to get things into the right shape. I had no real complaints about the response or the work plan.
I haven’t said “go” yet, but I presume when I do, I’m going to be basically checking work, and encouraging that work checking I’m doing to get automated as well.
There’s a sort of not-obvious thing that happens as we move from 0.5 nines to, say, 3 nines of effectiveness: the amount of work you can run under a constant intervention load grows by roughly 2.5 orders of magnitude. It’s a little hard to believe unless you’ve really poked around, but I think it’s coming pretty soon, as does Steve.
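The nines arithmetic can be sketched in a few lines (a back-of-envelope illustration of the claim; the assumption that interventions scale linearly with the failure rate is mine, not something from the thread):

```python
# Back-of-envelope: how much more work fits under a constant
# intervention budget as effectiveness goes from 0.5 to 3 "nines".
# Assumption (mine): each unit of work needs a human intervention
# with probability equal to the failure rate.

def failure_rate(nines: float) -> float:
    """n nines of effectiveness -> failure rate of 10**-n."""
    return 10 ** -nines

before = failure_rate(0.5)  # ~0.32 interventions per unit of work
after = failure_rate(3.0)   # 0.001 interventions per unit of work

# The same total intervention load supports this much more work:
scale_up = before / after   # ~316x, i.e. ~2.5 orders of magnitude
print(f"{scale_up:.0f}x more work for the same intervention load")
```

That ratio, 10^(3.0 - 0.5) ≈ 316, is where the "2.5 orders of magnitude" figure comes from.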
Who, nota bene, is working at such a pace that he’s turning down 20 VCs a week, selling memecoin earnings in the hundreds of thousands of dollars, and randomly ‘napping’ in the middle of the day. Stay rested, Steve; keep on this side of the manic curve, please, we need you. I’d say it’s a good sign he didn’t buy any GAS token himself.
This is my biggest takeaway. He may or may not be on to something really big, but regardless, it's advancing the conversation and we're all learning from it. He is clearly kicking ass at something.
I would definitely prefer to see this be a well paced marathon rather than a series of trips and falls. It needs time to play out.
Oh good, mainstream coders finally catching up with the productivity of 2010s Clojurists and their “Hammock Driven Development”! (https://m.youtube.com/watch?v=f84n5oFoZBc)
Between quotes like these
> I had lunch again (Kirkland Cactus) with my buddies Ajit Banerjee and Ryan Snodgrass, the ones who have been chastising teammates for acting on ancient 2-hour-old information.
and his arguing that this is the future of all productivity while taking time to physically go to a bank to get money off a crypto coin (while also crowing about how he can’t waste time on money), it’s hard to take this at face value.
On top of that, this entire Gas Town thing is predicated on not caring about cost, but AI firms are currently burning money as fast as possible, selling a dollar for 10 cents. How does the entire framework/technique not crash and burn the second infinite investment stops and the AI companies need to be profitable rather than a money hole?
I will preface this by saying that I think AI agents can accomplish impressive things, but in the same way that the Great Pyramid of Giza was impressive without being economically valuable.
Software is constantly updating. For LLMs to be useful, they need to stay relatively up to date with it. That means retraining, and from what I understand, training is the vast majority of the cost, with no plausible technical solution around it.
Currently LLMs seem amazing for software because AI companies like OpenAI or Anthropic are doing what Uber and Lyft did in their heyday, selling dollars for pennies just to gain market share. Mr. Yegge and friends have made statements to the effect that if cost scares you, you should step away. Even in the article this thread is about, he has this quote:
> Jeffrey, as we saw from his X posts, has bought so many Claude Pro Max $200/month plans (22 so far, or $4400/month in tokens) that he got auto-banned by Anthropic’s fraud detection.
And so far what I’ve seen is that he’s developed a system that lets him scale out the equivalent of interns/junior engineers en masse under his tentative supervision.
We already had the option to hire a ton of interns/junior engineers for every experienced engineer. It was quite common 1.5-3 decades ago. You’d have an architect who sketched out the bones of the application down to things like classes or functions, then let the cheap juniors implement.
Everyone moved off that model because it wasn’t as productive per dollar spent.
Mr. Yegge’s Gas Town, to me, looks like someone thought “what if we could get that same gaggle of juniors, for the same cost or more, but they were silicon instead of meat”.
Nothing he’s outlined has convinced me that the unit economics of this will work out better than just hiring a bunch of bright young people right out of college, which corporations are increasingly loath to do.
If you have something to point to for why that line of thought is incorrect, with regard to this iteration of AI, then please link it.
I guess we’ll make up the losses per unit at scale, and grow to infinity.
or if he's talking to us from 5 years in the future.
Ignoring the financial aspect, this all makes sense - one LLM is good, 100 is better, 1000 is better still. The whole analogy with the industrial revolution makes sense to me.
> AI firms are currently burning money as fast as possible selling a dollar for 10 cents.
The financial aspect is interesting, but we're dealing with today's numbers, and those numbers have been changing fast over the last few years. I'm a big fan of Ed Zitron's writing, and he makes some really good points, but I think condemning all creative uses of LLMs because of the finances is counterproductive. Working out how to use this technology well, despite the finances not making much sense, is still useful.