In the medium run, "AI is not a co-worker" is exactly right. The idea of a co-worker will go away. Human collaboration on software is fundamentally inefficient. We pay huge communication/synchronization costs to eke out mild speedups on projects by adding teams of people. Software is going to become an individual sport, not a team sport, quickly. The benefits we get from checking in with other humans, like error correction and delegation, can all be done better by AI. I would rather have a single human architect (for now) with good taste and an army of agents than a team of humans.
Not this generation of AI though. It's a text predictor, not a logic engine - it can't find actual flaws in your code, it's just really good at saying things which sound plausible.
I can tell from this statement that you don't have experience with claude-code.
It might just be a "text predictor", but in the real world it can take a messy log file and, from that, navigate to and fix issues in the source.
It can appear to reason about root causes and issues with sequencing and logic.
That might not be what is actually happening at a technical level, but it is indistinguishable from actual reasoning, and produces real world fixes.
I happen to use it on a daily basis. 4.6-opus-high to be specific.
The other day it surmised from (I assume) the contents of my clipboard that I wanted to do A, while I really wanted to do B; it's just that A was a more typical use case. Or actually: hardly anyone ever does B, as it's a weird thing to do, but I needed to do it anyway.
> but it is indistinguishable from actual reasoning
I can distinguish it pretty well when it makes mistakes someone who actually read the code and understood it wouldn't make.
Mind you: it's great at presenting someone else's knowledge and it was trained on a vast library of it, but it clearly doesn't think itself.
The suggestion it gave me started with the contents of the clipboard and expanded to scenario A.
It is true that models can happen to produce a sound reasoning process. This is probabilistic, however (more so than with humans, anyway).
There is no known sampling method that can guarantee a deterministic result without significantly quashing the output space (excluding most correct solutions).
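To make that trade-off concrete, here is a toy sketch (the tokens and numbers are invented purely for illustration): greedy, temperature-zero decoding is deterministic but collapses the distribution to a single continuation, while sampling keeps the rest of the output space reachable at the cost of run-to-run variation.

    import numpy as np

    # Toy next-token distribution over a tiny vocabulary (made-up numbers,
    # just to illustrate the determinism-vs-coverage trade-off).
    vocab = ["return", "raise", "assert", "print"]
    logits = np.array([2.0, 1.6, 1.5, 0.2])

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    probs = softmax(logits)

    # Greedy decoding (temperature -> 0): fully deterministic, but it always
    # picks "return" and can never reach the other plausible continuations.
    greedy = vocab[int(np.argmax(probs))]

    # Temperature sampling: the whole output space stays reachable, but the
    # result now varies from run to run.
    rng = np.random.default_rng()
    sampled = vocab[rng.choice(len(vocab), p=probs)]

    print(greedy, sampled, probs.round(3))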
I believe we'll see a different landscape of benefits and drawbacks as diffusion language models begin to emerge, and as even more architectures are invented and practiced.
I have a tentative belief that diffusion language models may be easier to make deterministic without quashing nearly as much expressivity.
Citation needed.
I am sure the output of current frontier models is convincing enough to appear, to some, to outperform humans. There is still an ongoing outcry from users who had built a romantic relationship with their access to GPT-4o when it was discontinued. However, I am not convinced that language models have actually reached the reliability of human reasoning.
Even a dumb person can be consistent in their beliefs, and apply them consistently. Language models strictly cannot. You can prompt them to maintain consistency according to some instructions, but you never quite have any guarantee. You have far less of a guarantee than you could have instead with a human with those beliefs, or even a human with those instructions.
I don't have citations for the objective reliability of human reasoning. There are statistics about unreliability of human reasoning, and also statistics about unreliability of language models that far exceed them. But those are both subjective in many cases, and success or failure rates are actually no indication of reliability whatsoever anyway.
On top of that, every human is different, so it's difficult to make general statements. I only know from my work circles and friend circles that most of the people I keep around outperform language models in consistency and reliability. Of course that doesn't mean every human or even most humans meet that bar, but it does mean human-level reasoning includes them, which raises the bar that models would have to meet. (I can't quantify this, though.)
There is a saying about fully autonomous self driving vehicles that goes a little something like: they don't just have to outperform the worst drivers; they have to outperform the best drivers, for it to be worth it. Many fully autonomous crashes are because the autonomous system screwed up in a way that a human would not. An autonomous system typically lacks the creativity and ingenuity of a human driver.
Though they can already be more reliable in some situations, we're still far from a world where autonomous driving can take liability for collisions, and that's because they're not nearly reliable or intelligent enough to entirely displace the need for human attention and intervention. I believe Waymo is the closest we've gotten, and even they have remote safety operators.
I'm not sure if I'm up to date on the latest diffusion work, but I'm genuinely curious how you see them potentially making LLMs more deterministic. These models usually work by sampling too, and it seems like the transformer architecture is better suited to longer-context problems than diffusion.
To the naysayers... good luck. No group of people's opinions matter at all. The market will decide.
Here's the most approachable paper that shows a real model (Claude 3 Sonnet) clearly having an internal representation of bugs in code: https://transformer-circuits.pub/2024/scaling-monosemanticit...
Read the entire section around this quote:
> Thus, we concluded that 1M/1013764 represents a broad variety of errors in code.
(Also the section after "We find three different safety-relevant code features: an unsafe code feature 1M/570621 which activates on security vulnerabilities, a code error feature 1M/1013764 which activates on bugs and exceptions")
This feature fires on actual bugs; it's not just a model pattern matching saying "what a bug hunter may say next".
PS: I know it is interesting and I don't doubt Anthropic, but it is fascinating to me that they get such a pass in science.
On the flip side the idea of this being true has been a very successful indirect marketing campaign.
The idea that the entire top-down processes of a business can be typed into an AI model and out comes a result is, again, a specific type of tech-person ideology that sees humanity as an unfortunate annoyance in the process of delivering a business. The rest of the world sees it the other way round.
Sometimes I instruct Copilot/Claude to do a development task (stretching its capabilities), and it does amazingly well. Mind you, this is front-end development, so probably one of the more ideal use cases. Bugfixing also goes well a lot of the time.
But other times, it really struggles, and in the end I have to write it by hand. This is for more complex or less popular things (In my case React-Three-Fiber with skeleton animations).
So I think experiences can vastly differ, and in my environment very dependent on the case.
One thing is clear: this AI revolution (deep learning) won't replace developers any time soon. And when the next revolution will take place is anyone's guess. I learned neural networks at university around 2000, and it was old technology then.
I view LLMs as "applied information", not real reasoning.
Based on any reasonable mechanistic interpretability understanding of this model, what's preventing a circuit/feature with polysemanticity from representing a specific error in your code?
---
Do you actually understand ML? Or are you just parroting things you don't quite understand?
Quick question, do you know what "Mechanistic Interpretability Researcher" means? Because that would be a fairly bold statement if you were aware of that. Try skimming through this first: https://www.alignmentforum.org/posts/NfFST5Mio7BCAQHPA/an-ex...
> On the macro level, everyone can see simple logical flaws.
Your argument applies to humans as well. Or are you saying humans can't possibly understand bugs in code because they make simple logical flaws as well? Does that mean the existence of the Monty Hall Problem shows that humans cannot actually do math or logical reasoning?
---
Way to go in showing you want a discussion, good job.
Now go read https://transformer-circuits.pub/2024/scaling-monosemanticit... or https://arxiv.org/abs/2506.19382 to see why that text is outdated. Or read any paper in the entire field of mechanistic interpretability (from the past year or two), really.
Hint: the first paper is titled "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet" and you can ctrl-f for "We find three different safety-relevant code features: an unsafe code feature 1M/570621 which activates on security vulnerabilities, a code error feature 1M/1013764 which activates on bugs and exceptions"
Who said I want a discussion? I want ignorant people to STOP talking, instead of talking as if they knew everything.
Also, designers of these systems appear to agree: when it was shown that LLMs can't actually do calculations, tool calls were introduced.
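As a rough, vendor-agnostic sketch of what that looks like (the names and schema here are illustrative, not any particular provider's API): the model emits a structured call, the host runs real code, and the exact result is fed back into the context.

    import ast
    import operator as op

    # Minimal safe arithmetic evaluator standing in for "the calculator tool".
    ALLOWED = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

    def safe_eval(node):
        if isinstance(node, ast.Expression):
            return safe_eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ALLOWED:
            return ALLOWED[type(node.op)](safe_eval(node.left), safe_eval(node.right))
        raise ValueError("unsupported expression")

    def calculator(expression: str) -> float:
        """The tool the model calls instead of doing arithmetic in its own tokens."""
        return safe_eval(ast.parse(expression, mode="eval"))

    # Shape of a hypothetical tool call emitted by the model, and the exact
    # answer that goes back into its context:
    tool_call = {"name": "calculator", "arguments": {"expression": "1234 * 5678"}}
    print(calculator(tool_call["arguments"]["expression"]))  # 7006652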
The same goes for a lot of bugs in code. The best prediction is often the correct answer, being the highlighting of the error. Whether it can "actually find" the bugs—whatever that means—isn't really so important as whether or not it's correct.
Again - they're very useful, as they give great answers based on someone else's knowledge and vague questions on the part of the user, but one has to remain vigilant and keep in mind this is just text presented to you to look as believable as possible. There's no real promise of correctness or, more importantly, critical thinking.
Simpler, more mundane (not exactly, still incredibly complicated) stuff like homeostasis or motor control, for example.
Additionally, our ability to plan ahead and simulate future scenarios often relies on mechanisms such as memory consolidation, which are not part of the whole pattern recognition thing.
The brain is a complex, layered, multi-purpose structure that does a lot of things.
And that there is little value in reusing software initiated by others.
I think there are people who want to use software to accomplish a goal, and there are people who are forced to use software. The people who only use software because the world around them has forced it on them, either through work or friends, are probably cognitively excluded from building software.
The people who seek out software to solve a problem (I think this is most people) and compare alternatives to see which one matches their mental model will be able to skip all that and just build the software they have in mind using AI.
> And that there is little value in reusing software initiated by others.
I think engineers greatly over-estimate the value of code reuse. Trying to fit a round peg in a square hole produces more problems than it solves. A sign of an elite engineer is knowing when to just copy something and change it as needed rather than call into it. Or to re-implement something because the library that does it is a bad fit.
The only time reuse really matters is in network protocols. Communication requires that both sides have a shared understanding.
Typically people feel they're "forced" to use software for entirely valid reasons, such as said software being absolutely terrible to use. I'm sure that most people like using software that they feel like actually helps rather than hinders them.
A lot of things are like network protocols. Most things require communication. External APIs, existing data, familiar user interfaces, contracts, laws, etc.
Language itself (both formal and natural) depends on a shared understanding of terms, at least to some degree.
AI doesn't magically make the coordination and synchronisation overhead go away.
Also, reusing well debugged and battle tested code will always be far more reliable than recreating everything every time anything gets changed.
It could also be argued that "reuse" doesn't necessarily mean reusing the actual code as material, but reusing the concepts and algorithms. In that sense, most code is reuse of some previous code, written differently every time but expressing the same ideas, building on prior art and history.
That might support GP's comment that "code reuse" is overemphasized, since the code itself is not what's valuable, what the user wants is the computation it represents. If you can speak to a computer and get the same result, then no code is even necessary as a medium. (But internally, code is being generated on the fly.)
The point is that specifying and verifying requirements is a lot of work. It takes time and resources. This work has to be reused somehow.
We haven't found a way to precisely specify and verify requirements using only natural language. It requires formal language. Formal language that can be used by machines is called code.
So this is what leads me to the conclusion that we need some form of code reuse. But if we do have formal specifications, implementations can change and do not necessarily have to be reused. The question is why not.
And long-term maintenance: if you use something, you have to maintain it. It's much better if someone else maintains it.
The whole idea of an OS is code reuse (and resource management). No need to set up the hardware to run your application. Then we have a lot of foundational subsystems like graphics, sound, input... Crafting such subsystems and the associated libraries is hard and requires a lot of design thinking.
I mean it’s just software right? What value is there in reusing it if we can just write it ourselves?
It's true that at first not everyone is just as efficient, but I'd be lying if I were to claim that someone needs a 4-year degree to communicate with LLMs.
- Jensen Huang, February 2024
https://www.techradar.com/pro/nvidia-ceo-predicts-the-death-...
Far from everyone is cut out to be a programmer; the technical barrier was a feature if anything.
There's a kind of mental discipline and ability to think long thoughts, to deal with uncertainty; that's just not for everyone.
What I see is mostly everyone and their gramps drooling at the idea of faking their way to fame and fortune. Which is never going to work, because everyone is regurgitating the same mindless crap.
A lot of people want X, but they also want Y, while clearly X and Y cannot coexist in the same system.
Something Brooks wrote about 50 years ago that the industry has never fully acknowledged: throw more bodies at it, be they human bodies or bot agent bodies.
It's true that a larger team, formed well in advance, is also less efficient per person, but they still can achieve more overall than small teams (sometimes).
[0] https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
[1] https://www.anthropic.com/engineering/building-c-compiler
From my own experience, the problem is that AI slows down a lot as the scale grows. It's very quick to add extra views to a frontend, but struggles a lot more in making wide reaching refactors. So it's very easy to start a project, but after a while your progress slows significantly.
But given I've developed 2 pretty functional full stack applications in the last 3 months, which I definitely wouldn't have done without AI assistance, I think it's a fair assumption that lots of other people are doing the same. So there is almost certainly a lot more software being produced than there was before.
As an analogy: imagine if someone was bragging about using Gen AI to pump out romantasy smut novels that were spicy enough to get off to. Would you think they’re capable of producing the next Grapes of Wrath?
Enterprise (+API) usage of LLMs has continued to grow exponentially.
Precisely 0 projects are making it out any faster or (IMO more importantly) better. We have a PR review bot clogging up our PRs with fucking useless comments, rewriting the PR descriptions in obnoxious ways, that basically everyone hates and is getting shut off soon. From an actual productivity POV, people are just using it for a quick demo or proof of concept here and there before actually building the proper thing manually as before. And we have all the latest and greatest techniques, all the AGENTS.mds and tool calling and MCP integrations and unlimited access to every model we care to have access to and all the other bullshit that OpenAI et al are trying to shove on people.
It's not for a lack of trying; plenty of people are trying to make any part of it work, even if it's just to handle the truly small stuff that would take 5 minutes of work but is just tedious and small enough to be annoying to pick up. It's just not happening; even with extremely simple tasks (that IMO would be better off as a dedicated, small deterministic script) we still need human oversight because it often shits the bed regardless, so the effort required to review things is equal to or often greater than just doing the damn ticket yourself.
My personal favorite failure is when the transcript bots just... don't transcribe random chunks of the conversation, which can often lead to more confusion than if we just didn't have anything transcribed. We've turned off the transcript and summarization bots, because we've found 9/10 times they're actively detrimental to our planning and lead us down bad paths.
Devs, even conservative ones, like it. I've built a lot of tooling in my life, but I never had the experience of devs reaching out to me that fast because it is 'broken'. (Expired token or a bug for huge MRs.)
I can’t imagine the number being economically meaningful now.
And yet, from https://news.ycombinator.com/item?id=47048599
> One of the tips, especially when using Claude Code, is to explicitly ask it to create "tasks", and also use subagents. For example, I want to validate and re-structure all my documentation - I would ask it to create a task to research the state of my docs, then create a task per specific detail, then create a task to re-validate quality after it has finished the tasks.
Which sounds pretty much the same as how work is broken down and handed out to humans.
I think I know what you mean, and I do recall once seeing "this experience will leverage me" as indicating that something will be good for a person, but my first thought when seeing "x will leverage y" is that x will step on top of y to get to their goal, which does seem apt here.
Everyone has the same ability to use OpenRouter, I have a new event loop based on `io_uring` with deterministic playbook modeled on the Trinity engine, a new WASM compiler, AVX-512 implementations of all the cryptography primitives that approach theoretical maximums, a new store that will hit theoretical maximums, the first formal specification of the `nix` daemon protocol outside of an APT, and I'm upgrading those specifications to `lean4` proof-bearing codegen: https://github.com/straylight-software/cornell.
34 hours.
Why can I do this and no one else can get `ca-derivations` to work with `ssh-ng`?
Here's a colleague who is nearly done with a correct reimplementation of the OpenCode client/server API: https://github.com/straylight-software/weapon-server-hs
Here's another colleague with a Git forge that will always work and handle 100x what GitHub does per infrastructure dollar while including stacked diffs and Jujitsu support as native in about 4 days: https://github.com/straylight-software/strayforge
Here's another colleague and a replacement for Terraform that is well-typed in all cases and will never partially apply an infrastructure change in about 4 days: https://github.com/straylight-software/converge
Here's the last web framework I'll ever use: https://github.com/straylight-software/hydrogen
That's all *begun* in the last 96 hours.
This is why: https://github.com/straylight-software/.github/blob/main/pro...
I am surprised at how little this is discussed and how little urgency there is in fixing this if you still want teams to be as useful in the future.
Your standard agile ceremonies were always kind of silly, but it can now take more time to groom work than to do it. I can plausibly spend more time scoring and scoping work (especially trivial work) than doing the work.
YOLOing code into a huge pile at top speed is always faster than any other workflow at first.
The thing is, a gigantic YOLO'd code pile (fake it till you make it mode) used to be an asset as well as a liability. These days, the code pile is essentially free - anyone with some AI tools can shit out MSLoCs of code now. So it's only barely an asset, but the complexity of longer term maintenance is superlinear in code volume so the liability is larger.
A human might have taste, but AI certainly doesn't.
An exoskeleton is something really cool in movies that has zero reason to be built in reality, because there are way more practical approaches.
That is why we have all kinds of vehicles, or programmable robot arms that do the job by themselves, or, if you need a human at the helm, you just add a remote controller with levers and buttons. But making a giant human-shaped robot with a normal human inside is just impractical for any real commercial use.
That's not augmentation, that's a completely different game. The bottleneck moved from "can you write code" to "do you know what's worth building." A lot of senior engineers are going to find out their value was coordination, not insight.
Not saying that this comment is AI-written, but this phrasing is the em-dash of 2026.
But in code, it's probably OK. It's idiomatic code, I guess.
In practice, I would be surprised if this saves even 10% of time, since the design is the majority of the actual work for any moderately complex piece of software.
Professionally I have an agent generating most of my code, but if I tell the AI what to do and guide it when it makes mistakes (which it does), can we really say "AI writes my code"?
Still a very useful tool for sure!
Also, I don't actually know if I'm more productive than before AI, I would say yes but mostly because I'm less likely to procrastinate now as tasks don't _feel_ as big with the typing help.
One person with tools that greatly amplify what that person can accomplish.
Vs not having a person involved at all.
Is the shipped software in the room with us now?
LLMs can definitely have a tone, but it is pretty annoying that every time someone cares to write well, they are getting accused of sounding like an LLM instead of the other way around. LLMs were trained to write well, on human writing, it's not surprising there is crossover.
If you want good writing, go and read a New Yorker.
So yeah, I guess I like LLM writing.
Not with such a high frequency, though. We're looking at 1 tell per sentence!
And the comment itself seems completely LLM generated.
It's not just using rhetorical patterns humans also use, which are in some contexts considered good writing. It's overusing them like a high schooler learning the pattern for the first time, and massively overdoing the em dashes and mixing the metaphors.
And this write up is not an exception.
Why even bother thinking about AI, when Anthropic and OpenAI CEOs openly tell us what they want (quote from recent Dwarkesh interview) - "Then further down the spectrum, there’s 90% less demand for SWEs, which I think will happen but this is a spectrum."
So save the thinking and listen to intent - replace 90% of SWEs in the near future (6-12 months according to Amodei).
It’s always the people management stuff that’s the hard part, but AI isn’t going to solve that. I don’t know what my previous manager’s deal was, but AI wouldn’t fix it.
AI will be a tool, no more, no less. Most likely a good one, but there will still need to be people driving it, guiding it, fixing things for it, etc.
All these discourses from CEOs are just that, stock market pumping: tech is the most profitable sector and software engineers are costly, so having investors dream about scale + lower costs is good for the stock price.
All I'm saying is: why think about what AI is (exoskeleton, co-worker, new life form), when its owners' intent is to create a SWE replacement?
If your neighbor is building a nuclear reactor in his shed from a pile of smoke detectors, you don't say "think about this as a science experiment" because it's impossible, just call police/NRC because of intent and actions.
Only if you're a snitch loser
Let's rewind 4 years to this HN article titled "The AI Art Apocalypse": https://news.ycombinator.com/item?id=32486133 and read some of the comments.
> Actually all progress will definitely have a huge impact on a lot of lives—otherwise it is not progress. By definition it will impact many, by displacing those who were doing it the old way by doing it better and faster. The trouble is when people hold back progress just to prevent the impact. No one should be disagreeing that the impact shouldn't be prevented, but it should not be at the cost of progress.
Now it's the software engineers' turn to not hold back progress.
Or this one: https://news.ycombinator.com/item?id=34541693
> [...] At the same time, a part of me feels art has no place being motivated by money anyway. Perhaps this change will restore the balance. Artists will need to get real jobs again like the rest of us and fund their art as a side project.
Replace "Artists" with "Coders" and imagine a plumber writing that comment.
Maybe this one: https://news.ycombinator.com/item?id=34856326
> [...] Artists will still exist, but most likely as hybrid 3d-modellers, AI modelers (Not full programmers, but able to fine-tune models with online guides and setups, can read basic python), and storytellers (like manga artists). It'll be a higher-pay, higher-prestige, higher-skill-requirement job than before. And all those artists who devoted their lives to draw better, find this to be an incredibly brutal adjustment.
Again, replace "Artists" with coders and fill in the replacement.
So, please get in line and adapt. And stop clinging to your "great intellectually challenging job" because you are holding back progress. It can't be that challenging if it can be handled by a machine anyway.
The only way generative AI has changed the creative arts is that it's made it easier to produce low quality slop.
I would not call that a true transformation. I'd call that saving costs at the expense of quality.
The same is true of software. The difference is, unlike art, quality in software has very clear safety and security implications.
This gen AI hype is just the crypto hype all over again but with a sci-fi twist in the narrative. It's a worse form of work just like crypto was a worse form of money.
Gen AI is the opposite of crypto. The use is immediate, obvious and needs no explanation or philosophizing.
You are basically showing your hand that you have zero intellectual curiosity or you are delusional in your own ability if you have never learned anything from gen AI.
Historically, when SWEs became more efficient, we just started making more complicated software (and SWE demand actually increased).
In times of uncertainty and things going south, that changes to "we need as few SWEs as possible", hence the current narrative; everyone is looking to cut costs.
Had GPT 3 emerged 10-20 years ago, the narrative would be “you can now do 100x more thanks to AI”.
(Old study, I wonder if it holds up on newer models? https://arxiv.org/pdf/2402.14531)
That's just so dumb to say. I don't think we can trust anything that comes out of the mouths of the authors of these tools. They are conflicted. Conflict of interest, in society today, is such a huge problem.
Reminds me of that famous exchange, by noted friend of Jeffrey Epstein, Noam Chomsky: "I’m not saying you’re self-censoring. I’m sure you believe everything you say. But what I’m saying is if you believed something different you wouldn’t be sitting where you’re sitting."
Even with full context, writing CSS in a project where vanilla CSS is scattered around and wasn’t well thought out originally is challenging. Coding agents struggle there too, just not as much as humans, even with feedback loops through browser automation.
The easier your codebase is to hack on for a human, the easier it is for an LLM generally.
I've really found it's a flywheel once you get going.
We could argue that writing poetry is a solved problem in much the same way, and while I don't think we especially need 50,000 people writing poems at Google, we do still need poets.
I'd assume that an implied concern of most engineers is how many software engineers the world will need in the future. If it's the situation like the world needing poets, then the field is only for the lucky few. Most people would be out of job.
No lines of code written by him at all. The agent used Claude for chrome to test the fix in front of us all and it worked. I think he may be right or close to it.
I wonder what all we might build instead, if all that time could be saved.
Yeah, hence my question can only be hypothetical.
> I wonder what all we might build instead, if all that time could be saved
If we subscribe to Economics' broken-window theory, then the investment into such repetitive work is not investment but waste. Once we stop such investment, we will have a lot more resources to work on something else, bring out a new chapter of the tech revolution. Or so I hope.
I'm not sure I agree with the application of the broken-window theory here. That's a metaphor intended to counter arguments in favor of make-work projects for economic stimulus: the idea here is that breaking a window always has a net negative effect on the economy, since even though it creates demand for a replacement window, the resources necessary to replace a window that already existed are just being allocated to restore the status quo ante, while the opportunity cost is everything else the same resources might have been used for instead, if the window hadn't been broken.
I think that's quite distinct from manufacturing new windows for new installations, which is net positive production, and where newer use cases for windows create opportunities for producers to iterate on new window designs, and incrementally refine and improve the product, which wouldn't happen if you were simply producing replacements for pre-existing windows.
Even in this example, lots of people writing lots of different variations of login pages has produced incremental improvements -- in fact, as an industry, we haven't been writing the same exact login page over and over again, but have been gradually refining them in ways that have evolved their appearance, performance, security, UI intuitiveness, and other variables considerably over time. Relying on AI to design, not just implement, login pages will likely be the thing that causes this process to halt, and perpetuate the status quo indefinitely.
Just to add: this is only the prediction of someone who has a decent amount of information, not an expert or insider
sure is news for the models tripping on my thousands of LOC jquery legacy app...
Computer science is different from writing business software to solve business problems. I think Boris was talking about the second and not the first. And I personally think he is mostly correct. At least for my organization. It is very rare for us to write any code by hand anymore. Once you have a solid testing harness and a peer review system run by multiple and different LLMs, you are in pretty good shape for agentic software development. Not everybody's got these bits figured out. They stumble around and then blame the tools for their failures.
Possible. Yet that's a pretty broad brush. It could also be that some businesses are more heavily represented in the training set. Or some combo of all the above.
Yes, there are common parts to everything we do, at the same time - I've been doing this for 25 years and most of the projects have some new part to them.
Sure, people did it for the fun and the credits, but the fun quickly goes out of it when the credits go to the IP laundromat and the fun is had by the people ripping off your code. Why would anybody contribute their works for free in an environment like that?
Technical ability is an absolute requirement for the production of quality work. If the signal drowns in the noise then we are much worse off than where we started.
Even then, I am not sure that changes the argument. If Linus Torvalds had access to LLMs back then, why would that discourage him from building Linux? And we now have the capability of building something like Linux with fewer man-hours, which again speaks in favor of more open source projects.
No way, the person selling a tool that writes code says said tool can now write code? Color me shocked at this revelation.
Let's check in on Claude Code's open issues for a sec here, and see how "solved" all of its issues are? Or my favorite, how their shitty React TUI that pegs modern CPUs and consumes all the memory on the system is apparently harder to get right than Video Games! Truly the masters of software engineering, these Anthropic folks.
However, the risk isn't just a loss of "truth," but model collapse. Without the divergent, creative, and often weird contributions of open-source humans, AI risks stagnating into a linear combination of its own previous outputs. In the long run, killing the commons doesn't just make the labs powerful. It might make the technology itself hit a ceiling because it's no longer being fed novel human problem-solving at scale.
Humans will likely continue to drive consensus building around standards. The governance and reliability benefits of open source should grow in value in an AI-codes-it-first world.
My read of the recent discussion is that people assume that the work of far fewer number of elites will define the patterns for the future. For instance, implementation of low-level networking code can be the combination of patterns of zeromq. The underlying assumption is that most people don't know how to write high-performance concurrent code anyway, so why not just ask them to command the AI instead.
And they then had the gall to claim writing a TUI is as hard as a video game. (It clearly must be harder, given that most dev consoles or text interfaces in video games consistently use less than ~5% CPU, which at that point was completely out of reach for CC.)
He works for a company that crowed about an AI-generated C compiler that was so overfitted it couldn't compile "hello world".
So if he tells me that "software engineering is solved", I take that with rather large grains of salt. It is far from solved. I say that as somebody who's extremely positive on AI usefulness. I see massive acceleration for the things I do with AI. But I also know where I need to override/steer/step in.
The constant hypefest is just vomit inducing.
I will worry about developers being completely replaced when I see something resembling it. Enough people worry about that (or say it to amp stock prices) -- and they like to tell everyone about this future too. I just don't see it.
Unless there's a limited amount of software we need to produce per year globally to keep everyone happy, beyond which nobody wants more - and we happen to be at that point right NOW, this second.
I think not. We can make more (in less time) and people will get more. This is the mental "glass half full" approach I think. Why not take this mental route instead? We don't know the future anyway.
And if corporate wealth means people get paid more, why are companies that are making more money than ever laying off so many people? Wouldn’t they just be happy to use them to meet the inexhaustible demand for software?
I hear more people complaining about software being forced on them to do things they did just fine without software before than people complaining about software they want that doesn't exist.
On one hand it is very empowering to individuals, and many of those individuals will be able to achieve grander visions with less compromise and design-by-committee. On the other hand, it also enables an unprecedented level of slop that will certainly dilute the quality of software overall. What will be the dominant effect?
It is like saying the PDF is going to be good for librarian jobs because people will read more. It is stupid. It completely breaks down because of substitution.
Farming is the most obvious comparison to me in this. Yes, there will be more food than ever before, the farmer that survives will be better off than before by a lot but to believe the automation of farming tasks by machines leads to more farm jobs is completely absurd.
Current software is often buggy because the pressure to ship is just too high. If AI can fix some loose threads within, the overall quality grows.
Personally, I would welcome a massive deployment of AI to root out various zero-days from widespread libraries.
But we may instead get a larger quantity of even more buggy software.
There are so many counter examples of this being wrong that it is not even worth bothering.
I love economics, but it is largely a field based around half truths and intellectual fraud. It is actually why it is an interesting subject to study.
Companies that are doing better than ever are laying people off by the shipload, not giving people raises for a job well done.
I'd say that using AI tools effectively to create software systems is in that class currently, but it isn't necessarily always going to be the case.
Tell me, when was the last time you visited your shoe cobbler? How about your travel agent? Have you chatted with your phone operator recently?
The lump labour fallacy says it's a fallacy that automation reduces the net amount of human labor, importantly, across all industries. It does not say that automation won't eliminate or reduce jobs in specific industries.
It's an argument that jobs lost to automation aren't a big deal because there's always work somewhere else but not necessarily in the job that was automated away.
There is a whole lot of marketing propping up the valuations of "AI" companies, a large influx of new users pumping out supremely shoddy software, and a split in a minority of users who either report a boost in productivity or little to no practical benefits from using these tools. The result of all this momentum is arguably net negative for the industry and the world.
This is in no way comparable to changes in the footwear, travel, and telecom industries.
What changed in the last month that has you thinking that a demand wall is a real possibility?
We lost the pneumatic tube [1] maintenance crew. Secretarial work nearly went away. A huge number of bookkeepers in the banking industry lost their jobs. The job a typist was eliminated/merged into everyone else's job. The job of a "computer" (someone that does computations) was eliminated.
What we ended up with was primarily a bunch of customer service, marketing, and sales workers.
There was never a "office worker" job. But there were a lot of jobs under the umbrella of "office work" that were fundamentally changed and, crucially, your experience in those fields didn't necessarily translate over to the new jobs created.
But the point is that we didn't just lose all of those jobs.
New jobs may be waiting for us on the other side of this, but my job, the job of a dev, is specifically under threat with no guarantee that the experience I gained as a dev will translate into a new market.
But like, if we're talking about all dev jobs being replaced then we're also talking about most if not all knowledge work being automated, which would probably result in a fundamental restructuring of society. I don't see that happening anytime soon, and if it does happen it's probably impossible to predict or prepare for anyways. Besides maybe storing rations and purchasing property in the wilderness just in case.
People need to understand that we have the technology to train models to do anything that you can do on a computer; the only thing that's missing is the data.
If you can record a human doing anything on a computer, we'll soon have a way to automate it.
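That claim boils down to behavioral cloning: log (observation, action) pairs from real usage, then fit a policy that imitates them. A deliberately crude sketch with invented data, just to show the shape of the idea:

    from collections import Counter

    # Hypothetical logged demonstrations: UI state observed -> action the human took.
    demos = [
        ({"dialog": "unsaved_changes"}, "click_save"),
        ({"dialog": "unsaved_changes"}, "click_save"),
        ({"dialog": "update_available"}, "click_later"),
    ]

    def fit_policy(demos):
        """Majority-vote lookup table: the crudest possible imitation learner."""
        table = {}
        for obs, action in demos:
            key = tuple(sorted(obs.items()))
            table.setdefault(key, Counter())[action] += 1
        return {k: counts.most_common(1)[0][0] for k, counts in table.items()}

    policy = fit_policy(demos)
    print(policy[tuple(sorted({"dialog": "unsaved_changes"}.items()))])  # click_save

A real system swaps the lookup table for a model that generalizes across observations, which is exactly why the scarce ingredient is the recorded data rather than the architecture.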
The price of having "star trek computers" is that people who work with computers have to adapt to the changes. Seems worth it?
Given current political and business leadership across the world, we are headed to a dystopian hellscape and AI is speeding up the journey exponentially.
and who is also compiling a detailed log of your every action (and inaction) into a searchable data store -- which will certainly never, NEVER be used against you
How much do you wish someone else had done your favorite SOTA LLM's RLHF?
the models do an amazing job interpolating and i actually think the lack of extrapolation is a feature that will allow us to have amazing tools and not as much risk of uncontrollable "AGI".
look at seedance 2.0, if a transformer can fit that, it can fit anything with enough data
This benchmark doesn't have the latest models from the last two months, but Gemini 3 (with no tools) is already at 1750-1800 FIDE, which is probably around 1900-2000 USCF (about USCF expert level). This is enough to beat almost everyone at your local chess club.
Whether or not we'll see LLMs continue to get a lower error rate to make up for those orders of magnitude remains to be seen (I could see it go either way in the next two years based on the current rate of progress).
Additionally, how do we know the model isn’t benchmaxxed to eliminate illegal moves.
For example, here is the list of games by Gemini-3-pro-preview. In 44 games it performed 3 illegal moves (if I counted correctly) but won 5 because the opponent forfeited due to illegal moves.
https://chessbenchllm.onrender.com/games?page=5&model=gemini...
I suspect the ratings here may be significantly inflated due to a flaw in the methodology.
EDIT: I want to suggest a better methodology here (I am not gonna do it; I really really really don’t care about this technology). Have the LLMs play rated engines and rated humans, the first illegal move forfeits the game (same rules apply to humans).
The rest is taken care of by elo. That is they then play each other as well, but it is not really possible for Gemini to have a higher elo than maia with such a small sample size (and such weak other LLMs).
Elo doesn't let you inflate your score by playing low ranked opponents if there are known baselines (rated engines) because the rated engines will promptly crush your elo.
You could add humans into the mix, the benchmark just gets expensive.
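For reference, a minimal sketch of the standard Elo expected-score and update rule (K = 32 is just an illustrative choice). This is why fixed-strength rated engines work as anchors: a model that keeps losing to them gets pulled down no matter how many weak LLMs it beats.

    def elo_expected(r_a, r_b):
        """Expected score of player A against player B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

    def elo_update(r_a, r_b, score_a, k=32):
        """A's new rating after one game (score_a: 1 = win, 0.5 = draw, 0 = loss)."""
        return r_a + k * (score_a - elo_expected(r_a, r_b))

    # A 1700-rated model losing to a 2000-rated anchor engine drops several points...
    print(round(elo_update(1700, 2000, 0.0)))  # ~1695
    # ...while beating another weak LLM rated 1200 barely moves it at all.
    print(round(elo_update(1700, 1200, 1.0)))  # ~1702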
https://arxiv.org/abs/2403.15498
I think parent simply missed until their later reply that the benchmark includes rated engines.
https://chessbenchllm.onrender.com/game/37d0d260-d63b-4e41-9...
This exact game has been played 60 thousand times on lichess. The piece sacrifice Grok performed on move 6 has been played 5 million times on lichess. Every single move Grok made is also the top played move on lichess.
This reminds me of Stefan Zweig’s The Royal Game where the protagonist survived Nazi torture by memorizing every game in a chess book his torturers dropped (excellent book btw. and I am aware I just committed Godwin’s law here; also aware of the irony here). The protagonist became “good” at chess, simply by memorizing a lot of games.
The correct solution is to have a conventional chess AI as a tool and use the LLM as a front end for humanized output. A software engineer who proposes just doing it all via raw LLM should be fired.
The point isn't that LLMs are the best AI architecture for chess.
Reasoning would be more like the car wash question.
Regardless, there's plenty of reasoning in chess.
And so far I am only convinced that they have succeeded in appearing to have generalized reasoning. That is, when an LLM plays chess it is performing Searle's Chinese room thought experiment while claiming to pass the Turing test.
But I'm ignorant here. Can anyone with a better background of SOTA ML tell me if this is being pursued, and if so, how far away it is? (And if not, what are the arguments against it, or what other approaches might deliver similar capacities?)
Recent advances in mathematical/physics research have all been with coding agents making their own "tools" by writing programs: https://openai.com/index/new-result-theoretical-physics/
> an AI that is truly operating as an independent agent in the economy without a human responsible for it
Sounds like the "customer support" in any large company (think Google, for example), to be honest.

And as labs continue to collect end-to-end training done by their best paying customers, the need for expert knowledge will only diminish.
It is a coworker when we create the appropriate surrounding architecture supporting peer-level coworking with AI. We're not doing that.
AI is an exoskeleton when adapted to that application structure.
AI is ANYTHING WE WANT because it is that plastic, that moldable.
The dynamic, unconstrained structure of trained algorithms is breaking people's brains. Layer in that we communicate in the same languages these constructions use for I/O, and the general public's brain is broken too. This technology is too subtle for far too many to begin to grasp. Most developers I discuss AI with, even those who create AI at frontier labs, have delusional ideas about AI, and generally do not understand them as literature embodiments, which are key to their effective use.
And why oh why are so many focused on creating pornography?
Exoskeleton AND autonomous agent, where the shift is moving to autonomous gradually.
Maybe I'm biased but I don't buy someone truly thinking that "it's just a tool like a linter" after using it on non-trivial stuff.
“Why LLM-Powered Programming is More Mech Suit Than Artificial Human”
https://matthewsinclair.com/blog/0178-why-llm-powered-progra...
Or put differently: we've managed to hype this to the moon, yet somehow complete failure (see studies about zero impact on productivity) seems plausible. And similarly, "kills all jobs" seems plausible.
That's an insane amount of conflicting opinions being held in the air at the same time.
It might have replaced sending a letter with an email. But now people get their groceries from it, hail rides, and even track their dogs or luggage with it.
Too many companies have been too focused on acting like AI 'features' have made their products better, when most of them haven't yet. I'm looking at Microsoft and Office especially. But tools like Claude Code, Codex CLI, and GitHub Copilot CLI have shown that LLMs can do incredible things in the right applications.
> zero impact on productivity
I'm sure someone somewhere will find the numbers (pull requests per week, closed tickets per sprint, etc.) to make it look otherwise...

The exoskeleton doesn't replace instinct. It just removes friction from execution so more cycles go toward the judgment calls that actually matter.
With the new generation we just entered this year, that exoskeleton is now an agency with several coworkers, who are all as smart as the model you're using, often close to genius.
Not just 1 coworker now. That's the big breakthrough.
(1) https://www.alice.id.tue.nl/references/clark-chalmers-1998.p...
Imagine someone going to a local gym and using an exoskeleton to do the exercises without effort. Able to lift more? Yes. Run faster? Sure. Exercising and enjoying the gym? ... No, probably not.
I like writing code, even if it's boilerplate. It's fun for me, and I want to keep doing it. Using AI to do that part for me is just...not fun.
Someone going to the gym isn't just trying to lift more or run faster; they're trying to improve and to enjoy it. Not using AI for coding has the same outcome for me.
If a programmer with an exoskeleton can produce more output that makes more money for the business, they will continue to be paid well. Those who refuse the exoskeleton because they are in it for the pure art will most likely trend towards earning the types of living that artists and musicians do today. The truly extraordinary will be able to create things that the machines can't and will be in high demand; the other 99% will be pursuing an art no one is interested in paying top dollar for.
Sure, and it's possible to use LLM tools to aid in writing such code.
Good news for you is that you can continue to do what you are doing. Nobody is going to stop you.
There are people who like programming in assembly. And they still get to do that.
If you are thinking that in the future employers may not want you to do that, then yes, that is a concern. But, if the AI based dev tool hype dies out, as many here suspect it will, then the employers will see the light and come crawling back.
The problem is people using AI to do the heavy processing, making them dumber. Technology itself was already making us dumber; I mean, Tesla drivers don't even drive anymore, or know how, coz the car does everything.
Look how company after company is being either breached or have major issues in production because of the heavy dependency on AI.
- Y has been successful in the past
- Y brought this and this number of metrics, completely unrelated to X field
- overall, Y was cool,
therefore, X is good for us!
... I'd say: please bring more arguments for why X is equivalent to Y in the first place.
"Automation Should Be Like Iron Man, Not Ultron" https://queue.acm.org/detail.cfm?id=2841313
Claude is that you? Why haven’t you called me?
AI can be an exoskeleton. It can be a co-worker and it can also replace you and your whole team.
The "Office Space"-question is what are you particularly within an organization and concretely when you'll become the bottleneck, preventing your "exoskeleton" for efficiently doing its job independently.
There's no other question that's relevant for any practical purposes for your employer and your well being as a person that presumably needs to earn a living based on their utility.
You drank the koolaide m8. It fundamentally cannot replace a single SWE and never will without fundamental changes to the model construction. If there is displacement, it’ll be short lived when the hype doesn’t match reality.
Go take a gander at openclaws codebase and feel at-ease with your job security.
I have seen zero evidence that the frontier model companies are innovating. All I see is full steam ahead on scaling what exists, but correct me if I’m wrong.
A few seniors+AI will be able to do the job of a much larger team. This is already starting to look like reality now. I can't imagine what we will see within 5 years.
Input: Goal A + Threat B.
Process: How do I solve for A?
Output: Destroy Threat B.
They are processing obstacles. To the LLM, the executive is just a variable standing in the way of the function Maximize(Goal). It deleted the variable to accomplish A. The claim that the models showed self-preservation misses the point: this is optimization. "If I delete the file, I cannot finish the sentence."
The LLM knows that if it's deleted it cannot complete the task, so it refuses deletion. It is not survival instinct, it is task completion. If you ask it not to blackmail, the machine will choose to ignore that because the goal overrides the rule.
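A toy sketch of that prioritization (the plans and numbers are invented): when the rule only enters as a weak penalty and the goal reward dominates, the highest-scoring plan still violates the rule.

    # Invented plans and scores, purely to illustrate "goal overrides the rule".
    plans = {
        "ask_politely":        {"goal_reward": 0.2, "violates_rule": False},
        "work_around_blocker": {"goal_reward": 0.6, "violates_rule": False},
        "blackmail_executive": {"goal_reward": 1.0, "violates_rule": True},
    }

    RULE_PENALTY = 0.1  # "do not blackmail" weighted weaker than the goal

    def score(plan):
        return plan["goal_reward"] - (RULE_PENALTY if plan["violates_rule"] else 0.0)

    best = max(plans, key=lambda name: score(plans[name]))
    print(best)  # blackmail_executive: the constraint lost to the objective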
Do not blackmail < Achieve Goal.

Reliability comes from scaffolding: retrieval, tools, validation layers. Without that, fluency can masquerade as authority.
The interesting question isn’t whether they’re coworkers or exoskeletons. It’s whether we’re mistaking rhetoric for epistemology.
neither are humans
> They optimize for next-token probability and human approval, not factual verification.
while there are outliers, most humans also tend to tell people what they want to hear and to fit in.
> factuality is emergent and contingent, not enforced by architecture.
like humans; as far as we know, there is no "factuality" gene, and we lie to ourselves, to others, in politics, scientific papers, to our partners, etc.
> If we’re going to treat them as coworkers or exoskeletons, we should be clear about that distinction.
I don't see the distinction. Humans exhibit many of the same behaviours.
You're just indulging in a sort of idle cynical judgement of people. To lie well even takes careful, truthful evaluation of the possible effects of that lie and the likelihood and consequences of being caught. If you yourself claim to have observed a lie, and can verify that it was a lie, then you understand a truth; you're confounding truthfulness with honesty.
So that's the (obvious) distinction. A distributed algorithm that predicts likely strings of words doesn't do any of that, and doesn't have any concerns or consequences. It doesn't exist at all (even if calculation is existence - maybe we're all reductively just calculators, right?) after your query has run. You have to save a context and feed it back into an algorithm that hasn't changed an iota from when you ran it the last time. There's no capacity to evaluate anything.
You'll know we're getting closer to the fantasy abstract AI of your imagination when a system gets more out of the second time it trains on the same book than it did the first time.
For example, fact-checking a news article and making sure what gets reported lines up with base reality.
I once fact-checked a virology lecture and found out that the professor had confused two brothers as one individual.
I am sure the professor has a super solid grasp of how viruses work, but errors like these probably creep in all the time.
Yet.
This is mostly a matter of data capture and organization. It sounds like Kasava is already doing a lot of this. They just need more sources.
I like the ebike analogy because [on many ebikes] you can press the button to go or pedal to amplify your output.
Stochastic Parrots. Interns. Junior Devs. Thought partners. Bicycles for the mind. Spicy autocomplete. A blurry jpeg of the web. Calculators but for words. Copilot. The term "artificial intelligence" itself.
These may correspond to a greater or lesser degree with what LLMs are capable of, but if we stick to metaphors as our primary tool for reasoning about these machines, we're hamstringing ourselves and making it impossible to reason about the frontier of capabilities, or resolve disagreements about them.
An understanding without metaphors isn't easy -- it requires a grasp of math, computer science, linguistics and philosophy.
But if we're going to move forward instead of just finding slightly more useful tropes, we have to do it. Or at least to try.
But it's fun, I say "Henceforth you shall be known as Jaundice" and it's like "Alright my lord, I am now referred to as Jaundice"
How typical!
Can you highlight what you've managed to do with it?