Let's properly analyze an AI article for once
221 points
2 days ago
| 17 comments
| nibblestew.blogspot.com
| HN
vunderba
1 day ago
[-]
Spot-on critical analysis of the blog post "Developers reinvented" by GitHub CEO Thomas Dohmke, which includes such quotes as:

> Many Computer Science (CS) programs still center around problems that AI can now solve competently.

Yeah. No, they do not. Competent CS programs focus on fundamentals, not your ability to invert a binary tree on a whiteboard. [1]

Replacing linear algebra and discrete mathematics with courses called "Baby's First LLM" and "Prompt Engineering for Hipster Doofuses" is as vapid as proposing that CS should include an entire course on how to use git.

[1] https://x.com/mxcl/status/608682016205344768

reply
Gigachad
1 day ago
[-]
Schools still make you learn math manually even though calculators have been perfect for decades, because it turns out that having some magic machine spit out an answer you can’t understand isn’t good: you’ll have no ability to tell when and why the answer is incorrect.
reply
xg15
1 day ago
[-]
I found Factorio to be a good paradigm for that. The core game loop is essentially: "Do a task manually, then automate it; then do a higher-level task manually that only became feasible through the automation; then automate that as well, etc. etc."

But throughout the game, you often drop back down into the lower level tasks, e.g. to understand problems or change the workflow. So in the end an understanding of the entire "stack" of tasks on different abstraction levels is necessary to make progress.

This always felt like a good analogy to programming, or really scientific knowledge in general.

reply
internet_points
1 day ago
[-]
Are there any games that teach automation like Factorio but that don't have that depressive dystopian magnasanti feel?
reply
drdrey
1 day ago
[-]
Dyson Sphere Program and a lot of Zachtronics games do that very well.
reply
pavel_lishin
1 day ago
[-]
Shapez 2 (and I guess Shapez 1) both feel cartoony and abstract enough that they don't make you feel like you're polluting a whole planet and killing the wildlife.

But that very same cartooniness also made it less interesting to me; the things you're producing are just too arbitrary.

reply
ThrowawayR2
1 day ago
[-]
Perhaps one of the successors to the original Tekkit mod for Minecraft, like Tekkit SMP? It's one of the Minecraft mods that inspired Factorio.
reply
jon_richards
1 day ago
[-]
Bombe is great. You might like the Zachtronics games. And there are tons of factory games now if you want Factorio with different window dressing.
reply
navane
1 day ago
[-]
Solving a problem, moving up an abstraction layer, solving that, etc. is inherently dystopian. That's how we got here.
reply
xg15
1 day ago
[-]
Huh? That seems just like the basic pattern of any kind of civilization or even just personal learning. How would you do it otherwise?

Unless you mean to imply that civilization itself is already dystopian and we should go back to hunting and gathering?

reply
KronisLV
19 hours ago
[-]
> Are there any games that teach automation like Factorio but that don't have that depressive dystopian magnasanti feel?

And as the opposite question: are there games that give more of that feeling?

I want to feel like I'm playing the human faction in Starship Troopers or on Pandora in Avatar, but in the more factory building sense, where you supply a war machine or the industrial capacity that will inevitably make the local ecosystems and planet perish.

On the brighter, more cheerful side, Satisfactory is great, and Captain of Industry might be worth a look (you're literally helping a settlement of humans survive); maybe Mindustry for something a bit simpler, or Factory Town. I'd also mention Urbek City Builder and Timberborn as loosely related, albeit they can feel more like puzzle games.

reply
felixhummel
1 day ago
[-]
Shapez
reply
mock-possum
1 day ago
[-]
Satisfactory?

Or for a different take, Opus Magnum?

reply
jacquesm
1 day ago
[-]
I think you should do without a calculator until the third year of high school. There really is no substitute for being able to do basic math in your head. It also helps you later on to spot order-of-magnitude errors and the like, as well as to make good first-order approximation estimates.
reply
wrs
1 day ago
[-]
In my ninth grade physics class we had to use a slide rule to do calculations for the first three months to help develop that intuition for orders of magnitude. Also because our teacher was just tired of people reporting results to nine significant figures for measurements they made with a meter stick.
reply
dleeftink
1 day ago
[-]
I'm a bit different, then: maths only started to make sense for me well after picking up a calculator. Wolfram notebooks, Excel and SQL were much easier to grok than attempting the equivalent by heart/head/hand.

Nowadays, math concepts or papers only make sense to me once I can properly implement them as a query; it's somehow a basic translation step I need.

reply
fspeech
1 day ago
[-]
Unfortunately the writer's own understanding of statistics is flawed. For example, sample size has little to do with population size but a lot to do with effect size: to demonstrate that a chemical is fatal, a sample size of 1 is often sufficient. Population comes into play if the variance of the effect is large relative to its average size. You need a good statistical sampling of the population to study the variance, but even then sample size is not determined by population size.
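
The effect-size point can be made concrete with the standard power-analysis formula for a two-sided z-test; this is a generic sketch with made-up inputs, and `needed_n` is a hypothetical helper name:

```python
import math

# Required sample size to detect a standardized effect size d:
# n ≈ ((z_{1-α/2} + z_{1-β}) / d)^2. Population size appears nowhere.
def needed_n(effect_size, z_alpha=1.96, z_power=0.84):  # α=0.05, power=0.80
    return math.ceil(((z_alpha + z_power) / effect_size) ** 2)

print(needed_n(0.8))  # large effect: 13 subjects suffice
print(needed_n(0.5))  # medium effect: 32 subjects needed
```

The same n works whether the population is a thousand or a billion; only the effect size and the desired power move the number.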
reply
charcircuit
1 day ago
[-]
>not your ability to invert a binary tree on a whiteboard.

Knowing how to swap 2 variables and traverse data structures are fundamentals.
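
Both fit in a few lines; a minimal Python sketch, using nested tuples as a stand-in tree representation:

```python
# Swap two variables (Python's tuple unpacking makes the temporary implicit).
a, b = 1, 2
a, b = b, a
print(a, b)  # 2 1

# Invert (mirror) a binary tree given as (value, left, right) tuples.
def invert(node):
    if node is None:
        return None
    value, left, right = node
    return (value, invert(right), invert(left))

tree = (1, (2, None, None), (3, (4, None, None), None))
print(invert(tree))  # (1, (3, None, (4, None, None)), (2, None, None))
```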

reply
funcDropShadow
1 day ago
[-]
The goal of teaching binary trees is not that you can write them in your sleep; the goal is to train your ability to derive algorithms and data structures. If you look at what a world-class soccer player does during training, most of it will never be applied identically during games. The same is true for university studies, if they focus on fundamentals.
reply
kubb
1 day ago
[-]
I’m surprised that the creator of Homebrew didn’t know how to do that.
reply
meindnoch
1 day ago
[-]
If you spend enough time with Homebrew, it's actually not that surprising.
reply
josephg
1 day ago
[-]
Of course, lots of people are employed despite giant holes in their knowledge of CS fundamentals. There’s more to being an effective developer than having good fundamentals. A lot more.

But there’s still a lot of very important concepts in CS that people should learn. Concepts like performance engineering, security analysis, reliability, data structures and algorithms. And enough knowledge of how the layers below your program works that you can understand how your program runs and write code which lives in harmony with the system.

This knowledge is way more useful than a lot of people claim. Especially in an era of chatgpt.

If you’re weak on this stuff, you can easily be a liability to your team. If your whole team is weak on this stuff, you’ll collectively write terrible software.

reply
meindnoch
1 day ago
[-]
Also, what people fail to realize is that the whiteboard coding interview was never about testing skills needed in your day-to-day work.

Most fighter pilots don't fly missions that require superhuman reaction time or enduring 9.5g acceleration either.

Whiteboard coding exercises are just a proxy for certain thinking skills, a kind of je ne sais quoi that successful engineers tend to have.

reply
nradov
1 day ago
[-]
Alternatively, whiteboard coding exercises are a hazing mechanism to weed out candidates who have certain forms of anxiety or can't perform well under extreme scrutiny and time pressure. Which could be valid selection criteria for certain jobs. But let's be honest and admit that whiteboard coding exercises aren't actually a proxy for anything else, or at least we have no scientific evidence on that point.
reply
QuadmasterXLII
1 day ago
[-]
First, whiteboard interviews were a great selection criterion for about 30 seconds, before it became public knowledge that being able to pass them was a ticket to a Google salary. Subsequently, they functioned at all while under the pressure of literally billions of people knowing that they were the ticket to a Google salary. A criterion surviving this second challenge is extremely impressive.

To put it another way: I can hire based on open source contributions instead of credentials and interview performance. If Google decided tomorrow to start hiring based on open source contributions, their new criteria would leak on Monday, and on Tuesday the pull request queues of every major project would simultaneously splatter like bugs on windshields.

reply
josephg
19 hours ago
[-]
> But let's be honest and admit that whiteboard coding exercises aren't actually a proxy for anything else, or at least we have no scientific evidence on that point.

Nah. Whiteboard interviews test a bunch of traits that are important in a job. They aren't designed to be a baroque hazing ritual.

More generally, we could make a list of desirable / necessary qualities in a good hire based on what they'll spend their time doing. Imagine you're hiring someone to work on a team writing a web app. Their job will involve writing JavaScript & CSS in a large project. So they need to write code, and read and debug code written by their coworkers. They will need to present their work regularly. And attend meetings. The resulting website needs to be fast, easy to use and reliable.

From that, we can brainstorm a list of skills a good applicant should have:

- Programming skills. JS + CSS specifically. Also reading & debugging skills.

- Communication skills. (Meetings, easy to work with, can explain & discuss ideas with coworkers, etc).

- Understanding of performance, UX concepts, software reliability, etc

- Knowledge of how web browsers work

- Capacity to learn & solve unexpected problems

And so on.

Now, an idealised interview process would assess a candidate on each of these qualities. Then rank candidates using some weighted score across all areas based on how important those qualities are. But that would take an insane amount of time. The ideal assessment would assess all of this stuff efficiently. So you want to somehow use a small number of tasks to assess everything on that big list.

Ideally, that's what whiteboard interviews are trying to do. They assess - all at once - problem-solving skills, capacity for learning, communication skills and ideally CS fundamentals. That's pretty good as far as single-task interviews go!

> we have no scientific evidence

There's a mountain of evidence. Almost all of it is proprietary, kept under lock and key by various large companies. The data I've seen shows success at whiteboard interviews is a positive signal in a candidate. Skill at whiteboard interviews is positively correlated with skill in other areas - but it's not a perfect correlation. Really, the problem isn't whiteboard interviews. It's that people think whiteboard interviews give you enough signal. They don't. They don't tell you how good someone is at programming or debugging. A good interview for a software engineer must assess technical skills as well.

Speaking as someone who's interviewed hundreds of candidates, yes. There are some people who will bomb a whiteboard interview but do well at other technical challenges you give them. But they are nowhere near as common as people on HN like to claim. Most people who are bad at whiteboard interviews are also bad at programming, and I wouldn't hire them anyway.

The reality is, most people who make something like Homebrew get hired. There's plenty of work in our industry for people with a track record of doing great work. Stop blaming the process.

reply
UncleMeat
1 day ago
[-]
He wasn't actually asked that question, but just used it as a stand-in for the entire category of interview questions.
reply
thrown-0825
1 day ago
[-]
Computer Science in academia is pretty out of line with a lot of skills that are actually used on a daily basis by professional software developers.

You can teach fundamentals all day long, but on their first day of work they are going to be asked to adhere to some internal corporate process so far removed from their academic experience that they will feel like they should have just taught themselves online.

reply
bregma
1 day ago
[-]
Computer programming is to computer science as working a cash register is to economics.
reply
Der_Einzige
1 day ago
[-]
If this is true then I hope AI kills both computer programming and computer science as fast as possible.

It’s not. If you’re a computer scientist who’s not coding, you are a bad computer scientist. “Those who cannot do, teach”

reply
deepburner
1 day ago
[-]
I don't know where that "Those who cannot do, teach" bullshit came from, but it's absolute nonsense someone made up to dunk on teachers.

It doesn't even make sense in your post, because "programming" isn't "doing computer science". You're not better than a teacher in any sense because you asked ChatGPT to generate some slop.

reply
mafuy
1 day ago
[-]
Sorry, but that's just nonsense. "Programming" has long been a low-level job; in Germany, it is taught as an apprenticeship-level program. CS at practically minded universities will instead, or rather on top, teach program architecture and planning. At theory-focused universities, you'll learn how an LLM actually works, how to design high- or low-level protocols, how to design a new algorithm suited to your environment, etc. The programming required for these can be done for personal recreation, be outsourced, or be replaced with an existing library as desired.
reply
rindalir
1 day ago
[-]
Professor Felleisen, is that you?
reply
evantbyrne
1 day ago
[-]
People love to make these weirdly diminutive comments about programmers, precisely because programmers are crushing it. Meanwhile most of the economy has been diverted to attempting to replicate the abilities of even just an entry-level drone who copy/pastes off Medium articles, which itself is heralded as a miracle.
reply
recipe19
1 day ago
[-]
I've heard that a number of times, but the vast majority of people who get into CS do it because they want a high-paying tech job, for which most of what they'll learn at university is borderline useless (and to the extent that cutting-edge CS research happens, academia nowadays usually trails the industry).

The problem, arguably, is that we don't have reputable trade schools that would actually teach what the students need. But if that changes, I think some CS departments will be in for a rude awakening.

reply
wavemode
1 day ago
[-]
There is such a thing as a Software Engineering degree.

The fact that people keep buying the wrong product (Computer Science degrees) and universities keep selling it, doesn't mean that there's something wrong with Computer Science.

reply
saagarjha
1 day ago
[-]
I don't get this viewpoint. Yes, of course when you start at your job you will have to learn how JIRA works or how to write a design doc. Obviously nobody is going to teach you that in college. But nobody is going to teach you that online either!
reply
mathiaspoint
14 hours ago
[-]
I learned git and project management by working with people in my college robotics club. (I already knew the barebones basics but that's where I learned the more complex stuff like rebasing.)

No professor or anyone else explained this; the other firmware programmers and I were on our own and had to figure out how to collaborate. That's what people mean when they say it's the social part of college that really matters, IMO.

reply
nradov
1 day ago
[-]
Writing a design document isn't really part of computer science (a branch of applied math). But most colleges also have some sort of project based software engineering courses that do teach more practical skills to students who want to work in industry.
reply
thrown-0825
1 day ago
[-]
How about performing git bisect to identify when a regression was introduced, or debugging a docker container that is failing to start in your local environment, writing some unit tests for a CI suite, merging a pull request that has some conflicts, etc etc etc.

These are just a couple of examples of things I see juniors really struggle with: day-one basics of the profession that are consistently missed by interview processes focused on academic knowledge.

People won't teach you how to solve these problems online, but you will learn how to solve them while teaching yourself.
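
For the first item, `git bisect start`, marking a known-good and a known-bad commit, and `git bisect run <test-cmd>` automate exactly a binary search over history. A toy model of what bisect does, with hypothetical commit names:

```python
# Model git bisect as binary search: commits[lo] is known good,
# commits[hi] is known bad; halve the interval until they are adjacent.
def first_bad(commits, is_good):
    lo, hi = 0, len(commits) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_good(commits[mid]):
            lo = mid      # regression is after mid
        else:
            hi = mid      # regression is at or before mid
    return commits[hi]    # first bad commit

history = ["c1", "c2", "c3", "c4", "c5", "c6"]
broken_since = {"c4", "c5", "c6"}  # regression introduced at c4
print(first_bad(history, lambda c: c not in broken_since))  # c4
```

Six commits take two probes here instead of a linear walk; on a real history of n commits it's about log2(n) builds.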

reply
lelanthran
1 day ago
[-]
> How about performing git bisect to identify when a regression was introduced, or debugging a docker container that is failing to start in your local environment, writing some unit tests for a CI suite, merging a pull request that has some conflicts, etc etc etc

That's called vocational training and isn't usually taught as part of academic curricula.

If you want non-academic graduates you've got your pick.

Maybe having a technical addendum to academic curricula that makes student work at the end of the studies a criterion for graduation would help. That's how it is done for doctors, lawyers and accountants, after all. The difference is that they graduate but can't practice until they have completed that training.

reply
throwup238
1 day ago
[-]
It’s not vocational training, it’s the equivalent to the lab portions of physical science classes. We don’t call pipetting vocational training for biologists or titration vocational for chemists, even though both will be doing a lot of that in professional and research careers.
reply
lelanthran
1 day ago
[-]
> It’s not vocational training, it’s the equivalent to the lab portions of physical science classes. We don’t call pipetting vocational training for biologists or titration vocational for chemists, even though both will be doing a lot of that in professional and research careers.

You're on the right path: learning how to use a specific tool produced by a specific $Corp is not vocational training, it's end-user training.

Learning pipetting and titration is very different from learning $MegaCorps software tool that will be replaced with something else in a few years.

Source: Me! I did undergrad in multiple different subjects, including chemistry, physics and biology.

Like I said, if you are looking for end-users, you don't have to search very hard. Universities should not be focused on training more end-users. It's fine if there's a half-credit or no-credit course somewhere on "How to use $PRODUCT".

A better option would be, like I said, industrial practice after graduation, before getting a license to practice, but that's way too professional for a field that seriously and unironically came up with SpaghettiCodeAsAFramework.

reply
throwup238
1 day ago
[-]
I’m not talking about SAP or Oracle databases or Matlab, but tools that are as fundamental to the application of computer science in industry as titration is to professional chemists.

There aren’t even that many of them: git, terminals and bash scripting, IDEs, and maybe a database. Vocational training would be stuff like managing VMs/cloud infrastructure, devops, testing, and so on that would be taught as dedicated classes.

The important thing is that these aren't skills with a dedicated class, but skills a student should pick up and master across half a dozen classes, with CS departments that coordinate their choice of tooling.

reply
thrown-0825
1 day ago
[-]
I agree with you.

SSH, and even just managing your dev env in a sane manner, are skills that I have to literally hand-hold people through on a regular basis and would fully expect people to have coming out of a 4-year degree.

Git and basic SQL are next on the list.

reply
mafuy
1 day ago
[-]
SSH is absolutely not a core CS skill. I use it daily, and I think students should pick it up somewhere along the line. But it is still a mere tool, not a concept that should be taught in mandatory classes. The same goes for LaTeX, git and C.

All of that is, or should be, vocational, because anyone can learn it given some time. Universities are about the hard stuff that is difficult to get right at even a mediocre level.

If your company requires it, include it in your regular training program. Don't dilute the material because you don't want to be bothered. If you think people spend too much time on hard stuff, hire BSc instead of MSc.

reply
nradov
1 day ago
[-]
Your expectations are completely irrational.
reply
PaulDavisThe1st
1 day ago
[-]
When learning how to git bisect, you're learning two things:

1. the specifics of how to use git (*) to carry out a task

2. the conceptual underpinnings of the task, which would exist whether you use git or perforce or bitwarden or any future RCS.

Being overly focused on either #1 or #2 is a mistake. It's not good understanding the task if you don't know how to use the tools you have right now to carry it out (or what available tools are appropriate). It's not good knowing how to run the tools if you don't understand what you're actually doing. The two go hand-in-hand.

(*) not exactly the product of a mega-corp

reply
layer8
1 day ago
[-]
If they learned how binary search works in university, they shouldn't have difficulty grasping what git bisect does. So that seems like a bad example.
reply
63stack
14 hours ago
[-]
Binary search can be explained in about 90 seconds, you don't need a university course on it either.
reply
saagarjha
1 day ago
[-]
Yes, and I did plenty of that during my university education. Except Docker because at the time I refused to use Docker.
reply
thrown-0825
1 day ago
[-]
Great, and if you got a job with us I would have to explain how Docker works, because you refused to learn it for some reason.

Point is that what is deemed important in academic circles is rarely important in practice, and when it is, I find it easier to explain a theory or algorithm than to teach a developer an industry-standard tool set.

We should be training devs like welders and plumbers instead of like mathematicians, because practically speaking the vast majority of them will never use that knowledge and will have to develop an entirely new skill set the day they graduate.

reply
mafuy
1 day ago
[-]
You're simply wrong about CS. CS is a science. You are not looking for scientists, you are looking for apprentices. Don't hire an architect to paint your wall. Don't hire scientists to be code monkeys. Universities are not the right place to look.

Except if you instead just want smart people - yeah, they tend to aggregate at unis. There, you can hire an athlete to paint your wall, if that's what you need.

reply
thrown-0825
19 hours ago
[-]
Stop kidding yourself, most CS grads are about as much of a scientist as your car mechanic.
reply
saagarjha
1 day ago
[-]
I use standard algorithms all the time. Sometimes I have to come up with new ones. And that's not just when I'm working on performance-sensitive roles.

Also, btw, I did eventually learn how to use Docker. I actually vaguely knew how it worked for a while, but I didn't want a Linux VM anywhere near my computer; eventually I capitulated, provided I didn't have a Linux VM running all the time.

reply
thunky
1 day ago
[-]
> I didn't want Linux VM anywhere near my computer

This is like taking auto mechanic classes and refusing to touch a car because of the grease.

reply
jrh3
1 day ago
[-]
Ditto. I avoided Docker as long as possible and look forward to the day it is replaced.
reply
nilamo
1 day ago
[-]
Why are you against running a vm? And a superior OS in the vm, to boot?

Your comment gives spooky vibes. Like, I'd expect you to avoid pattern matching, because "nested ifs work fine".

reply
lelanthran
1 day ago
[-]
IME it is far far easier to teach a CS graduate how to use some software than to teach a user basic CS principles.

Besides, at the rate of change we see in this industry, focusing on producing users instead of developers will make half the stuff outdated by the time the student graduates.

I mean, okay, let's teach Jira. Then the industry switches to Azure DevOps.

That's the general problem with vocational training: it ages much faster than academic stuff.

reply
thrown-0825
1 day ago
[-]
We have obviously had different experiences, but I would much rather have a high-level conversation on abstract academic topics like reinforcement learning, set theory and how it applies to SQL, Merkle trees, CPU architectures, etc. that a junior dev will be adjacent to but not necessarily need to interact with, than on something like AWS IAM roles or the various gotchas and foot guns in CSS.

I agree that vocational knowledge decays faster, which is why I would prefer stricter training and certification in those areas over something like building a compiler from scratch in your final year of undergrad, based on a textbook the professor wrote 10 years ago.

reply
janalsncm
1 day ago
[-]
A weaker version of your argument that might be more popular here involves math requirements.

I had to take calculus, and while I think it's good at teaching problem solving, that's probably the best thing I can say about it. Statistics, which was not required, would also check that box and is far more applicable on a regular basis.

Yes calculus is involved in machine learning research that some PhDs will do, but heck, so is statistics.

reply
RugnirViking
1 day ago
[-]
Calculus is used in basically everything hard or worthwhile. It's just about the most useful part of maths to learn, along with linear algebra, for doing real-world stuff.

I've personally used it in my career for machine learning, non ml image processing, robot control, other kinds of control, animations, movement in games, statistics, physics, financial modelling, and more.

reply
wizzwizz4
1 day ago
[-]
Except statistics requires calculus – otherwise it makes no sense, and you can't determine when the tools in the toolbox are applicable.
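
One concrete instance, as a sketch of my own (not from the thread): even the humble sample mean is a calculus fact, the minimizer of the sum of squared deviations, found by setting a derivative to zero.

```python
# f(m) = Σ (x - m)^2 is minimized where f'(m) = -2 Σ (x - m) = 0,
# i.e. at m = mean(x). Check numerically with a central difference.
xs = [2.0, 3.0, 5.0, 10.0]
mean = sum(xs) / len(xs)

def f(m):
    return sum((x - m) ** 2 for x in xs)

eps = 1e-6
deriv = (f(mean + eps) - f(mean - eps)) / (2 * eps)
print(mean, abs(deriv) < 1e-6)  # 5.0 True
```

The same derivative-equals-zero move underlies least squares, maximum likelihood, and most of the rest of the statistical toolbox.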
reply
crinkly
1 day ago
[-]
Strangely, I see a lot of people hitting things with the brute-force hammer quite regularly rather than using calculus.

Some of our financial modelling stuff was CPU-bound and took seconds, because someone couldn't be bothered, or didn't know how, to work out an integral.
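
A toy version of that trade-off (my illustration, not the actual model): pricing a continuous cash flow by brute-force summation versus the closed-form antiderivative.

```python
import math

# Present value of a 1-per-year continuous cash flow over T years at rate r.
r, T = 0.05, 30.0

# Calculus: ∫₀ᵀ e^(-rt) dt = (1 - e^(-rT)) / r, one expression, O(1).
closed = (1 - math.exp(-r * T)) / r

# Brute force: midpoint Riemann sum with 100k steps, O(n) on every call.
n = 100_000
dt = T / n
brute = sum(math.exp(-r * (i + 0.5) * dt) for i in range(n)) * dt

print(round(closed, 6), abs(closed - brute) < 1e-5)
```

Both agree to several decimal places, but the closed form is a hundred thousand times less work per evaluation, which is exactly the difference between milliseconds and seconds in an inner loop.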

reply
meindnoch
1 day ago
[-]
>Computer Science in academia is pretty out of line with a lot of skills that are actually used on a daily basis by professional software developers.

80% of software development boils down to:

1. Get JSON(s) from API(s)

2. Read some fields from each JSON

3. Create a new JSON

4. Send it to other API(s)

Eventually people stopped pretending that you need a CS degree for this, which spawned the coding bootcamp phenomenon. Alas, it was short-lived because ZIRP was killed, and as of late we've realized we don't even need humans for this kind of work!
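
The four steps above, as a self-contained sketch; the "APIs" here are stand-in functions with made-up payloads, not real endpoints:

```python
import json

def get_user_api():  # step 1: pretend GET /user returning a JSON body
    return json.dumps({"id": 7, "name": "Ada", "plan": "pro"})

def post_billing_api(body):  # step 4: pretend POST /billing
    return json.loads(body)["customer"]

user = json.loads(get_user_api())  # step 2: read some fields
invoice = json.dumps({"customer": user["id"], "tier": user["plan"]})  # step 3
print(post_billing_api(invoice))  # 7
```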

reply
Esophagus4
1 day ago
[-]
I’ll challenge that assumption.

All of my rock star engineers have CS degrees OR are absolute savants of computer science who taught themselves. They solve problems, not just write CRUD apps.

I don’t want people who only have surface level knowledge of calling JSON APIs. They tend to be a serious drag on team productivity, and high performers don’t want to work with them.

reply
nradov
1 day ago
[-]
If you really believe that then you must have minimal real world experience outside of toy web applications. How much JSON do you think there is in the flight control software for an F-35?
reply
astrobe_
1 day ago
[-]
Academia has a lot to offer with regard to:

  - fundamental principles and concepts, like paradigms, SOLID, coupling, cohesion, etc. - Maybe design patterns too - and when and how to apply them
  - To paraphrase a famous author, how to avoid making  "programming the act of putting bugs in software"
  - And since you can't really avoid it, debugging techniques.

All of these require good logic and judgement - skills that you are better off having when talking with an AI.
reply
thrown-0825
1 day ago
[-]
And they were right.

We no longer hire junior engineers because it just isn't worth the time to train them anymore.

reply
SilasX
1 day ago
[-]
I would also include:

5. Debug why the API's don't behave as documented and figure out a workaround.

reply
skywhopper
1 day ago
[-]
Computer science is not the same thing as software development.
reply
crinkly
1 day ago
[-]
Depends what you do. The sudden large accumulation of layers and corporate SaaS crap in the industry since about 2001 you're right on. But those of us a bit further down the stack or before that, it's pretty useful still.
reply
thrown-0825
1 day ago
[-]
Absolutely, but we are a dying breed, in the same way that nobody really knows how to build nuclear power plants anymore.

Most CS grads will end up in a position that has more in common with being an electrician or plumber than an electrical engineer; the difference is that we can't really automate installing wires and pipes to the same degree that we have automated service integration and making API calls.

reply
crinkly
1 day ago
[-]
Not a dying breed; there is just relatively static demand. Proportionally it looks worse because the rest of the industry has grown massively.

Really the problem is there are too many CS grads. There should be a software engineering degree.

reply
thrown-0825
1 day ago
[-]
Personally I am a fan of bootcamp grads for simple stuff like FE.

They lack academic knowledge but understand the problem domain and tools, and are generally more teachable with lower salary expectations.

I would like to see more "trade schools", and it's one of my pet peeves when devs call themselves engineers despite not being licensed or regulated in any meaningful way.

reply
bsenftner
1 day ago
[-]
Those bootcamps create assembly-line-capable people, and nothing else. If your work is so static and unchanging that you can use such people, great; but those people, doing that kind of limited-scope work, are being used and discarded with little to no economic ladder to better their situation. It's exploitation of others, which is commonplace today, but I'd try to do better. It's still a poor way to treat others.
reply
thrown-0825
1 day ago
[-]
You just described people who go to bootcamps or trade schools as "assembly-line capable, and nothing else".

By your definition, is running a welding company also exploitative?

reply
bsenftner
1 day ago
[-]
I don't see welding companies holding a series of 8-hour interviews where the welder is grilled on all manner of welding edge cases they'll never see on the job, then further evaluated for 'cultural fit', and then expected to work 12-14 hour days - including weekends - when the job listed no such hourly requirements.
reply
thrown-0825
1 day ago
[-]
Welding, machining, carpentry, plumbing, hvac etc are all highly technical trades with a lot more certification and regulatory oversight than the overwhelming majority of software development careers.
reply
pixl97
1 day ago
[-]
Eh, do you actually know anything about welding, especially in safety-critical applications? That's not even counting crazy stuff like welding in the nuclear industry.
reply
rsynnott
1 day ago
[-]
> Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".

This seems to be the fundamental guiding ideology of LLM boosterism; the output doesn't actually _really_ matter, as long as there's lots of it. It's a truly baffling attitude.

reply
kibwen
1 day ago
[-]
> It's a truly baffling attitude.

I wish, but no, it's not baffling. We live in a post-truth society, and this is the sort of fundamental nihilism that naturally results.

reply
CoastalCoder
1 day ago
[-]
I agree that it fits in with a certain trope. But do people really believe that?

What I mean is:

Some people recognize that there are circumstances where the social aspects of agreement seem to be the dominant concern, e.g. when the goal is to rally votes. The cynical view of "good beliefs" in that scenario is group cohesion, rather than correspondence with objective reality.

But most everyone would agree that there are situations where correlation with objective reality is the main concern. E.g., when someone is designing and building the bridge they cross every day.

reply
xpe
1 day ago
[-]
> We live in a post-truth society

Oversimplified to an awful degree. There is a lot of variation between people, cultures, even countries.

reply
Gigachad
1 day ago
[-]
They always market the % of lines generated by AI. But if you are forced to use a tool that constantly inserts generations, that number is always going to be high even if the actual benefit is nil or negative.

If the AI tool generates a 30-line function which doesn't work, and you spend time testing and modifying the 3 lines of broken logic, the vast majority of the code was still AI-generated even if it didn't save you any time.

reply
diggan
1 day ago
[-]
> They always market the % of lines generated by AI

That's crazy, it should really be the opposite. If someone released weights that promised "X% fewer lines generated compared to Y", I'd jump on that in an instant; most LLMs are way too verbose by default. Some are really hard to get to be more concise even with prompting (looking at you, various Google models)

reply
986aignan
1 day ago
[-]
It is possible to take this too far, though - consider the OpenAI IMO proofs[1], for instance, and compare them to Gemini's.[2]

[1] https://github.com/aw31/openai-imo-2025-proofs

[2] https://arxiv.org/pdf/2507.15855 Appendix A

reply
ThrowawayR2
1 day ago
[-]
It's a perfectly understandable attitude: "Get me a fat paycheck with the least effort possible." The longer term problem is that those who see the most benefit in productivity from LLMs are also the ones weak enough to be most easily replaced by LLMs entirely.
reply
tempodox
1 day ago
[-]
And yet this fine example of a used car salesman is being rewarded by everyone and their dog hosting their stuff on GitHub, feeding the Copilot machinery with their work for free, so it can be sold back to them.
reply
thrown-0825
1 day ago
[-]
All of the crypto grifters have shifted to AI.

Fundamentals don't matter anymore, just say whatever you need to say to secure the next round of funding.

reply
bsenftner
1 day ago
[-]
They never mattered, at least not for as long as you've been alive. The "Soviet statistics" discussion at the start of the article was an amazing example of Western capitalist propaganda, because the same nonsense with statistics is also out of control in the West, just not so mind-numbingly obvious. The USA is the king of propaganda, far in advance of all rivals.
reply
exq
1 day ago
[-]
All of the finance grifters have shifted to tech.
reply
thrown-0825
19 hours ago
[-]
nah they are in government now
reply
heresie-dabord
1 day ago
[-]
> the fundamental guiding ideology of LLM boosterism

It's the same as the ideology of FOMO Capitalism:

= The billionaire arseholes are saying it, it must be true

= Stock valuations are in the trillions, there must be enormous value

= The stock market is doing so well, your concerns about fundamental social-democratic principles are unpatriotic

= You need to climb aboard or you will lose even worse than you lost in the crypto-currency hornswoggle

reply
vemv
1 day ago
[-]
> Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".

Spot on, I think this every time I see AI art on my Linkedin feed.

reply
Gigachad
1 day ago
[-]
I’ve become super sensitive to spotting it now. When I see a restaurant using AI food pictures I don’t want to eat there. Why would I want to do business with people who are dishonest enough to lie about the basics?
reply
thrown-0825
1 day ago
[-]
using food pics as an example is hilarious, food photos in advertising have been "faked" for about as long as photography has existed
reply
Gigachad
1 day ago
[-]
Sure, they went through a long process of dressing them up as nice as possible. But they were still just ideal versions of the thing you actually receive. While the AI food looks pretty much nothing like what the restaurant is actually making.

And outside of mega chains like McDonalds, most restaurants used fully real images.

reply
thrown-0825
1 day ago
[-]
This is provably false.

There is a large industry based around faking food, you can watch some pretty interesting videos on the process and you will quickly find that they rarely use anything resembling the actual food you will be eating.

Japan is an extreme example, but there they literally use wax models to advertise their food.

reply
Gigachad
1 day ago
[-]
Those fake food models are still made to look just like the actual meal. I don't know if you've looked at any of these AI food pictures but they look nothing like the end result. And they are also signalling low effort and low initial investment unlike commissioning custom models of ramen bowls.
reply
bmicraft
1 day ago
[-]
I've been to plenty of local joints where the photos clearly a) weren't taken by them, and b) didn't at all match the food. Still delicious food for a good price, but the picture is more a vague idea of what it could look like if you paid 3x as much somewhere else. It's been that way for as long as I can remember.
reply
bevr1337
1 day ago
[-]
Yes and no. Magazines, newspapers, highly profitable restaurants, and professional publications employ food photography. The local greasy spoon did not. Now instead of sloppy burgers and pizza pics, I get weird AI blobs of pepperoni. (Real example.)

Dressing up a pizza so it photographs well is different from an AI-generated pizza. Maybe I can't perfectly articulate that, but I'm confident of it.

reply
Gigachad
1 day ago
[-]
Very much agree. Traditional food photography is kind of just enhanced reality: better lighting, carefully placing and fixing everything in its ideal position, substitutes for things that would melt under studio lights.

While AI food is like some kind of fever dream alternate reality that has no connection to the thing you'll actually receive.

reply
pacifika
1 day ago
[-]
Isn’t this just the digital version of hand-waving / guesswork? People not checking assumptions has long been a leading cause of weak software delivery.
reply
nhinck3
1 day ago
[-]
This isn't the first time GitHub (or its CEO) has produced a completely garbage article about the wonders of AI, and it won't be the last.
reply
pcwelder
1 day ago
[-]
>I found, a required sample size for just one thousand people would be 278

It's interesting to note that for a billion people this number changes to a whopping ... 385. Doesn't change much.

I was curious: with a sample size of 22 (assuming an unbiased sample, yada yada), while estimating the proportion of people satisfying a criterion, the margin of error is about 22%.

While bad, if done properly, it may still be insightful.
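For what it's worth, that figure can be sanity-checked with the standard normal approximation for a proportion. A rough sketch, assuming a simple random sample, worst-case p = 0.5, and ~95% confidence; slightly different rounding or confidence choices in online calculators explain small differences:

```python
import math

def moe_proportion(n, p=0.5, z=1.96):
    """Worst-case margin of error for estimating a proportion
    from a simple random sample of size n at ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=22  -> +/-{moe_proportion(22):.1%}")   # about +/-21%
print(f"n=385 -> +/-{moe_proportion(385):.1%}")  # about +/-5%
```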

reply
ethan_smith
1 day ago
[-]
Sample size requirements are primarily determined by desired confidence level and margin of error rather than population size, which explains why the required sample barely changes between 1,000 and 1 billion people.
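A quick sketch of that point: the usual sample-size formula for a proportion depends on the population size N only through the finite population correction, which matters for small N and vanishes for large N. This assumes 95% confidence, a 5% margin of error, and worst-case p = 0.5, which appear to be the settings of the calculator quoted in the thread:

```python
import math

def required_n(N, moe=0.05, p=0.5, z=1.96):
    """Sample size for estimating a proportion in a population of N,
    with the finite population correction applied."""
    n0 = z**2 * p * (1 - p) / moe**2           # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # shrink for finite N

print(required_n(1_000))          # 278
print(required_n(1_000_000_000))  # 385
```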
reply
HarHarVeryFunny
1 day ago
[-]
> This helps explain the – perhaps unintuitive at first – observation that many of the developers we interviewed were paying for top-tier subscriptions.

It doesn't sound like an unbiased sample.

reply
bsenftner
1 day ago
[-]
This analysis is not complete; it needs to continue with an analysis of how many people, and critically how many business owners, believe the lies and non-truths propagandized by that article and the entire marketing push around LLMs.

That article is not for developers; it's for the business owners, their management, and the investor class. If they believe it, they will try to enforce it.

This is serious, destroy-our-industry-type idiot logic.

reply
card_zero
1 day ago
[-]
The Miyazaki quote gets taken out of context and altered. It was "an insult to life itself", not "an abomination on art itself", and the context was creepy AI animations of zombies. He said that he won't use the technology, but the implied reason was that he saw a crass demo. "Whoever creates this stuff has no idea what pain is." https://en.wikiquote.org/wiki/Hayao_Miyazaki
reply
nailer
17 hours ago
[-]
Original video - yes the AI content looks like silent hill type visions: https://youtu.be/ngZ0K3lWKRc?si=t5uzOZDJZ5DPrRCU
reply
overgard
1 day ago
[-]
One thing that seems to always get lost in the AI hype cycle is actually screening articles by the source. Maybe it's just because in this age of social media we don't have much in the way of journalism. I have zero interest in what Sam Altman, Dario Amodei, Thomas Dohmke, etc. have to say because they obviously want to sell something.
reply
crinkly
1 day ago
[-]
Professional statistician here. Not that I get to do any of that these days, bar read Significance magazine and get angry occasionally.

Looking at the original blog post, it's marketing copy so there's no point in even reading it. The conclusion is in the headline and the methodology is starting with what you want to say and working back to supporting information. If it was in a more academic setting it would be the equivalent of doing a meta-analysis and p-hacking your way to the pre-defined conclusion you wanted.

Applying any kind of rigour to it is pointless but thanks for the effort.

reply
NoahZuniga
1 day ago
[-]
It's a bit annoying that this article on AI is hallucinating itself:

> To add insult to injury, the image seems to have been created with the Studio Ghibli image generator, which Hayao Miyazaki described as an abomination on art itself.

He never said this. This is just false, and it seems like the author didn't even fact check if Hayao Miyazaki ever said this.

reply
meindnoch
1 day ago
[-]
Context: https://youtu.be/ngZ0K3lWKRc

Miyazaki is repulsed by an AI-trained zombie animation which reminded him of a friend with disabilities. So the oft quoted part is about that zombie animation.

When the team tells him that they want to build a machine that can draw pictures like humans do, he doesn't say anything, just stares.

reply
tough
1 day ago
[-]
Right, it was an out-of-context presentation by their own company of some very rough AI-generated content, years ago, before the existence of LLMs.

But yeah, sensationalism and all; people don't do research, so it sticks unless you remember it well.

It also got lost in translation from Japanese to English: the work shown by their engineers depicted some kind of zombie-like figures in a very rough form, thus the "insult to life", as in literally.

reply
bgwalter
1 day ago
[-]
"AI" could certainly replace Dohmke. It excels at writing such meaningless articles.
reply
ma73me
1 day ago
[-]
I'll never judge an article by its HN header again
reply
quantum_state
1 day ago
[-]
if history is of reference, vibe coding would turn out to be the most effective tool to produce technical debt …
reply
tempodox
1 day ago
[-]
If history is any indication, too few will care to make a difference.
reply
jmull
1 day ago
[-]
Technical debt will have a strong negative impact on a project/product over time if left unresolved (if it doesn't, it probably isn't really something we should call technical debt).

That is, technical debt matters whether anyone cares about it or not.

reply
righthand
1 day ago
[-]
As a junior engineer I don’t care about technical debt and instead will push that we should rewrite the app so the new wave of employees can understand it.

Convincing my manager and leadership of this is 100x easier with generated code. I get approval and generate a new stack of tech debt that I try to pass to the next wave of employees.

reply
RugnirViking
1 day ago
[-]
A near-insurmountable amount of problems and technical debt is due to be created in problem teams. Sounds like a great time to get into consulting. It's like knowing that Y2K is coming up ahead of time. Learn your debuggers and your slide decks, folks.
reply
skywhopper
1 day ago
[-]
“ It was reposted with various clickbait headings like GitHub CEO Thomas Dohmke Warns Developers: ‘Either Embrace AI or Get Out of This Career’ ”

Is it clickbait if it’s literally quoting the author? I mean, yes, it was clickbait by Thomas Dohmke, but not by the source that used that headline.

reply
saulpw
1 day ago
[-]
Apparently it's not a quote by the author, but a participant.
reply
sixhobbits
1 day ago
[-]
> The sample size is 22. According to this sample size calculator I found, a required sample size for just one thousand people would be 278

I'm all for criticizing a lack of scientific rigor, but this bit pretty clearly shows that the author knows even less about sample sizes than the GitHub guy, so it's a bit of the pot calling the kettle black. You certainly don't need to sample more than 25% of any population in order to draw statistical information from it.

The bit about running the study multiple times also seems kinda random.

I'm sure this study of 22 people has a lot of room for criticism but this criticism seems more ranty than 'proper analysis' to me.

reply
foma-roje
1 day ago
[-]
> You certainly don't need to sample more than 25% of any population in order to draw statistical information from it.

Certainly? Now, who is ranting?

reply
brabel
1 day ago
[-]
It’s a basic property of statistics. You need an extremely varied population to need a sample of 25%, and almost no human population is that varied in practice; humans are actually quite uniform.
reply
integralid
22 hours ago
[-]
What if the group is 8 people
reply
astrobe_
1 day ago
[-]
> The bit about running the study multiple times also seems kinda random.

Reproducibility? But knowing it comes from the CEO of GitHub, who has a vested interest in the matter because AI is one of the things that will allow GitHub to maintain its position on the market (or increase revenue from their paid plans, once everyone is hooked on vibe coding etc.), anyone would take it with a grain of salt anyway. It's like studies funded by big pharma.

reply
righthand
1 day ago
[-]
This was excellent!
reply
croes
1 day ago
[-]
The statistics part will also be relevant for the rest of Trump's presidency
reply