> Many Computer Science (CS) programs still center around problems that AI can now solve competently.
Yeah. No, they do not. Competent CS programs focus on fundamentals, not your ability to invert a binary tree on a whiteboard. [1]
Replacing linear algebra and discrete mathematics with courses called "Baby's First LLM" and "Prompt Engineering for Hipster Doofuses" is as vapid as proposing that CS should include an entire course on how to use git.
But throughout the game, you often drop back down into the lower level tasks, e.g. to understand problems or change the workflow. So in the end an understanding of the entire "stack" of tasks on different abstraction levels is necessary to make progress.
This always felt like a good analogy to programming, or really scientific knowledge in general.
But that very same cartooniness also made it less interesting to me; the things you're producing are just too arbitrary.
Unless you mean to imply that civilization itself is already dystopian and we should go back to hunting and gathering?
And as the opposite question: are there games that give more of that feeling?
I want to feel like I'm playing the human faction in Starship Troopers or on Pandora in Avatar, but in the more factory building sense, where you supply a war machine or the industrial capacity that will inevitably make the local ecosystems and planet perish.
On the more bright and cheerful side, though, Satisfactory is great, Captain of Industry might be worthy of a look (you're literally helping a settlement of humans survive), maybe Mindustry for something a bit simpler or Factory Town. I'd also mention Urbek City Builder and Timberborn as loosely related, albeit they can feel just more like puzzle games.
Or for a different take, Opus Magnum?
Nowadays, math concepts or papers only make sense to me once I can properly implement them as a query; it's somehow a basic translation step I need.
Knowing how to swap two variables and traverse data structures is fundamental.
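I mean things as basic as this (a throwaway Python sketch, just to be concrete):

```python
# Swap two variables without a temporary.
a, b = 1, 2
a, b = b, a

# Traverse a singly linked list (a hand-rolled node type, purely illustrative).
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

head = Node(1, Node(2, Node(3)))
node = head
while node is not None:
    print(node.value)  # visits 1, 2, 3 in order
    node = node.next
```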
But there are still a lot of very important concepts in CS that people should learn. Concepts like performance engineering, security analysis, reliability, data structures and algorithms. And enough knowledge of how the layers below your program work that you can understand how your program runs and write code which lives in harmony with the system.
This knowledge is way more useful than a lot of people claim. Especially in the era of ChatGPT.
If you’re weak on this stuff, you can easily be a liability to your team. If your whole team is weak on this stuff, you’ll collectively write terrible software.
Most fighter pilots don't fly missions that require superhuman reaction time or enduring 9.5g acceleration either.
Whiteboard coding exercises are just a proxy for certain thinking skills, a kind of je ne sais quoi that successful engineers tend to have.
To put it another way: I can hire based on open source contributions instead of credentials and interview performance. If Google decided tomorrow to start hiring based on open source contributions, then their new criteria would leak on Monday, and on Tuesday the pull request queues of every major project would simultaneously splatter like bugs on windshields.
Nah. Whiteboard interviews test a bunch of traits that are important in a job. They aren't designed to be a baroque hazing ritual.
More generally, we could make a list of desirable / necessary qualities in a good hire based on what they'll spend their time doing. Imagine you're hiring someone to work in a team writing a web app. Their job will involve writing JavaScript & CSS in a large project. So they need to write code, and read and debug code written by their coworkers. They will need to present their work regularly. And attend meetings. The resulting website needs to be fast, easy to use and reliable.
From that, we can brainstorm a list of skills a good applicant should have:
- Programming skills. JS + CSS specifically. Also reading & debugging skills.
- Communication skills. (Meetings, easy to work with, can explain & discuss ideas with coworkers, etc).
- Understanding of performance, UX concepts, software reliability, etc
- Knowledge of how web browsers work
- Capacity to learn & solve unexpected problems
And so on.
Now, an idealised interview process would assess a candidate on each of these qualities. Then rank candidates using some weighted score across all areas based on how important those qualities are. But that would take an insane amount of time. The ideal assessment would assess all of this stuff efficiently. So you want to somehow use a small number of tasks to assess everything on that big list.
Ideally, that's what whiteboard interviews are trying to do. They assess - all at once - problem-solving skills, capacity for learning, communication skills and ideally CS fundamentals. That's pretty good as far as single-task interviews go!
> we have no scientific evidence
There's a mountain of evidence. Almost all of it proprietary, and kept under lock and key by various large companies. The data I've seen shows success at whiteboard interviews is a positive signal in a candidate. Skill at whiteboard interviews is positively correlated with skill in other areas - but it's not a perfect correlation. Really, the problem isn't whiteboard interviews. It's that people think whiteboard interviews give you enough signal. They don't. They don't tell you how good someone is at programming or debugging. A good interview for a software engineer must assess technical skills as well.
Speaking as someone who's interviewed hundreds of candidates, yes. There are some people who will bomb a whiteboard interview but do well at other technical challenges you give them. But they are nowhere near as common as people on HN like to claim. Most people who are bad at whiteboard interviews are also bad at programming, and I wouldn't hire them anyway.
The reality is, most people who make Homebrew get hired. There's plenty of work in our industry for people who have a track record of doing great work. Stop blaming the process.
You can teach fundamentals all day long, but on their first day of work they are going to be asked to adhere to some internal corporate process that is so far removed from their academic experience that they will feel like they should have just taught themselves online.
It’s not. If you’re a computer scientist who’s not coding, you are a bad computer scientist. “Those who cannot do, teach”
It doesn't even make sense in your post, because "programming" isn't "doing computer science". You're not better than a teacher in any sense because you asked ChatGPT to generate some slop.
The problem, arguably, is that we don't have reputable trade schools that would actually teach what the students need. But if that changes, I think some CS departments will be in for a rude awakening.
The fact that people keep buying the wrong product (Computer Science degrees) and universities keep selling it, doesn't mean that there's something wrong with Computer Science.
No professor or anyone else explained this; the other firmware programmers and I were on our own and had to figure out how to collaborate. That's what people mean when they say it's the social part of college that really matters, IMO.
These are just a couple of examples of things that I see juniors really struggle with, day-one basics of the profession that are consistently missed by interview processes that focus on academic knowledge.
People won't teach you how to solve these problems online, but you will learn how to solve them while teaching yourself.
That's called vocational training and isn't usually taught as part of academic curricula.
If you want non-academic graduates you've got your pick.
Maybe having a technical addendum to academic curricula that makes student work at the end of the studies a criterion for graduation might help. That's how it is done for doctors, lawyers and accountants, after all. The difference is that they graduate but can't practice until they have completed training.
You're on the right path: learning how to use a specific tool produced by a specific $Corp is not vocational training, it's end-user training.
Learning pipetting and titration is very different from learning $MegaCorp's software tool that will be replaced with something else in a few years.
Source: Me! I did undergrad in multiple different subjects, including chemistry, physics and biology.
Like I said, if you are looking for end-users, you don't have to search very hard. Universities should not be focused on training more end-users. It's fine if there's a half-credit or no-credit course somewhere on "How to use $PRODUCT".
A better option would be, like I said, industrial practice after graduation, before getting a license to practice, but that's way too professional for a field that seriously and unironically came up with SpaghettiCodeAsAFramework.
There aren’t even that many of them: git, terminals and bash scripting, IDEs, and maybe a database. Vocational training would be stuff like managing VMs/cloud infrastructure, devops, testing, and so on that would be taught as dedicated classes.
The important thing is that these aren't skills with a dedicated class, but skills a student should pick up and master across half a dozen classes, with CS departments coordinating their choice of tooling.
SSH and even just managing your dev env in a sane manner are skills that I have to literally hand hold people through on a regular basis and would fully expect people to have coming out of a 4 year degree.
Git and basic SQL are next on the list.
All of that is, or should be, vocational, because anyone can learn it given some time. Universities are about the hard stuff that is difficult to get right at even a mediocre level.
If your company requires it, include it in your regular training program. Don't dilute the material because you don't want to be bothered. If you think people spend too much time on hard stuff, hire BSc instead of MSc.
1. the specifics of how to use git (*) to carry out a task
2. the conceptual underpinnings of the task, which would exist whether you use git or perforce or bitwarden or any future RCS.
Being overly focused on either #1 or #2 is a mistake. It's not good understanding the task if you don't know how to use the tools you have right now to carry it out (or what available tools are appropriate). It's not good knowing how to run the tools if you don't understand what you're actually doing. The two go hand-in-hand.
(*) not exactly the product of a mega-corp
Point is that what is deemed important in academic circles is rarely important in practice, and when it is, I find it easier to explain a theory or algorithm than to teach a developer how to use an industry-standard tool set.
We should be training devs like welders and plumbers instead of like mathematicians because, practically speaking, the vast majority of them will never use that knowledge and will develop an entirely new skill set the day they graduate.
Except if you instead just want smart people, yea, they tend to aggregate at unis. There, you can hire an athlete to paint your wall, if that's what you need.
Also, btw, I did eventually learn how to use Docker. I did actually vaguely know how it worked for a while, but I didn't want a Linux VM anywhere near my computer; eventually I capitulated, provided I didn't have a Linux VM running all the time.
This is like taking auto mechanic classes and refusing to touch a car because of the grease.
Your comment gives spooky vibes. Like, I'd expect you to avoid pattern matching, because "nested ifs work fine".
Besides, at the rate of change we see in this industry, focusing on producing users instead of developers will make half the stuff outdated by the time the student graduates.
I mean, okay, let's teach Jira. Then the industry switches to Azure DevOps.
That's the general problem with vocational training: it ages much faster than academic stuff.
I agree that vocational knowledge decays faster, which is why I would prefer stricter training and certification in those areas vs something like building a compiler from scratch in your final year of undergrad based on a textbook written by the professor 10 years ago.
I had to take calculus and while I think it’s good at teaching problem solving, that’s probably the best thing I can say about it. Statistics, which was not required, would also check that box and is far more applicable on a regular basis.
Yes calculus is involved in machine learning research that some PhDs will do, but heck, so is statistics.
I've personally used it in my career for machine learning, non-ML image processing, robot control, other kinds of control, animations, movement in games, statistics, physics, financial modelling, and more.
Some of our financial modelling stuff was CPU bound and took seconds because someone couldn’t be bothered or didn’t know how to work out an integral.
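To make that concrete with a toy example (the numbers and the deliberately naive loop are made up, not our actual model): continuously discounting a cash flow means evaluating the integral of e^(-rt) from 0 to T, which has the closed form (1 - e^(-rT)) / r. The brute-force version grinds through millions of steps for an answer the formula gives instantly:

```python
import math
import time

r, T = 0.05, 30.0        # hypothetical discount rate and horizon (years)
steps = 10_000_000       # the brute-force "couldn't be bothered" route

# Naive midpoint Riemann sum of the discount integral.
start = time.perf_counter()
dt = T / steps
numeric = sum(math.exp(-r * (i + 0.5) * dt) for i in range(steps)) * dt
elapsed = time.perf_counter() - start

# Closed form from actually working out the integral: (1 - e^(-rT)) / r.
closed = (1 - math.exp(-r * T)) / r

print(f"numeric ~ {numeric:.6f}  ({elapsed:.1f}s of CPU)")
print(f"closed  = {closed:.6f}  (instant)")
```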
80% of software development boils down to:
1. Get JSON(s) from API(s)
2. Read some fields from each JSON
3. Create a new JSON
4. Send it to other API(s)
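A minimal sketch of that whole loop (the endpoints, field names, and use of `requests` are all placeholders, not any particular system):

```python
import requests

# 1. Get JSON(s) from API(s)
orders = requests.get("https://example.com/api/orders", timeout=10).json()

# 2. Read some fields from each JSON
# 3. Create a new JSON
invoices = [
    {"order_id": o["id"], "amount": o["total"], "currency": o.get("currency", "USD")}
    for o in orders
]

# 4. Send it to other API(s)
for invoice in invoices:
    resp = requests.post("https://example.com/api/invoices", json=invoice, timeout=10)
    resp.raise_for_status()
```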
Eventually people stopped pretending that you need a CS degree for this, and it spawned the coding bootcamp phenomenon. Alas it was short-lived, because ZIRP was killed, and as of late, we realized we don't even need humans for this kind of work!
All of my rock star engineers have CS degrees OR are absolute savants of computer science who taught themselves. They solve problems, not just write CRUD apps.
I don’t want people who only have surface level knowledge of calling JSON APIs. They tend to be a serious drag on team productivity, and high performers don’t want to work with them.
- Fundamental principles and concepts, like paradigms, SOLID, coupling, cohesion, etc.
- Maybe design patterns too, and when and how to apply them
- To paraphrase a famous author, how to avoid making "programming the act of putting bugs in software"
- And since you can't really avoid it, debugging techniques.
All of these require good logic and judgement - skills that you are better off having when talking with an AI.
We no longer hire junior engineers because it just wasn't worth the time to train them anymore.
5. Debug why the APIs don't behave as documented and figure out a workaround.
Most CS grads will end up in a position that has more in common with being an electrician or plumber than an electrical engineer; the difference is that we can't really automate installing wires and pipes to the same degree we have automated service integration and making API calls.
Really the problem is there are too many CS grads. There should be a software engineering degree.
They lack academic knowledge but understand the problem domain and tools, and are generally more teachable with lower salary expectations.
I would like to see more "trade schools", and it's one of my pet peeves when devs call themselves engineers despite not being licensed or regulated in any meaningful way.
By your definition running a welding company is also exploitive?
This seems to be the fundamental guiding ideology of LLM boosterism; the output doesn't actually _really_ matter, as long as there's lots of it. It's a truly baffling attitude.
I wish, but no, it's not baffling. We live in a post-truth society, and this is the sort of fundamental nihilism that naturally results.
What I mean is:
Some people recognize that there are circumstances where the social aspects of agreement seem to be the dominant concern, e.g. when the goal is to rally votes. The cynical view of "good beliefs" in that scenario is group cohesion, rather than correspondence with objective reality.
But most everyone would agree that there are situations where correlation with objective reality is the main concern. E.g., when someone is designing and building the bridge they cross every day.
Oversimplified to an awful degree. There is a lot of variation between people, cultures, even countries.
If the AI tool generates a 30-line function which doesn't work, and you spend time testing and modifying the 3 lines of broken logic, the vast majority of the code was AI-generated even if it didn't save you any time.
That's crazy; it should really be the opposite. If someone released weights that promised "X% fewer lines generated compared to Y", I'd jump on that in an instant. Most LLMs are way too verbose by default, and some are really hard to prompt into being more concise (looking at you, various Google models).
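For what it's worth, the blunt workaround is an explicit instruction plus a hard token cap, and even that only partially helps (a rough sketch against the OpenAI Python client; the model name and prompt wording are just placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model
    messages=[
        {"role": "system",
         "content": "Answer with code only. No explanations, no alternatives. "
                    "Prefer the shortest correct solution."},
        {"role": "user", "content": "Write a function that reverses a string."},
    ],
    max_tokens=200,   # hard ceiling on verbosity
    temperature=0,
)

print(response.choices[0].message.content)
```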
[1] https://github.com/aw31/openai-imo-2025-proofs
[2] https://arxiv.org/pdf/2507.15855 Appendix A
Fundamentals don't matter anymore, just say whatever you need to say to secure the next round of funding.
It's the same as the ideology of FOMO Capitalism:
= The billionaire arseholes are saying it, it must be true
= Stock valuations are in the trillions, there must be enormous value
= The stock market is doing so well, your concerns about fundamental social-democratic principles are unpatriotic
= You need to climb aboard or you will lose even worse than you lost in the crypto-currency hornswoggle
Spot on, I think this every time I see AI art on my Linkedin feed.
And outside of mega chains like McDonald's, most restaurants used fully real images.
There is a large industry based around faking food; you can watch some pretty interesting videos on the process, and you will quickly find that they rarely use anything resembling the actual food you will be eating.
Japan is an extreme example, but there they literally use wax models to advertise their food.
Dressing up a pizza so it photographs well is different from an AI-generated pizza. Maybe I cannot perfectly articulate that, but I'm confident.
AI food, on the other hand, is like some kind of fever-dream alternate reality that has no connection to the thing you'll actually receive.
It's interesting to note that for a billion people this number changes to a whopping ... 385. Doesn't change much.
I was curious: with a sample size of 22 (assuming an unbiased sample, yada yada), when estimating the proportion of people satisfying a criterion, the margin of error is about 22%.
While bad, if done properly, it may still be insightful.
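For anyone who wants to check the arithmetic, here's a sketch using the standard normal-approximation formulas (95% confidence, worst-case p = 0.5):

```python
import math

Z = 1.96  # 95% confidence
p = 0.5   # worst-case proportion

def margin_of_error(n: int) -> float:
    """Margin of error for a proportion estimated from n unbiased samples."""
    return Z * math.sqrt(p * (1 - p) / n)

def required_sample_size(moe: float) -> int:
    """Samples needed for a given margin of error (population size barely matters once it's large)."""
    return math.ceil((Z ** 2) * p * (1 - p) / moe ** 2)

print(f"n = 22  -> margin of error ~ {margin_of_error(22):.0%}")  # ~21%, in the ballpark of the 22% above
print(f"5% MoE  -> n = {required_sample_size(0.05)}")             # 385, even for a billion people
```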
It doesn't sound like an unbiased sample.
That article is not for developers, it's for the business owner, their management, and the investor class. If they believe it, they will try to enforce it.
This is serious, destroy-our-industry type of idiot logic.
Looking at the original blog post, it's marketing copy so there's no point in even reading it. The conclusion is in the headline and the methodology is starting with what you want to say and working back to supporting information. If it was in a more academic setting it would be the equivalent of doing a meta-analysis and p-hacking your way to the pre-defined conclusion you wanted.
Applying any kind of rigour to it is pointless but thanks for the effort.
> To add insult to injury, the image seems to have been created with the Studio Ghibli image generator, which Hayao Miyazaki described as an abomination on art itself.
He never said this. This is just false, and it seems like the author didn't even fact check if Hayao Miyazaki ever said this.
Miyazaki is repulsed by an AI-trained zombie animation which reminded him of a friend with disabilities. So the oft-quoted part is about that zombie animation.
When the team tells him that they want to build a machine that can draw pictures like humans do, he doesn't say anything, just stares.
But yeah, sensationalism and all, and people don't do research, so unless you remember the original clip well, the misquote sticks.
It was also partly lost in translation from Japanese to English: the work their engineers sampled depicted some kind of zombie-like figures in a very rough form, hence the "insult to life" being meant quite literally.
That is, technical debt matters whether anyone cares about it or not.
Convincing my manager and leadership of this is 100x easier with generated code. I get approval and generate a new stack of tech debt that I try to pass to the next wave of employees.
Is it clickbait if it’s literally quoting the author? I mean, yes, it was clickbait by Thomas Dohmke, but not by the source that used that headline.
I'm all for criticizing a lack of scientific rigor, but this bit pretty clearly shows that the author knows even less about sample sizes than the GitHub guy, so it seems a bit pot calling the kettle black. You certainly don't need to sample more than 25% of any population in order to draw statistical information from it.
The bit about running the study multiple times also seems kinda random.
I'm sure this study of 22 people has a lot of room for criticism but this criticism seems more ranty than 'proper analysis' to me.
Certainly? Now, who is ranting?
Reproducibility? But knowing it comes from the CEO of GitHub, who has vested interests in the matter because AI is one of the things that will allow GitHub to maintain its position on the market (or increase revenue of their paid plans, once everyone is hooked on vibe coding etc.), anyone would take it with a grain of salt anyway. It's like studies funded by big pharma.