The next two years of software engineering (addyosmani.com)
118 points | 10 hours ago | 20 comments
babblingfish
7 hours ago
[-]
My experience hasn't been that LLMs automate coding, just that they speed it up. It's like I know what I want the solution to be and I'll describe it to the LLM, usually one specific code block at a time, and then build it up block by block. When I read Hacker News, people talk like it's doing much more than that. It doesn't feel like an automation tool to me at all. It just helps me do what I was going to do anyway, but without having to look up library function calls and language-specific syntax.
reply
Aurornis
7 hours ago
[-]
> My experience hasn't been that LLMs automate coding, just that they speed it up.

This is how basically everyone I know actually uses LLMs.

The whole story about vibecoding and LLMs replacing engineers has become a huge distraction from the really useful discussions to be had. It’s almost impossible to discuss LLMs on HN because everyone is busy attacking the vibecoding strawman all the time.

reply
kylecazar
6 hours ago
[-]
Half strawman -- a mudman, perhaps. Because we're seeing proper experts with credentials jump on the 'shit, AI can do all of this for me' realization blog post train.
reply
eaurouge
6 hours ago
[-]
So another strawman?
reply
noufalibrahim
29 minutes ago
[-]
I'm somewhere in between myself. Before LLMs, I used to block a few sites that distracted me by adding entries to the /etc/hosts file on my work machine, mapping them to 127.0.0.1. I also made the file immutable so that it would take a few steps for me to unblock the sites.

The next step was for me to write a cron job that would reapply the chattr +i and rewrite the file once every 5 minutes. Sort of an enforcer. I used Claude (web) to write this and cut/pasted it, just because I didn't want to bother with bash syntax that I've learned and forgotten several times.
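
A minimal sketch of that enforcer idea (assuming Python; the block list, marker comment, and schedule are made up for illustration, and it needs root):

    #!/usr/bin/env python3
    # Sketch of the "enforcer": drop the immutable flag, rewrite the
    # managed block entries in /etc/hosts, then set the flag again.
    # Schedule from root's crontab: */5 * * * * /usr/local/bin/enforcer.py
    import subprocess

    HOSTS = "/etc/hosts"
    BLOCKED = ["news.ycombinator.com", "reddit.com"]  # hypothetical list
    MARKER = "# enforcer-managed"

    subprocess.run(["chattr", "-i", HOSTS], check=True)
    with open(HOSTS) as f:
        kept = [line for line in f if MARKER not in line]
    with open(HOSTS, "w") as f:
        f.writelines(kept)
        f.writelines(f"127.0.0.1 {host} {MARKER}\n" for host in BLOCKED)
    subprocess.run(["chattr", "+i", HOSTS], check=True)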

I then wanted something stronger and looked at publicly available things like pluckeye but they didn't really work the way I wanted. So I tried to write a quick version using Claude (web) and started running it (October 2025). It solved my problem for me.

I wanted a program to use aider on and I started with this. Every time I needed a feature (e.g. temporary unblocks, preventing tampering and uninstalling, blocking in the browser, violation tracking, etc.), I wrote out what I wanted and had the agent do it. Over the months, it grew to around 4k lines (single file).

Around December, I moved from aider to Claude Code and continued doing this. The big task I gave it was to refactor the code into smaller files so that I could manage context better. It did this well and added tests too (late December 2025).

I added a helper script to update URLs to block from various sources. Vibe-coded too. Worked fine.

Then I found it hogging memory because of some crude mistakes I had vibe-coded early on, and fixed that. Cost me around $2 to do so (Jan 2026).

Then I added support to lock the screen when I crossed a violation threshold. This required some Xlib code to be written. I'm sure I could have written it myself, but it wasn't really worth it: I knew what to do, and doing it by hand wouldn't have taught me anything except the innards of a few libraries. I added that.
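
A hypothetical sketch of that threshold-then-lock logic, sidestepping raw Xlib by shelling out to systemd's loginctl (the function name and threshold are made up):

    import subprocess

    THRESHOLD = 3  # made-up number of violations tolerated before locking
    violations = 0

    def record_violation():
        # Count a violation; once the threshold is crossed, lock the
        # desktop session and reset the counter.
        global violations
        violations += 1
        if violations >= THRESHOLD:
            subprocess.run(["loginctl", "lock-session"], check=True)
            violations = 0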

So, in short, this is something that's 98% AI-coded, but it genuinely solves a problem for me and has helped me change my behaviour in front of a computer. My research didn't turn up any company that offers this as a service for Linux. I know what to do but don't have the time to write and debug it. With AI, my problem was solved and I have something which is quite valuable to me.

So, while I agree with you that it isn't an "automation tool", the speed and depth it brings to the environment have opened up possibilities that didn't previously exist. That's the real value, and the window through which I'm exploring the whole thing.

reply
trueismywork
1 hour ago
[-]
You can think of LLMs as a higher-level language on top of whatever programming language you are using, but an informal one with ambiguous grammar.
reply
noufalibrahim
12 minutes ago
[-]
I don't think that works. The fact that it can produce different output for the same input, its use of tools, etc., doesn't really fit into the analogy or mental model.

What has worked for me is treating it like an enthusiastic intern with his foot always on the accelerator pedal. I need to steer and manage the brakes, otherwise it'll code itself off a cliff and take my software with it. The most workable model is a pair programmer. For trivial changes and repeatedly "trying stuff out", you don't need to babysit. For larger pieces, it's good to make each change small and review what it's trying to do.

reply
jvans
6 hours ago
[-]
I notice a huge difference between working on large systems with lots of microservices and building small apps or tools for myself. The large-system work is what you describe, but for small apps and tools I resonate with the "automates coding" crowd.

I've built a few things end to end where I can verify the tool or app does what I want and I haven't seen a single line of the code the LLM wrote. It was a creepy feeling the first time it happened but it's not a workflow I can really use in a lot of my day to day work.

reply
antonymoose
7 hours ago
[-]
It's a better Google for me. Instead of searching AWS or StackOverflow, it hallucinates a good-enough output that I can refactor into what I need.
reply
bryanrasmussen
1 hour ago
[-]
The reason it is better: with search, you have to narrow down to one specific part of what you are trying to do at a time. For example, if you need a unique-id-generating function, you first search for that; then, if you need to make sure whatever gets output is a responsive 3-column layout, you search for that; and then you write code to glue the pieces together into what you need. With AI, you can ask for all of this together, get something about as good as the searched-for results would have been, and then do the same glue code and fixes you would normally have done.

For a bit of functionality that might have taken four searches, it trims the time requirement down by roughly three of those searches.

It does, however, remove a side benefit of having done the search: seeing the various results, you might find that a secondary result is better. You no longer get that benefit. Tradeoffs.

reply
petesergeant
6 hours ago
[-]
I'm doing both. For production code that I care about, I'm reading every line the LLM writes, correcting it a lot, and chatting with an observer LLM that's checking the work the first LLM and I are writing. It's speeding stuff up, and it also reduces the friction of starting on things. Definitely a time saver.

Then I have some non-trivial side projects where I don't really care about the code quality, and I'm just letting it run. If I dare look at the code, there's a bunch of repetition. It rarely gets stuff right the first time, but that's fine, because it'll correct things when I tell it something doesn't work right. It's probably full of security holes and the code is nasty, but that doesn't matter for my use cases. I have produced pieces of software here that are actively making my life better, and it's been mostly unsupervised.

reply
burnermore
44 minutes ago
[-]
Something is very odd about the tone of this article. Is it mostly AI-written? There are a lot of references and info, but I feel quite disconnected from it.

For the record, I was genuinely trying to read it properly, but it became unbearable by mid-article.

reply
nerdsniper
42 minutes ago
[-]
Yes, lots of AI style/writing in this article. I wouldn't necessarily discredit an article just based on stylization if the content was worth engaging with ... but like you mentioned, when the AI is given too much creative control it goes off the rails by the middle and turns into what the kids call "AI slop".

It resembles an article, it has the right ingredients (words), but they aren't combined and cooked into any kind of recognizable food.

reply
burnermore
21 minutes ago
[-]
Thanks a lot for taking the time to confirm. I'm not hating on AI slop or anything, but I genuinely feel that if he/she/they had invested time in writing it, people would consume and enjoy it more.

It's hard to put my finger on it, but it lacks soul, the "it" factor, or whatever you want to call it. It feels empty in a way.

I mean, this is not the first AI-assisted article I'm reading, but usually it's to a negligible level. Maybe it's just me. :)

reply
ch4s3
6 hours ago
[-]
> junior developer employment drops by about 9-10% within six quarters, while senior employment barely budges. Big tech hired 50% fewer fresh graduates over the past three years.

This study showing a 9-10% drop is odd [1], and I'm not sure about their identification criteria.

> We identify GenAI adoption by detecting job postings that explicitly seek workers to implement or integrate GenAI technologies into firm workflows.

Based on that MIT study it seems like 90+% of these projects fail. So we could easily be seeing an effect where firms posting these GenAI roles are burning money on the projects in a way that displaces investment in headcount.

The point about "BigTech" hiring 50% fewer grads is almost orthogonal. All of these companies are shifting hiring towards things where new grads are unlikely to add value, building data centers and frontier work.

Moreover, the TCJA of 2017 caused software development costs to stop counting as immediately deductible R&D write-offs (I'm oversimplifying) starting in 2022. This surely has more of an effect than whatever the "GenAI integrator role" postings correlate with.

[1] https://download.ssrn.com/2025/11/6/5425555.pdf

reply
wefzyn
5 hours ago
[-]
AI became very popular suddenly; it wasn't in anyone's budget. I believe the cost savings from hiring freezes and layoffs are paying for the AI projects and infrastructure.
reply
ch4s3
5 hours ago
[-]
Right, so you shift budget away from other things. The "study" looked at AI-integration job listings; you have to budget for those.
reply
garbawarb
6 hours ago
[-]
Hiring was booming until about 2020 though.
reply
ch4s3
5 hours ago
[-]
The TCJA change (of 2017) went into effect in 2022, I should have been more clear.
reply
garbawarb
5 hours ago
[-]
I didn't know that but that makes perfect sense. A lot of layoffs and outsourcing coincided with that. Are there any signs it'll be reintroduced?
reply
ch4s3
5 hours ago
[-]
It was, late last year.
reply
hncoder12345
2 hours ago
[-]
Sometimes I wonder if I made the wrong choice with software development. Even after getting to a senior role, according to this article, you're still expected to get more education and work on side projects outside of work. Am I supposed to want to code all the time? When can I pursue hobbies, a social life, etc.?
reply
johnfn
2 hours ago
[-]
To put it very directly: if you are OK with being good but not exceptional at your job, this is totally fine. If you want to be exceptional, you will probably need to put in the extra work. Not everyone is OK with this tradeoff, and it's totally fine to "just" be good and care more about having outside hobbies, a social life, etc.
reply
jedberg
2 hours ago
[-]
It's funny you should ask this. When I started out, 30 years ago, here were the answers you'd get from most people:

> Am I supposed to want to code all the time?

Yes.

> When can I pursue hobbies,

Your hobby should be coding fun apps for yourself

> a social life, etc.

Your social life should be hanging out with other engineers talking about engineering things.

And the most successful people I know basically did exactly that.

I'm not saying y'all should be doing that now, I'm just saying, that is in fact how it used to be.

reply
osigurdson
5 hours ago
[-]
>> The skillset is shifting from implementing algorithms to knowing how to ask the AI the right questions and verify its output.

The question is, how much faster is verification alone versus writing the code by hand? You gain a lot of understanding when you write the code yourself, and understanding is a prerequisite for verification. The idea seems to be that a quick review is all that should be needed: "LGTM". That's fine as long as you understand the tradeoffs you are making.

With today's AI you either trade speed for correctness or you have to accept a more modest (and highly project specific) productivity boost.

reply
tigrezno
17 minutes ago
[-]
The next two years of software engineering will be the last two years of software engineering (probably).
reply
kubb
2 minutes ago
[-]
Please don’t get my hopes up. Adaptable people like me will outcompete hard in the post-engineering world. Alas, I don’t believe it’s coming. The tech just doesn’t seem to have what it takes to do the job.
reply
stack_framer
3 hours ago
[-]
Funny that he mentions people not pivoting away from COBOL. My neighbors work for a bank, programming in COBOL every day. When I moved in and met them 14 years ago, I wondered how much longer they would be able to keep that up.

They're still doing it.

reply
bossyTeacher
1 hour ago
[-]
The market can stay irrational longer than you can stay solvent
reply
bryanrasmussen
52 minutes ago
[-]
it sounds like these people are staying solvent as long as the market stays irrational.
reply
austin-cheney
9 hours ago
[-]
I have been telling people that, titles aside, senior developers are the people not afraid to write original code. I don't see LLMs changing this. I only envision people wishing LLMs would change this.
reply
HarHarVeryFunny
9 hours ago
[-]
I disagree.

1) Senior developers are more likely to know how to approach a variety of tasks, including complex ones, in ways that work, and are more likely to (maybe almost subconsciously) stick to these proven design patterns rather than reinvent the wheel in some novel way. Even if the task itself is somewhat novel, they will break it down in familiar ways into familiar subtasks/patterns. For sure if a task does require some thinking outside the box, or a novel approach, then the senior developer might have better intuition on what to consider.

The major caveat to this is that I'm an old school developer, who started professionally in the early 80's, a time when you basically had to invent everything from scratch, so certainly there is no mental block to having to do so, and I'm aware there is at least a generation of developers that grew up with stack overflow and have much more of a mindset of building stuff using cut and paste, and less having to sit down and write much complex/novel code themselves.

2) I think the real distinction of senior vs junior programmers, that will carry over into the AI era, is that senior developers have had enough experience, at increasing levels of complexity, that they know how to architect and work on large complex projects where a more junior developer will flounder. In the AI coding world, at least for time being, until something closer to AGI is achieved (could be 10-20 years away), you still need to be able to plan and architect the project if you want to achieve a result where the outcome isn't just some random "I let the AI choose everything" experiment.

reply
austin-cheney
8 hours ago
[-]
I completely agree with your second point. For your first point my experience tells me the people least afraid to write original code are the people least oppositional to reinventing wheels.

The distinguishing behavior is not the quantity of effort involved but the total cost, after accounting for dependency management, maintenance time, and execution time. The people who reinvent wheels do so because they want to learn, and because they want the same kind of work to take less effort in the future.

reply
BoiledCabbage
7 hours ago
[-]
> in the early 80's, a time when you basically had to invent everything from scratch, so certainly there is no mental block to having to do so, and I'm aware there is at least a generation of developers that grew up with stack overflow and have much more of a mindset of building stuff using cut and paste, and less having to sit down and write much complex/novel code themselves.

I think this is really underappreciated and was big in driving how a lot of people felt about LLMs. I found it even more notable on a site named Hacker News. There is an older generation for whom computing was new. 80s through 90s probably being the prime of that era (for people still in the industry). There constantly was a new platform, language, technology, concept to learn. And nobody knew any best practices, nobody knew how anything "should work". Nobody knew what anything was capable of. It was all trying things, figuring them out. It was way more trailblazing / exploring new territory. The birth of the internet being one of the last examples of this from that era.

The past 10-15 years of software development have been the opposite. Just about everything was evolutionary, rarely revolutionary: optimizing things for scale, improving libraries, or porting successful ideas from one domain to another. A lot of shifting around deck chairs on things that were fundamentally the same. Just about every new "advance" in front-end technology was this. Something hailed as groundbreaking really took little exploration, mostly solution-space optimization. There was almost always a clear path. Someone always had an answer on Stack Overflow; you never were "on your own". A generation+ grew up in that environment and it felt normal to them.

LLMs came about and completely broke that. And people who remembered when tech was new, had potential, and nobody knew how to use it loved that. Here is a new alien technology and I get to figure out what makes it tick, how it works, how to use it. And on the flip side, people who were used to there being a happy path, or a manual to tell you when you were doing it wrong, got really frustrated at there being no direction, feeling perpetually lost and it not working the way they wanted.

I found it especially ironic, being on Hacker News, how few people seemed to have a hacker mindset when it came to LLMs. So much was "I tried something, it didn't work, so I gave up", or "I just kept telling it to work and it didn't, so I gave up". Explore. Pretend you're in a sci-fi movie. Does it work better on Wednesdays? Does it work better if you stand on your head? Does it work differently if you speak pig latin? Think sideways. What behavior can you find about it that makes you go "hmm, that's interesting..."?

Now I think there has been a shift very recently of people getting more comfortable with the tech, but I was still surprised by how little of a hacker mindset I saw on Hacker News when it came to LLMs.

LLMs have reset the playing field from well manicured lawn, to an unexplored wilderness. Figure out the new territory.

reply
Terr_
7 hours ago
[-]
To me, the "hacker" distinction is not about novelty, but understanding.

Bashing kludgy things together until they work was always part of the job, but that wasn't the motivational payoff. Even if the result was crappy, knowing why it was crappy and how it could've been better was key.

LLMs promise an unremitting drudgery of the "mess around until it works" part, facing problems that don't really have a cause (except in a stochastic sense) and which can't be reliably fixed and prevented going forward.

The social/managerial stuff that may emerge around "good enough" and velocity is a whole 'nother layer.

reply
layer8
5 hours ago
[-]
No, the negative feelings about LLMs are not because they are new territory, it’s because they lack the predictability and determinism that draw many people to computers. Case in point, you can’t really cleverly “hack” LLMs. It’s more a roll of the dice that you try to affect using hit-or-miss incantations.
reply
bossyTeacher
1 hour ago
[-]
>the negative feelings about LLMs are not because they are new territory, it’s because they lack the predictability and determinism that draw many people to computers

Louder for those turned deaf by LLM hype. Vibe coders want to turn a field of applied math into dice casting.

reply
bossyTeacher
1 hour ago
[-]
>I found it especially ironic being on hacker news how few people seemed to have a hacker mindset when it came to LLMs

You keep using the word "LLMs" as if Opus 4.x came out in 2022. The first iterations of transformers were awful. GPT-2 was more of a toy and GPT-3 was an eyebrow-raising chatbot. It has taken years of innovation to reach the point of usable output without constant hallucinations. So don't fault devs for the flaws of early LLMs.

reply
hooverd
3 hours ago
[-]
an unexplored wilderness that you pour casino chips into (unless you're doing local model stuff yea yea)
reply
CSSer
9 hours ago
[-]
I almost think what a lot of people are coming to grips with is how much code is unoriginal. The ones who've adjusted the fastest were humble to begin with. I don't want to claim the title, but I can certainly claim the imposter syndrome! If anything, LLMs validated something I always suspected: the amount of truly unique, success-critical code in a given project is often very small. More often than not, it isn't grouped together either; most of the time it's tailored to a given piece of functionality.

For example, a perfectly accurate Haversine distance is slower than an optimized one with tradeoffs. LLMs have not yet become adept at identifying the need for those tradeoffs in context, reliably or consistently, so you end up with generic code that works, but not great. Can the LLM adjust if you explicitly instruct it to? Sure, sometimes! Sometimes it gets caught in a thought loop too. Other times you have to roll up your sleeves and do the work like you said, which often still involves traditional research or thinking.
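
An illustrative sketch of that Haversine tradeoff (both functions are assumptions for the example, not code from the thread): the exact formula next to a cheaper flat-patch approximation that trades accuracy for speed.

    import math

    R = 6371.0  # mean Earth radius in km

    def haversine_km(lat1, lon1, lat2, lon2):
        # Exact great-circle distance on a spherical Earth.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(a))

    def equirectangular_km(lat1, lon1, lat2, lon2):
        # Cheaper approximation: project the patch onto a plane. Fine at
        # short range, increasingly wrong near the poles and over long
        # distances -- exactly the kind of context-dependent tradeoff
        # described above.
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return R * math.hypot(x, y)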
reply
mellosouls
10 hours ago
[-]
On the junior developer question:

A humble way for devs to look at this, is that in the new LLM era we are all juniors now.

A new entrant with a good attitude, curiosity and interest in learning the traditional "meta" of coding (version control, specs, testing etc) and a cutting-edge, first-rate grasp of using LLMs to assist their craft (as recommended in the article) will likely be more useful in a couple of years than a "senior" dragging their heels or dismissing LLMs as hype.

We aren't in coding Kansas anymore; junior and senior will not be so easily mapped to legacy development roles.

reply
zqna
54 minutes ago
[-]
My question: those people who were building crappy, brittle software, full of bugs and other suboptimal behavior, which were the main reasons the evolution of that software slowed down; will they now begin writing better software because of AI? Answering yes implies that the main cause of those problems was that those developers didn't have enough time to analyze the problems or to build protective harnesses. I would strongly argue that was not the case; the main cause is intellectual and personal in nature: an inability to build abstractions, to follow up on root causes (and thus acquire the necessary knowledge), or to avoid being distracted by every new toy. In 2-5 years I expect the industry to go into panic mode, as there will be a shortage of people who can maintain the drivel now being created en masse. The future is bright for those with the brains; they just need to wait this out.
reply
mawadev
10 minutes ago
[-]
I mean it's pretty simple: management will take bad quality (because they don't understand the field) over having and paying more employees any day. Software engineer positions will shrink and be unrecognizable: one person expected to be doing the work of multiple departments to stay employed. People may leave the field or won't bother learning it. When the critical mass is reached, AI will be paywalled and rug pulled. Then the field evens itself out again over a long, expensive period of time for every company that fell for it.
reply
Eong
6 hours ago
[-]
Love the article. I had a struggle with my new identity and thus had to write https://edtw.in/high-agency-engineering/ for myself, but I also came to the realisation that the industry is shifting, especially for junior engineers.

Curious about how the Specialist vs Generalist theme plays out, who is going to feel it more *first* when AI gets better over time?

reply
PraddyChippzz
8 hours ago
[-]
The points mentioned in the article, regarding the things to focus on, are spot on.
reply
tommica
2 hours ago
[-]
One thing that fucks with juniors is the expectation of paying for subscriptions to AI models. If you need to know how the AI tools work, you have to learn them with your own money.

Not everyone can afford it, and so we are at the point of turning a field that was so proud of requiring just a computer and internet access to teach yourself into one gated behind a subscription service.

reply
boulos
1 hour ago
[-]
You can get by pretty well with the ~$20/month plans for either Claude or Gemini. You don't need the $200/month ones just to get a sense of how they work.
reply
tommica
25 minutes ago
[-]
Again, not everyone can afford it, and it becomes a hurdle. Computers are acquirable, but an extra $20 a month might not be.

And yes, that plan can get you started, but when I tested it, I managed to get one task done before having to wait 4 hours.

reply
ares623
1 hour ago
[-]
If the AI gets so good then they shouldn’t need to pre-learn.
reply
bradleyjg
6 hours ago
[-]
The bottom-up and top-down pictures don't seem to match.

Where is all the new and improved software output we’d expect to see?

reply
streetcat1
4 hours ago
[-]
For some reason the article misses two important points:

1) The AI code maintenance question: who will maintain the AI-generated code?

2) The true cost of AI: once the VC/PE money runs out and companies charge the full cost, what will happen to vibe coding?

reply
NitpickLawyer
2 hours ago
[-]
I think this post is a great example of a different point made in this thread. People confuse vibe-coding with LLM-assisted coding all the time (no shade on you, OP). There is an implied bias that all LLM code is bad, unmaintainable, incomprehensible. That's not necessarily the case.

1) Either you, the person owning the code, or you + LLMs, or just the LLMs in the future. All of them can work. And they can work better with a bit of prep work.

The latest models are very good at following instructions. So instead of "write a service that does X", you can use the tools to ask for specifics (i.e. write a modular service that uses concept A and concept B to do Y; it should use the x, y, z tech stack; it should follow this ruleset and these conventions; before testing, run these linters and these formatters; fix every env error before testing; etc.).

That's the main difference between vibe-coding and LLM-assisted coding. You get to decide what you ask for, and you get to set the acceptance criteria. The key point that non-practitioners always miss is that once a capability becomes available in these models, you can layer it on top of previous capabilities and get a better end result. Higher instruction adherence -> better specs -> longer context -> better results -> better testing -> better overall loop.

2) You are confusing the fact that some labs subsidise inference (in exchange for access to data, usage metrics, etc.) with the true cost of inference for a given model size. You can already get a good indication of what that cost is today for any given model size: third-party inference shops exist, and they are not subsidising the costs (they have no reason to). You can do the math as well and figure out an average cost per token for a given capability. And those open models are out, they're not going to change, and you can get the same capability tomorrow or in 10 years (and likely at lower cost, since hardware improves, the inference stack improves, etc.).

reply
gassi
7 hours ago
[-]
> Addy Osmani is a Software Engineer at Google working on Google Cloud and Gemini

Ah, there it is.

reply
ahmetomer
10 hours ago
[-]
> Junior developers: Make yourself AI-proficient and versatile. Demonstrate that one junior plus AI can match a small team’s output. Use AI coding agents (Cursor/Antigravity/Claude Code/Gemini CLI) to build bigger features, but understand and explain every line if not most. Focus on skills AI can’t easily replace: communication, problem decomposition, domain knowledge. Look at adjacent roles (QA, DevRel, data analytics) as entry points. Build a portfolio, especially projects integrating AI APIs. Consider apprenticeships, internships, contracting, or open source. Don’t be “just another new grad who needs training”; be an immediately useful engineer who learns quickly.

If I were starting out today, this is basically the only advice I would listen to. There will indeed be a vacuum in the next few years because of the drastic drop in junior hiring today.

reply
ares623
1 hour ago
[-]
What? That's written in a way that's like "men writing women": not putting themselves in the shoes of a junior who has no context and almost no opportunities.
reply
wakawaka28
8 hours ago
[-]
The outlook on CS credentials is wrong. You'll never be worse off than someone without those credentials, all other things being equal. Buried in this text is an assumption that the relatively studious people who get degrees are going to fall behind the non-degreed, because the ones who didn't go to school will out-study them. What will really happen, generally, is that the non-degreed will continue to not study, and they will lean on AI to avoid studying even the few things they might otherwise have needed in order to squeak by in industry.
reply
doug_durham
9 hours ago
[-]
The author has a bizarre idea of what a computer science degree is about. Why would it teach cloud computing or dev ops? The idea is you learn those on your own.
reply
happytoexplain
9 hours ago
[-]
If that's "the idea", then clearly we need a more holistic, useful degree to replace CS as "the" software degree.
reply
kibwen
8 hours ago
[-]
Despite what completely uninformed people may think, the field "computer science" is not about software development. It's a branch of mathematics. If you want an education in software development, those are offered by trade schools.
reply
AnimalMuppet
7 hours ago
[-]
What I want is for universities to offer a degree in Software Engineering. That's a different field from Computer Science.

You say that belongs in a trade school? I might agree, if you think trade schools and not universities should teach electrical engineering, mechanical engineering, and chemical engineering.

But if chemical engineering belongs at a university, so does software engineering.

reply
collingreen
6 hours ago
[-]
Plenty of schools offer software engineering degrees alongside computer science, including mine ~20 years ago.

The bigger problem when I was there was undergrads (me very much included) not understanding the difference at all when signing up.

reply
xboxnolifes
5 hours ago
[-]
Many do. Though, the one I'm familiar with is basically a CS-lite degree with software specific project design and management courses.

Glad I did CS, since SE looked like it consisted of mostly group projects writing 40 pages of UML charts before implementing a CRUD app.

reply
none2585
6 hours ago
[-]
Saying this as a software engineer that has a degree in electrical engineering - software "engineering" is definitely not the same as other engineering disciplines and definitely belongs in a trade school.
reply
pkaye
6 hours ago
[-]
My university had Electrical Engineering, Computer Engineering, Software Engineering, and Computer Science degrees (in addition to all the other standard ones).
reply
mxkopy
7 hours ago
[-]
Last I checked ASU does, and I’m certain many other universities do too.
reply
throwaway7783
7 hours ago
[-]
The degree is (should be) about CS fundamentals and not today's hotness. Maybe a "trades" diploma in CS could teach today's hotness.
reply
wrs
8 hours ago
[-]
Cloud computing is not some new fundamental area of computer science. It’s just virtual CPUs with networks and storage. My CS degree from 1987 is still working just fine in the cloud, because we learned about CPUs, virtualization, networks, and storage. They’re all a lot bigger and faster, with different APIs, but so what?

Devops isn’t even a thing, it’s just a philosophy for doing ops. Ops is mostly state management, observability, and designing resilient systems, and we learned about those too in 1987. Admittedly there has been a lot of progress in distributed systems theory since then, but a CS degree is still where you’ll find it.

School is typically the only time in your life that you’ll have the luxury of focusing on learning the fundamentals full time. After that, it’s a lot slower and has to be fit into the gaps.

reply
wakawaka28
8 hours ago
[-]
There has to be a balance of practical skills and theory in a useful degree, and most CS curricula are built that way. It should not be all about random hot tech because that always changes. You can easily learn tech from tutorials, because the tech is simple compared to theory. Theory is also important to be able to judge the merits of different technology and software designs.
reply
tibbar
9 hours ago
[-]
Why is this necessarily true?
reply
sys_64738
9 hours ago
[-]
A CS degree is there to teach you concepts and fundamentals that are the foundation of everything computing related. It doesn't generally chase after the latest fads.
reply
tibbar
7 hours ago
[-]
Sure, but we need to update our definitions of concepts/fundamentals. A lot of this stuff has its own established theory and has been a core primitive for software engineering for many years.

For example, the primitives of cloud computing are largely explained by papers published by Amazon, Google, and others in the mid-2000s (Dynamo, Bigtable, etc.). If you want to explore massively parallel computation or container orchestration, it is natural to do that using a public cloud, although of course many of the platform-specific details are incidental.

Part of the story here is that the scale of computing has expanded enormously. The DB class I took in grad school was missing lots of interesting puzzle pieces around replication, consistency, storage formats, etc. There was a heavy focus on relational algebra and normal forms, which is just... far from a complete treatment of the necessary topics.

We need to extend our curricula beyond the theory required to execute binaries on individual desktops.

reply
michaelsalim
4 hours ago
[-]
I do agree that the scale has expanded a lot. But this is true of any other field. Does that mean you need to learn everything? At some point it becomes unfeasible.

See doctors for example, you learn a bit of everything. But then if you want to specialise, you choose one.

reply