I don’t know if distributed systems is considered part of “Computer Science”, but it is a much more common problem that I see needing to be solved.
I try to write systems in the simplest way possible, then use observability tools to figure out where the design is deficient, and only then pull out a data structure or some other “computer sciency” thing to solve that problem. It turns out that big-O notation and runtime complexity don’t matter the majority of the time, and you can solve most problems with arrays and fast CPUs. And even when you do have runtime problems, you should profile the program to find the hot spots.
What computer science doesn’t teach you is how memory caching works in CPUs. Your fancy graph algorithm may have good runtime complexity but it completely hoses the CPU cache and you may have been able to go faster with an array with good cache usage.
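As a rough sketch of the two access patterns (hedged: Python boxes every value and adds interpreter overhead, so this understates the effect you'd see in C, but the contrast between sequential traversal and pointer-chasing is the point):

    import random
    import time

    N = 5_000_000
    data = list(range(N))

    # Sequential traversal: the next element is adjacent in memory.
    t0 = time.perf_counter()
    total = 0
    for x in data:
        total += x
    sequential = time.perf_counter() - t0

    # Pointer chasing: each step jumps to an unpredictable index, defeating
    # the prefetcher, much like walking a pointer-heavy graph or linked list.
    order = list(range(N))
    random.shuffle(order)  # order[i] plays the role of a next-pointer
    t0 = time.perf_counter()
    total = 0
    i = 0
    for _ in range(N):
        total += data[i]
        i = order[i]
    chasing = time.perf_counter() - t0

    print(f"sequential: {sequential:.2f}s  pointer-chasing: {chasing:.2f}s")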
The much more common problems I have are how to deal with fault tolerance, correctness in distributed locks and queues, and system scalability.
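As one sketch of the correctness problems I mean, here is the fencing-token pattern for distributed locks; all names are hypothetical, not a real library API, and a real system would add leases and durable storage:

    import itertools
    import threading

    class LockService:
        """Hands out a monotonically increasing fencing token with each grant."""
        def __init__(self):
            self._counter = itertools.count(1)
            self._lock = threading.Lock()

        def acquire(self) -> int:
            with self._lock:
                return next(self._counter)

    class Storage:
        """Rejects writes whose token is older than the newest one seen,
        protecting against a paused client waking up with a stale lock."""
        def __init__(self):
            self._highest_token = 0
            self.value = None

        def write(self, token: int, value) -> bool:
            if token < self._highest_token:
                return False  # stale holder: fence it off
            self._highest_token = token
            self.value = value
            return True

    locks, store = LockService(), Storage()
    t1 = locks.acquire()              # client A gets token 1
    t2 = locks.acquire()              # A's lease expires; client B gets token 2
    assert store.write(t2, "B")       # B writes first
    assert not store.write(t1, "A")   # A wakes up late and is rejected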
Maybe I am just biased because I have a computer/electrical engineering background.
It is hard to design systems if you don't have the perspective of implementing them. Yes, you move up the value chain to designing things, but no, you don't get to skip gaining experience lower down the value chain.
> What computer science doesn’t teach you is how memory caching works in CPUs.
That was literally my first quarter in my CS undergrad 30 years ago, the old Hennessy and Patterson book, which I believe is still used today. Are things so different now?
> The much more common problems I have is how to deal with fault tolerance, correctness in distributed locks and queues, and system scalability.
All of that was covered in my CS undergrad, I wasn't even in a fancy computer engineering/EE background.
At my uni 10 years ago the CS program didn’t touch anything related to hardware; hell, CS students didn’t even need to take multivariable calculus. In my computer engineering program we covered solid state physics, electromagnetism, digital electronics design, digital signal processing, CPU architecture, compiler design, OS design, algorithms, software engineering, and distributed systems design.
The computer engineering program took you from solid state physics and transistor design to PAXOS.
The CS program was much more focused on logic proofs and more formalism and they never touched anything hardware adjacent.
I realize this is different between programs, but from what I read and hear many CS programs these days start at Java and never go down abstraction levels.
I do agree with you that learning the fundamentals is important, but I would argue that a SICP type course is not fundamental — physics is fundamental. And once you learn how we use physics to build CPUs you learn that fancy algorithms and complex solutions are not necessary most of the time given how fast computers are today. If you can get your CPU pipelined properly with high cache hits, branch prediction hits, prefetch hits, and SIMD you can easily brute force many problems.
And for those 10% of problems which cannot be brute forced, 90% of those problems can be solved with profiling and memoization, and for the 10% of those problems you cannot solve with memoization you can solve 90% of them with b-trees.
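To make the memoization point concrete, here is a minimal Python sketch using the counting-change problem (the same example SICP uses); without the cache the recursion is exponential, with it the answer is effectively instant:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def count_change(amount, coins=(1, 5, 10, 25, 50)):
        if amount == 0:
            return 1
        if amount < 0 or not coins:
            return 0
        # either use the first kind of coin, or skip that kind entirely
        return count_change(amount - coins[0], coins) + count_change(amount, coins[1:])

    print(count_change(100))  # 292 ways to make change for a dollar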
If you are into physics and mechanics, then you have to check the SICM (SICP’s less famous cousin) out as well. Again, MIT went the extra mile with that as well.
The only kids who learned anything else learned C++ so they could get jobs with DOD contractors.
In many applications, the amount of data grows at least as quickly as computer performance. If the time complexity of an algorithm is superlinear, today's computer needs more time to run it with today's data than yesterday's computer did with yesterday's data: double both the data and the hardware speed, and an O(n^2) algorithm still takes twice as long as it did before. Algorithms that used to be practical get more and more expensive to run, until they eventually become impractical.
The more data you have, the more you have to think about algorithms. Brute-forcing can be expensive in terms of compute costs and wall-clock time, while low-level optimizations can take a lot of developer time.
The difference of course was anti-aliasing, and much greater bit depth, and running multiple programming environments/toolkits (Carbon and Java).
Edit: we also did formal methods and proofs as part of core curriculum
Do you mean "Computer Organization and Design - The Hardware / Software Interface" or "Computer Architecture: A Quantitative Approach"? Thanks.
Isn’t it what SICP is all about?!
As for the OP's contention that computer science doesn't teach you to look out for things like cache thrashing, I wholeheartedly dissent from that supposition.
I recall at least 3 courses where substantial coursework was devoted to hacking, stack smashing, reverse compiling, profiling/introspection, kernel modification, so much beyond 'dynamic typing is an OO antipattern' stuff that gets IMO erroneously conflated with CS degrees.
Maybe these shit schools exist, but in a top 20 program you will definitely learn cache pitfalls.
There are still jobs where people write frameworks, database engines, or version control tools. Those jobs require heavy CS, and algorithms and data structures day to day. But there are fewer of those jobs nowadays, as no one is implementing a db engine for their app; they just use Postgres.
The other jobs, the vast majority, involve implementing business logic. Using a database with an understanding of how it works in detail is of course going to produce better outcomes. Yet one can still produce a great amount of working software without knowing how indexes are stored on disk.
Also, a lot of CS graduates fall into a trap where they think their job is to write a framework, when in reality they should just use frameworks and implement business logic, while using their CS background to fully understand the frameworks that already exist.
Most frameworks today are so complicated that you typically cannot understand them fully, and even understanding them somewhat partially is more than a full-time job.
If they weren't, the developers would add one more feature. That is due to the entirely human-made aspect of this discipline.
Then, for any details or unexpected behavior, it's a matter of knowing where to look in the documentation.
I agree... up to a point. Most software will likely be replaced/obsolete before it even reaches a scale where indexes even matter (at all) given how fast the underlying hardware is at this point.
... but I don't think this is particularly relevant wrt. the "to CS or not CS" question. If a CS grad has been paying any attention they usually have a decent idea of what kinds of problems are intractable vs. problems that are tractable (but maybe expensive to compute) vs. easy. Also just general exposure to different ways to approach solving a problem (logic programming, pure functional, etc.) can be very valuable. There's just much that one couldn't come up with on their own if one weren't exposed to the ideas from the vast expanse of ideas that are known in CS. (And even a master's doesn't come close to scratching the surface of it all.)
Traditionally, the field of databases is largely about solving algorithm problems in the scenario where you have much more data than can fit in memory. Data exists on disk as "pages", and you have a fixed number of "page slots" in RAM. Moving pages from disk to RAM or RAM to disk is slow, so you want to do as little of that as you can. This makes trivial problems interesting -- e.g. there's no notion of a 'join' in classic computer science because it's too trivial to bother naming.
We're used to thinking of the study of algorithms as a sort of pure essence, but one could argue that algorithmic efficiency is only meaningful in a particular data and hardware context. That's part of what keeps our jobs interesting, I guess -- otherwise algorithm expertise wouldn't be as useful, since you could just apply libraries/cookbook solutions everywhere.
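To make the page-slot idea concrete, here is a toy sketch (hypothetical names; real engines add pinning, dirty-page write-back, and smarter replacement policies than plain LRU):

    from collections import OrderedDict

    class BufferPool:
        def __init__(self, slots: int, read_page_from_disk):
            self.slots = slots
            self.read_page_from_disk = read_page_from_disk  # the slow path
            self.cache = OrderedDict()  # page_id -> page bytes, in LRU order

        def get_page(self, page_id):
            if page_id in self.cache:
                self.cache.move_to_end(page_id)  # mark as recently used
                return self.cache[page_id]
            page = self.read_page_from_disk(page_id)  # the expensive part
            if len(self.cache) >= self.slots:
                self.cache.popitem(last=False)  # evict the least-recently-used page
            self.cache[page_id] = page
            return page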
"Software Design for Flexibility: How to Avoid Programming Yourself into a Corner" by Chris Hanson and Gerald Jay Sussman
It's from 2021.
The introduction to that book is brilliant at identifying just how much room software has to grow (you can find similar talks from various Strange Loop sessions Sussman has done), and is really quite inspirational for anyone seriously thinking about the future of computing.
But the rest of the book fails to provide a coherent answer to the questions that are brought up in the intro. It shows off some neat functional programming tricks, but repeatedly fails to deliver on solving the (admittedly ambitious) challenges it provides for itself.
I'm still glad I have a copy, and have re-read the first half multiple times now, but sadly it's not the book it wants to be. To be fair though, that is because we haven't come close to understanding computation enough to solve those problems.
It's a very ambitious book that falls short of its own ambitions.
_A Philosophy of Software Design_ by John Ousterhout (the guy behind Tcl)
https://www.goodreads.com/book/show/39996759-a-philosophy-of...
Written as the textbook for a software engineering course, it developed out of that course being taught multiple times _and_ all the code reviews which that entailed.
Previous discussions/mentions here which had a notable number of comments:
https://news.ycombinator.com/item?id=41017367
https://news.ycombinator.com/item?id=34733120
https://news.ycombinator.com/item?id=17779953
https://news.ycombinator.com/item?id=8055868
I re-wrote my current project in the course of reading it (I would read a chapter, then read through the code and, where appropriate, apply the relevant principle), and once I finish the current re-write (from OpenSCAD to Python) I will repeat that process to see if what I was supposed to have learned stuck/survived the re-write.
That's my opinion; I'd welcome others' opinions on the book. There was a small splash when it came out but not much discussion since.
When I've used (or built) something that was built in the style like you're talking about, it's almost always wrong, and the extra complexity and stuff now makes it harder to do right. It's not surprising: unknown future requirements are unknown. Over building is trying to predict the future.
It's like someone building a shed and pouring a foundation that can work for a skyscraper. Except it turns out what we needed was a house with a different footprint. Or maybe the skyscraper is twice the height and has a stop for the newly-built subway underneath. Now we have to break apart the foundation before we can even begin work on the new stuff; it would have been less work if the original had just used a foundation for a shed.
Nevertheless. It must be done. Theory and practice.
I do agree with you though.
Because SICP's starting point was so high, they could describe many concepts easily from the ground up: object-oriented programming, backtracking, constraint programming, and non-determinism.
This taught me a number of techniques to apply in real-life, because I could readily identify the missing building blocks in the language or system I was given to work with. For example, I was able to build a lightweight threads system in Java quite readily because I knew that the missing piece was a continuations feature in Java.
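That system was in Java, but the shape of the idea fits in a few lines of Python if you let generators stand in for continuations: each task is a resumable computation, and a trivial scheduler round-robins among them. A sketch:

    from collections import deque

    def worker(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # yield the CPU back to the scheduler

    def run(tasks):
        ready = deque(tasks)
        while ready:
            task = ready.popleft()
            try:
                next(task)          # resume the task where it left off
                ready.append(task)  # still alive: reschedule it
            except StopIteration:
                pass                # task finished

    run([worker("A", 2), worker("B", 3)])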
Writing a network between n such machines is left as an exercise to the reader.
Systems engineering is a separate discipline in engineering fields, and the same applies to CS, I would think. Specifically, if you look at what most developers do as computer engineering, it would only make sense that there is forward progression within the discipline that builds on the foundations taught in most CS courses.
Regarding CS not teaching caching: well, any course writer in a CS program who doesn't at least touch on that should feel like they failed their students. It is really something fundamental that should be taught pretty early.
I feel like profiling should be taught somewhere in a CS program too; even a half-semester course can dramatically improve people's understanding of what the machine is actually doing.
On that note, even if you never intend to use the language, watch some C++ conference talks; almost every year at every conference there is a talk discussing performance.
https://www.amazon.ca/Distributed-Systems-Maarten-van-Steen/...
That's true, but that doesn't mean that there is no value in having an understanding of how established technology works under the hood.
> What computer science doesn’t teach you is how memory caching works in CPUs.
That is also a very good point. There is a lot of daylight between the lambda calculus and real systems.
I have the exact opposite experience.
Software comes out best if you always use an approach with sensible runtime complexity, and only make trade-offs towards cache-friendly but worse-big-O implementations where you have benchmarked thoroughly.
Most cases where I encounter mega slow programs are because somebody put in something quadratic instead of using a simple, standard O(n log n) solution.
Check out https://www.tumblr.com/accidentallyquadratic for many examples.
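A minimal sketch of the classic shape, using nothing beyond the standard library: a membership test that scans a list inside a loop is quadratic, while the same logic with a set is linear.

    # Accidentally quadratic: each `in` test scans the whole list,
    # so de-duplicating n items costs O(n^2).
    def dedupe_quadratic(items):
        seen = []
        out = []
        for x in items:
            if x not in seen:   # O(n) scan inside an O(n) loop
                seen.append(x)
                out.append(x)
        return out

    # Same logic with a set: O(n) overall.
    def dedupe_linear(items):
        seen = set()
        out = []
        for x in items:
            if x not in seen:   # O(1) expected hash lookup
                seen.add(x)
                out.append(x)
        return out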
Computer architecture and organization should teach this, no?
The industry has expanded to include a lot of large-scale distributed cloud projects (often where we might have expected mainframes and cobol before), with many of today's largest employers doing most of their work there, but none of that other stuff really went away. It's still being done every day.
You need a book for what you're doing, and not every book is going to be that. Apparently, SICP is not it. I possess and have read many books, and only some small number of them are applicable to the projects I'm working on at any time.
They don't compete with each other, they complement each other.
Yes it can, and there are tons of papers about data structures to use in various scenarios to handle not just L1, L2, L3, but also NUMA. Sure, this isn’t in SICP, but claiming CS as a field completely ignores how memory works is incorrect.
I’m being slightly facetious, but only slightly. If you really think everything is solvable with arrays, you are not going to scale well and of course you’re going to need to throw a lot more hardware at the problem.
It is good to know that solutions to the 2% exist, but what we should be focusing on is writing the simplest code possible which solves the problem, and then only optimizing afterwards using a profiler. God forbid you have to work on some codebase written by someone who believes they are the second coming of Haskell, with crazy recursion and backtracking, monads, red-black trees, and a DSL on top of the whole thing.
You are right that many problems can be solved with a single box, but my argument is that you do not need fancy algorithms to solve problems on a single box. We should strive to use single boxes whenever possible to reduce complexity.
Computation is designed by humans to serve humans, we should make it as easy as possible for humans to understand. I’m probably going to start a flamewar here, but this is why simple solutions like UNIX and golang have prevailed in the past. Simple code is easy to understand and therefore it is easy to modify and reason about. Some people think simple means that you decompose programs into the smallest possible functional parts, but simple to me is a 500 line main function.
It surely was part of my Informatics Engineering degree, with Tanenbaum's book being one of the required reads.
Still, I think the time investment to learn algos and data structures isn't too much of a burden.
https://mitp-content-server.mit.edu/books/content/sectbyfn/b...
https://web.mit.edu/6.001/6.037/sicp.pdf
I hadn't seen a blessed PDF version until today. Circa 2001, only the HTML version was freely available, and someone converted it to TeXinfo: https://www.neilvandyke.org/sicp-texi/
If anyone wants to work through SICP today, you can run the code in MIT Scheme, or in DrRacket: https://www.neilvandyke.org/racket/sicp/
All of that is to say: if you do not need MIT Scheme and don't want to fuss with compiling it, then Racket might be a better way to go.
I'm a huge fan of MIT Scheme, and have used it since 1984, but I would recommend using another implementation these days, especially on Mac.
See Scheme.org.
I still find their description of how to create and group abstractions in various layers to be useful personally and as a mentor. (In the videos, lesson 3A, 1:07:55)
You may accidentally wish for something you don’t yet know the true nature of, and this will create a fragile mess at the bottom. It usually does, because the algorithmic nature of things is rarely intuitive. Starting from the bottom is like starting from the quarks that you have, rather than from “I want magic to exist”. Well, it does not. You reach the bottom and there are quarks instead of magicules, and you’ve lost all the context clues along the way which could help convert between the two physics.
Both approaches have their use, because sometimes you have to be bold with your wishes to solve a deep problem. But personally I prefer the magic to be packed into the layer just below the top. I.e., build from the bottom up, and then, just before the business logic, create a convenience magic layer that translates to/from business speak. It becomes adjustable and doesn’t induce a tangled mess all the way down.
I've been trying a similar thing for my own effort to create a library for modeling G-code in OpenSCAD --- hopefully with the recent re-write in "pure" OpenPythonSCAD it will become something usable.
After many years of hobbyist programming (and consuming 'structured programming' books, as well as languages from Pascal to Common LISP), we used Abelson & Sussman at my undergraduate comp. sci. course, and it was eye-opening.
It demonstrates the simplicity, beauty, and interactivity of Scheme while teaching you that computer science is the layering of different kinds of abstractions (from procedural abstraction and data abstraction, over defining your own (domain-specific) language and implementing a compiler for it, to defining new hardware in software). All of it seems so effortless, the way only true masters can make things look.
Make sure you buy the second edition, however, not the first or the more recent one (which uses JavaScript instead of Scheme - ugh).
Is that a downside? I never wrote or used a metacircular interpreter in my life and still don’t know why I had to read about it. Is it an interesting implementation technique for Lisp? Yes. Does anyone really need it?
You could rip out that part and everything that follows, and what remains would be enough for a regular programmer. No one ITT needs to know how to design a metacircular interpreter on register machines.
It probably doesn't help that I've seen many courses/documents that are (in hindsight) derivatives from SICP, so I have the nagging thought "not this again" when a topic is introduced in SICP.
There is a Python version of SICP. I have never worked through it or even given it more than a cursory scan, so this is not an endorsement, more just a link to prove it exists:
https://wizardforcel.gitbooks.io/sicp-in-python/content/0.ht...
I do myself, probably because I knew and respected the guy: I reread the works of Dijkstra every so often, books + papers. Not really applicable anymore, but good for the brain, and he was a good writer (imho).
There are things to love about DrRacket: hovering over a variable and visually seeing its connections to other places in the code is really cool. But ultimately I was a bit frustrated that it wasn't VS Code.
So I stood up a configuration that let me use VS Code (Cursor, actually) to work through the exercises. The LLM integration in Cursor is cool, as you can give it your code and whatever narrative you wrote and ask for feedback.
I am a tiny way through the exercises, but I have turned my code, the responses that I write, and the feedback that I get from the LLM into a static site.
It's been a fun way to spend a little time. For sure, I'm not getting the full benefit of working through SICP just with my own thoughts (without the aid of an LLM), but it's neat to see how you can integrate an LLM into the exercise.
Javascript version: https://sourceacademy.org/sicpjs/index
If you want to get it elsewhere, the full info is: Structure and interpretation of computer programs by Hal Abelson and Jerry Sussman (MIT Press. 1984. ISBN 0-262-01077-1).
K&R influenced a generation of programmers.
Hennessy and Patterson influenced a generation of architects.
etc. etc.
It's not just SICP.
But the greater point: a book can be meaningful, and we can always use more good ones.
This is probably the part where you'd step up and post a link to your repo with solutions to the exercises to back up your talk, but generally I only see this sort of casual dismissal from people who haven't actually worked through the book.
Telling someone that he/she is smarter than 90% of the people is not a praise. :-)
My TA told me that everybody should take the class twice: when you first come in, and when you're graduating.
When you first take it, especially if you already know other languages like C, you don't get the full depth of the problems. You're given a great introduction and you think you understand everything, but you don't realize the depth of the complexity: message passing, the metacircular evaluator, continuations as the basis of all flow control, etc.
You think they are neat tricks, and that you understand the curriculum because you can do the homework, but you don't understand how those neat tricks are really the basis of everything else you'll do.
When you're graduating, you've had time to go through all your classes, and you realize just how foundational those principles are; you get so much more out of the book.
Well, I didn't take the class a second time, but I did help grade and TA it for a couple of semesters.
I work as a quant developer in trading now, and even though my field has nothing to do with that, I still think it's the basis of me as a developer.
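For anyone who hasn't seen it: the kernel of such an evaluator really is tiny. A sketch of the idea in Python rather than Scheme (so "metacircular-style" rather than strictly metacircular), covering numbers, variables, if, lambda, and application only:

    import operator

    def evaluate(expr, env):
        if isinstance(expr, (int, float)):
            return expr              # numbers are self-evaluating
        if isinstance(expr, str):
            return env[expr]         # variables: look up in the environment
        op, *args = expr             # otherwise: a special form or an application
        if op == "if":
            test, conseq, alt = args
            return evaluate(conseq if evaluate(test, env) else alt, env)
        if op == "lambda":
            params, body = args
            return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
        fn = evaluate(op, env)       # application: evaluate the operator...
        return fn(*[evaluate(a, env) for a in args])  # ...and the operands

    env = {"+": operator.add, "<": operator.lt}
    prog = [["lambda", ["x"], ["if", ["<", "x", 10], ["+", "x", 1], "x"]], 5]
    print(evaluate(prog, env))  # 6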
Ironic, given the increasing use of functional programming in domains where old-fashioned imperative/OO programming used to reign alone.
The idea that a 'procedural programming paradigm' exists in contrast with a 'functional programming paradigm' is blogspeak imho.
I am using Elixir’s Livebook to take notes and complete the exercises. It is very helpful to have a live notebook tool while reading it!
- Gödel, Escher, Bach - Hofstadter
- The Soul of a New Machine - Kidder
- The Emperor's New Mind - Penrose
- The Connection Machine - Hillis
- Algorithmics - Harel
Why? The modus operandi of problem solving in these books is object oriented programming masquerading as functional programming, and it is presented as a _neutral_ beginner book. It is _not neutral_. This is a very opinionated approach to programming.
To be fair, I do not believe the authors intended for this style of programming to be taken as gospel, but it is often presented _without counterpoint_.
The most powerful technique introduced -- implementing complex behavior via extensible polymorphic generics -- is virtually unmaintainable without a compiler-supported static type checker. You would know that if you ever tried to implement the code yourself in a dynamic language of your choice.
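Concretely, the technique in question is data-directed dispatch through an (operation, type) table. A sketch in Python with hypothetical names; note that nothing but discipline stops two packages from colliding in the table, which is the maintainability concern above:

    _table = {}

    def put(op, type_tag, fn):
        """Install an implementation of `op` for values tagged `type_tag`."""
        _table[(op, type_tag)] = fn

    def apply_generic(op, obj):
        """Dispatch on the runtime type tag; failures only surface at runtime."""
        type_tag, data = obj
        fn = _table.get((op, type_tag))
        if fn is None:
            raise TypeError(f"no method for {op} on {type_tag}")
        return fn(data)

    # Two independent "packages" install their own representations.
    put("area", "circle", lambda r: 3.141592653589793 * r * r)
    put("area", "square", lambda s: s * s)

    print(apply_generic("area", ("circle", 2.0)))
    print(apply_generic("area", ("square", 3.0)))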
The ramifications of these choices can be felt far and wide and are largely unquestioned.
Ironically, they make code hard to understand, hard to extend, and hard to maintain. I need to reiterate, I do not believe the intention of the authors was to suggest these ideas should be used beyond a pedagogical setting, but they often are.
As a specific critique to SD4F, which states as a goal making code more resilient by emulating biology, I would point to Leslie Lamport's talk on logic vs biology[1].
I would add that I think SICP would be fine if it were taught in tandem with Paradigms of Artificial Intelligence Programming by Peter Norvig[2]. PAIP offers a completely different approach to solving problems, also using lisp. This approach is much closer to constructing a language to model a problem and then solving the problem symbolically using the language created. Areas that use OO techniques, such as the chapter on CLOS, are clearly marked as such.
In other words, I say "SICP considered harmful" because thrusting it upon an eager newcomer as a trusted neutral guide to beginner coding (without offering any counterpoint) could set them back by a decade, filling their head with "functional object oriented programming" concepts that don't translate well to industry or CS.
[*]: I say this as someone who has thoroughly studied both books, implemented the code, taken Dave Beazley courses to have the information spoon-fed to me (dabeaz is awesome btw, take all his stuff), and used the techniques in production code bases.
[1]: https://lamport.azurewebsites.net/pubs/future-of-computing.p...
I'd counter that by saying it would set them forward by a decade (compared to people who don't know these techniques). Knowing advanced techniques doesn't mean trying to shoehorn them into every run of the mill problem you encounter in the industry. But if you encounter a gnarly problem where some advanced techniques will help you out, you'll sure be glad you learnt them.
I might agree with your hot take in the sense that leaving choice is important, though.
I work with distributed systems, writing business logic and dealing with infrastructure concerns. For me, learning about databases, quirks of distributed systems, and patterns for building fault-tolerant services is more important than reading the nth book on structuring programs, deciding which algorithm to use, or figuring out whether my algorithm has O(1) or O(n) complexity.
This doesn’t mean CS fundamentals aren’t important—they are—but I work in a different space. I’d get more value out of reading Designing Data-Intensive Applications than SICP. If I were in the business of building frameworks or databases, I’d probably be the target audience.
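One example of such a pattern, as a hedged sketch (the flaky call and its error type are placeholders, not any particular library): retries with exponential backoff and full jitter, so a fleet of retrying clients doesn't stampede a recovering service.

    import random
    import time

    def call_with_retries(fn, max_attempts=5, base_delay=0.1, max_delay=5.0):
        for attempt in range(1, max_attempts + 1):
            try:
                return fn()
            except ConnectionError:
                if attempt == max_attempts:
                    raise  # retry budget exhausted: surface the failure
                # exponential backoff with full jitter spreads out retries
                delay = min(max_delay, base_delay * 2 ** attempt)
                time.sleep(random.uniform(0, delay))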
We need someone who knows this, just like we need someone to run the nuclear plant we all use. But we don't need many of those people; we need far more people who know how to use electricity. Hence, unlike what another post says, physics is not the key, even if it is more foundational.
For personal growth, it might still be, though.
But frankly, Lisp is such a non-multi-system language; by its nature it has a hard time dealing with the external world. It can be done, as Lisp really is the god-level programming language. But as I said, it is NOT used by the gods, for a reason.
We need to find a system-level language to express ourselves so that we can stand on the shoulders of giants. We need giants, but there's no need to be one.
I recall that when MIT stopped teaching with SICP, one of the main claims was that programming now is often not about thinking abstractions through from first principles, and creating some isolated gem of composing definitions. Instead, we interact with and rely on a rich ecosystem of libraries and tools which often have individual quirks and discordant assumptions, and engineering then takes on a flavor of discovering and exploring the properties and limitations of those technologies.
I think now, (some) people also are at the point of not even directly learning about the limitations and capability of each tool in their toolbox, but leaning heavily on generative tools to suggest low-level tactics. I think this will lead to an even messier future, where library code which works on (possibly generated) unit tests will bear some fragile assumption which was never even realized in the head of the engineer that prompted for it, and will not only fail but will be incorporated in training data and generated in the future.
Which is a category mistake that they actually address in the lectures. SICP is not a programming course, it’s a computer science course. Computer science is not about computers, let alone programming, just as geometry is not about surveying instruments and astronomy is not about telescopes.
When they stopped teaching SICP — in response to the pressure to teach more modern tools — they abandoned their scientific principles to satisfy commercial concerns. They stopped teaching computer science and became a vocational school for the tech industry.
1. Primitive operations
2. Means of combination
3. Means of abstraction
These are the three main features of programming languages, according to Sussman and Abelson. They wanted us to stop bikeshedding over the superficial details and just look at a tool for these 3 features, then be able to implement the algorithms and data structures we already know. This is how you become a wizard who can cast magic spells.
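A deliberately tiny illustration of the three features, in Python rather than Scheme:

    # 1. Primitive operations: numbers and arithmetic
    3 * 3

    # 2. Means of combination: build bigger expressions out of smaller ones
    3 * 3 + 4 * 4

    # 3. Means of abstraction: name the pattern and treat it as a new primitive
    def square(n):
        return n * n

    def sum_of_squares(a, b):
        return square(a) + square(b)

    print(sum_of_squares(3, 4))  # 25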
I don't see what you mean by this at all. Furthermore this doesn't strike me as a useful distinction when a) it doesn't cover most topics labeled by consensus as "computer science" and b) it very clearly does teach a great deal about programming.
Why not say it teaches computer science and programming skills? Why do these have to be exclusive? There's obviously a great deal of overlap in general.
The goal of the course is not to teach programming skills, it's to teach computer science. The difference is explained quite thoroughly in the lectures. One might even say that answering the question "what is computer science?" is one of the core goals of the course and a major part of the philosophy of the professors who created the course.
The argument being made by the comparisons to geometry and astronomy is that in any discipline there is a difference between means and ends: what you are attempting to achieve is distinct from the tools you're using to achieve it. Furthermore, it's a mistake to believe that the discipline is all about the tools. No, the tools are the means, not the end.
Who cares what the goal is? It teaches programming skills too. The intent is irrelevant, and for the most part so too is the distinction (outside the American education system, anyway).
> Furthermore, it's a mistake to believe that the discipline is all about the tools.
Who outside the American education system gives a damn about "the discipline", if that even refers to anything meaningful outside the American education system in the first place? It's arbitrary and has no purpose or benefit aside from organizing the education system. This is a course that miraculously, against all odds, manages to teach useful skills in addition to jargon and patterns of thought. Why not celebrate this?
Anyway, programming is a useful pedagogical tool for teaching CS. CS is a useful pedagogical tool for teaching programming. To brag about not teaching one is just hobbling your own insight into the value you provide students.
I myself have a CS degree from a prestigious institution and largely enjoyed my education. But this attitude you allude to is just jerking off for the sake of jerking off. Particularly in the case of SICP.
Unless you have a TLA+ model of all your components and how they interact, I would argue you don't understand your distributed system either, for all inputs.
https://web.archive.org/web/20160505011527/http://www.poster...
Sheldon always said that MIT is a trade school
I imagine people who were taught SICP would be more respectful, if not inclined, towards a formal articulation of a system's principles.
This philosophy is described in depth in the original 1985 article https://gwern.net/doc/cs/algorithm/1985-naur.pdf and in more accessible language in https://www.baldurbjarnason.com/2022/theory-building/. You can also observe engineers opposing/misunderstanding the need for specification in https://news.ycombinator.com/item?id=42114874
Then, when you hit the job market, you learn the ecosystem of what other engineers have built and you work in that context.
In this way, you can eventually reach extreme productivity. Just look at humanity's GDP over the last 200 years.
> It's old and feels old. Originally in Scheme, they recently re-released the book in JavaScript, which is more approachable to today's audiences, and there are still good things in there about encapsulation and building DSLs. YMMV. Though the language and programming design concepts hold up, we're playing at higher levels of abstraction on more powerful machines, and consequently the examples sometimes seem too tiny and simple.
I had studied economics in a similar way, but learning slightly old/outdated ideas demotivated me - I was much more interested in learning what works and what's considered the best way to do things, not what had been considered a good idea at some point in the past.
I don't want to be a downer on SICP (especially since I haven't even read it), but I hope this info might help others (or elicit a strong refutation).
In the more practical area, Racket (the most modern Scheme) has basically any practical functionality you would want, while amazingly remaining a platform for an incredible amount of experimentation in computation and programming language theory.
But SICP is a book for people interested in the study of computation and what programming languages can be. If you're worried about getting a job in software it won't be all that useful, but it will remain a classic for anyone interested in engaging in creating the future of software.
It's for people that would like to learn rather advanced programming techniques and foundational ideas in computer science.