He also praises async/await as a revolutionary leap forward for programming languages, but without mentioning the language that pioneered it (that's right: C#, in version 5).
Now, I'm not going to sit here and claim that C# is perfect for all purposes. It has made mistakes that have to live on due to backwards compatibility. The mutability of structs is, in my opinion, one such mistake. But even so, of all the programming languages I've seen, C# fits this author's ideas like a glove.
Edit: the scenario he describes of putting a large array or other static data structure directly into the code is also better supported by modern C# compilers. For small structures you still get the element-by-element-adding bytecode, but there's also a serialization format that can bake large objects directly into the assembly.
But your idea of Java is pretty dated - it has type inference, full algebraic types (records and sealed classes), and pattern matching (switch expressions - though many of the more advanced features are still experimental).
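For anyone who hasn't kept up, here's a minimal sketch of the features the parent means - type inference (`var`), records, and pattern matching with deconstruction in a switch expression (record patterns require Java 21; the class and names are made up for illustration):

```java
// Requires Java 21+ for record patterns.
public class ModernJavaDemo {
    // A record: a concise, immutable "algebraic product type".
    record Point(int x, int y) {}

    static String describe(Object o) {
        // Pattern matching with deconstruction in a switch expression.
        return switch (o) {
            case Point(int x, int y) when x == y -> "diagonal point";
            case Point p -> "point " + p.x() + "," + p.y();
            case String s -> "string of length " + s.length();
            default -> "something else";
        };
    }

    public static void main(String[] args) {
        var p = new Point(3, 3);               // 'var' = local type inference
        System.out.println(describe(p));        // diagonal point
        System.out.println(describe("hello"));  // string of length 5
    }
}
```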
And then Java has virtual threads, which C# suddenly wants to add as well. That language is definitely on the same road as C++, and this "adding every conceivable feature" approach is not sustainable - it will crumble under its own weight. Java is much more conscious of this, which I greatly value.
For the record, I didn't say Java doesn't have those. The only thing I said it doesn't have is value types (and in another post, true generics). I'm aware that it has a couple features where C# is the one that's lagging behind, but on the whole, it's really no competition. I'm also aware that stuffing a language full of features willy-nilly is unwise and I'm curious where this will take C#.
Did C# finally implement some kind of discriminated unions with exhaustive pattern matching? Last time I checked, not even plain `enum`s supported exhaustive pattern matching. For comparison, even Java supports discriminated unions in the shape of sealed classes/interfaces with exhaustive pattern matching since Java 21.
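For reference, here's what that looks like in Java 21 - a sealed interface closes the set of variants, so the switch is checked for exhaustiveness with no `default` branch (names are made up):

```java
public class Shapes {
    // A sealed interface permits only the listed implementations,
    // so the compiler knows the full set of variants.
    sealed interface Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double w, double h) implements Shape {}

    static double area(Shape s) {
        // No 'default' branch needed: the switch is exhaustive, and
        // adding a new Shape variant becomes a compile error here.
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.w() * r.h();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(1)));       // ~3.14159
        System.out.println(area(new Rectangle(2, 3))); // 6.0
    }
}
```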
Data comes from outside your application code. Your algorithms operate on the data. A complaint like "There isn’t (yet?) a format for just any kind of data in .class files" is bizarre. Maybe my problem is with his hijacking of the terms 'data' and 'object' to mean specific types of data structures that he wants to discuss.
"There is no sensible way to represent tree-like data in that [RDBMS] environment" - there is endless literature covering storing data structures in relational schemas. The complaint seems to be to just be "it's complicated".
Calling a JSON payload "actual data" but a SOAP payload somehow not is odd. Again the complaint seems to be "SOAP is hard because schemas and ws-security".
Statements like "I don’t think we have any actually good programming languages" don't lend much credibility and are the sort of thing I last heard in first year programming labs.
I'm very much about "Smart data structures and dumb code works a lot better than the other way around" and I think the author is starting there too, but I guess he's just gone off in a different direction to me.
Ya, this one really confused me. Tree-like data is very easy to model in an RDBMS in the same way it is in memory, with parent+child node pointers/links/keys.
It's possible he was thinking of one of these challenges:
1. SQL doesn't easily handle the tree structure. You can make it work, but it's a square peg in a round hole.
2. He mentioned JSON, so maybe he was really thinking in terms of different types of data for each node - one node might have some text data and the next might have a list or dictionary of values. That is tougher in an RDBMS.
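The adjacency-list approach the grandparent describes - each row stores its parent's key, and the tree is reassembled in memory - can be sketched like this (the `Row` shape and all names are hypothetical):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TreeFromRows {
    // One "row" as it would come back from the RDBMS:
    // (id, parentId); a null parentId marks the root.
    record Row(int id, Integer parentId) {}

    static Map<Integer, List<Integer>> buildTree(List<Row> rows) {
        // childrenOf maps a node id to its child ids -- exactly the
        // parent+child-key structure you'd store in a relational table.
        Map<Integer, List<Integer>> childrenOf = new HashMap<>();
        for (Row r : rows) {
            if (r.parentId() != null) {
                childrenOf.computeIfAbsent(r.parentId(), k -> new ArrayList<>())
                          .add(r.id());
            }
        }
        return childrenOf;
    }

    public static void main(String[] args) {
        var rows = List.of(new Row(1, null), new Row(2, 1),
                           new Row(3, 1), new Row(4, 2));
        System.out.println(buildTree(rows)); // e.g. {1=[2, 3], 2=[4]}
    }
}
```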
When I first encountered RESTful web services using JSON, the ability to easily invoke them using curl was such a relief... (and yes, like lots of people, I went through a phase of being dogmatic about what REST actually is, HATEOAS and all the rest - but I got over that years ago).
NB I also am puzzled as to the definition of "data" used in the article.
I guess the angle is that there was a style of SOAP where the payload was interpreted as a remote procedure call conforming to an interface described in a WSDL document.
So there would be a SOAP library or server infrastructure (BizTalk) on the receiving end that would decode the message and turn it into a function call in some programming language.
In that sense, the payload would be "data" only for the infrastructure code but not on the application level.
And now I'm going to have to spend the rest of the day trying to forget this again :)
Most implementations don't retrieve parameters by tag; they retrieve them by order, even if that means the tags don't match at all. This is completely unlike JSON.
Also, almost nobody uses REST, so stop calling things REST, when you are talking about HTTP APIs.
I'm not dogmatic about this. People don't understand what REST is. REST is some completely useless technology that absolutely nobody needs. Using the right words for things isn't dogmatism, it's being truthful. The backlash from most people comes from Fielding's superiority complex where he presents REST as superior, when it is merely different, different in ways that aren't actually useful to most people, and yet people constantly give this guy a platform by referring to his obsolete architecture, earning them the "well actually"s they deserve.
IME this leads you to consider more and more featureful languages which are worse and worse at actually building software, and never match the flexibility of a tool like pen and paper.
Poor design is your own fault. Write more, draw more, prototype more. You may need to develop your own notation, you may need to get better at drawing or invest in a drawing tool. You may need to learn another programming language, which you use only for prototyping.
The design of your system does not need to be perfectly represented in your source code. The source code needs to produce runnable machine code, which behaves in the ways that your design dictates, and that's the only link between the design, the code, and the running system. Programming languages today are pretty good at producing working software, but not very good at doing that and designing systems and communicating designs and documenting choices, etc.
In contrast with languages like C++ and Java, where things are shaky from the ground up. If you can't get an integer type right (looking at you, boxing; or you, implicit type conversions), the rest of the language will always be compensating. It's another layer of annoyances to deal with. You'll be having a nice day coding and then be forced to remember that int is different from Integer and have to change your design for no good reason.
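The int/Integer annoyance is easy to demonstrate: small boxed values are cached by the runtime, so identity comparison behaves inconsistently depending on the value (on a default JVM; the cache range can be tuned):

```java
public class BoxingPitfall {
    public static void main(String[] args) {
        Integer a = 127, b = 127;  // autoboxed; [-128, 127] are cached by spec
        Integer c = 128, d = 128;  // outside the cache: distinct objects
        System.out.println(a == b);       // true  (same cached object)
        System.out.println(c == d);       // false (different objects!)
        System.out.println(c.equals(d));  // true  (value comparison)
        int x = c;                        // unboxing back to a primitive
        System.out.println(x == 128);     // true
    }
}
```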
Perhaps you disagree with Erlang's approach, but at least it's solid and thought-out. I'd take that over the C++ or Java mess in most cases.
Yes, immutable does provide some guarantees for "free" to prevent some types of bugs but offering it also has "costs". It's a tradeoff.
Mutating in place is useful for the highest performance - e.g. C/C++, the assembly MOV instruction, etc. That's why performance-critical loops in high-speed stock trading, video games, machine-learning backpropagation, etc. all depend on mutating variables in place.
That is a good justification for why the Erlang BEAM itself is written in C, with loops that mutate variables everywhere. E.g.: https://github.com/erlang/otp/blob/master/erts/emulator/beam...
There's no need to re-write BEAM in an immutable language.
Mutable data helps performance, but it also has "costs": unwanted bugs from data races in multi-threaded programs, etc. Mutable design has tradeoffs just as immutable design does.
One can "optimize" immutable data structures to reduce the performance penalty with behind-the-scenes data-sharing, etc. (Oft-cited book: https://www.amazon.com/Purely-Functional-Data-Structures-Oka...)
But those optimizations will still not match the absolute highest ceiling of performance with C/C++/asm mutation if the fastest speed with the least amount of cpu is what you need.
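The structural-sharing trick behind Okasaki-style persistent structures can be sketched in a few lines - "updating" an immutable list means building a new head that reuses the old list wholesale, rather than copying it (a toy cons list, not a production structure):

```java
public class ConsList {
    // An immutable singly linked list: "mutation" returns a new head
    // whose tail IS the old list -- nothing is copied.
    record Cons(int head, Cons tail) {}

    static Cons prepend(int value, Cons list) {
        return new Cons(value, list); // O(1): full structural sharing
    }

    static int sum(Cons list) {
        int total = 0;
        for (Cons c = list; c != null; c = c.tail()) total += c.head();
        return total;
    }

    public static void main(String[] args) {
        Cons xs = prepend(2, prepend(3, null)); // [2, 3]
        Cons ys = prepend(1, xs);               // [1, 2, 3]; shares xs
        System.out.println(sum(xs));         // 5  -- xs is unchanged
        System.out.println(sum(ys));         // 6
        System.out.println(ys.tail() == xs); // true: same object, shared
    }
}
```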
E.g. the value 5 can't be changed, neither in Haskell nor in C. A place that stores that value can, which makes the place an "object".
Mutability fundamentally means identity (see Guy Steele), so I think this is a fundamental distinction.
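In Java terms the value/place distinction looks something like this - a record models a value with no meaningful identity, while a one-element array models a mutable place (names are made up):

```java
public class ValueVsPlace {
    record Money(int cents) {} // models a value: identity is irrelevant

    public static void main(String[] args) {
        Money a = new Money(500);
        Money b = new Money(500);
        System.out.println(a.equals(b)); // true: same value
        System.out.println(a == b);      // false: two distinct "places"

        int[] place = {5};  // a mutable cell currently holding the value 5
        place[0] = 6;       // the value 5 didn't change; the place did
        System.out.println(place[0]); // 6
    }
}
```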
As for immutable data: sure, in single-threaded code it does carry some overhead, but it may also translate to higher-performance code in a multi-core setting thanks to better algorithms (lockless, or finer-grained locking).
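A minimal sketch of the lockless pattern alluded to here: readers always see a consistent immutable snapshot with no lock at all, and writers swap in a new snapshot with compare-and-set (class and method names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class LockFreeAppend {
    // Holds an immutable snapshot; readers never need a lock.
    static final AtomicReference<List<Integer>> data =
            new AtomicReference<>(List.of());

    static void append(int value) {
        while (true) {
            List<Integer> old = data.get();
            List<Integer> next = new ArrayList<>(old);
            next.add(value);
            // Publish the new immutable snapshot only if nobody raced us;
            // otherwise loop and retry against the fresher snapshot.
            if (data.compareAndSet(old, List.copyOf(next))) return;
        }
    }

    public static void main(String[] args) {
        append(1);
        append(2);
        System.out.println(data.get()); // [1, 2]
    }
}
```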
Having separate types for these is the problem; not the boxing. C#/IL/CLR handles boxing in a way that doesn't exhibit the problem. If your code is dealing with integers, they are never boxed. They are only boxed when you cast to a reference type such as object. As soon as you cast back to int, you are unboxing.
Java exhibits the problem in a big way because it doesn't have true generics, so you can't have (say) a list of integers or a dictionary with integer values, so they must always be boxed, so you need a separate “boxed integer” type to maintain type safety. In C# you can just use unboxed integers everywhere.
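The Java side of this is easy to show: generics are erased, so a primitive type parameter won't compile, and every element of a collection gets boxed into the wrapper class:

```java
import java.util.List;

public class GenericBoxing {
    static int sum(List<Integer> xs) {
        int total = 0;
        for (int x : xs) total += x;  // each access unboxes Integer -> int
        return total;
    }

    public static void main(String[] args) {
        // List<int> xs = List.of(1, 2, 3);  // does not compile: erased
        // generics accept only reference types, hence the Integer wrapper.
        List<Integer> xs = List.of(1, 2, 3); // each element is boxed

        System.out.println(sum(xs)); // 6
    }
}
```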
This. I discovered this by implementing Flow-Based Programming[1] (FBP) in Erlang[2] - FBP is best suited to languages that are message-based with immutable data structures. When using a mutable language, messages are constantly being cloned as they are passed around. With Erlang/BEAM, this isn't necessary.
My feeling is that FBP is a much under-rated programming paradigm and something really worth exploring as an alternative to the current programming paradigms.
My take on FBP is that data is being passed to functionality, as opposed to functions being passed to data (the functional programming paradigm), or data and functions being passed around together (as is the case with OOP).
IMHO it makes sense to pass data to functions, particularly in the times of SaaS and cloud computing.
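A toy illustration of that "data flows to functionality" idea - two components connected by a queue, exchanging immutable messages (all names are made up; real FBP frameworks do far more):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MiniFbp {
    record Msg(int payload) {} // immutable message: safe to share, no cloning

    static int runPipeline() {
        BlockingQueue<Msg> wire = new ArrayBlockingQueue<>(10);

        // Component 1: pushes data onto its outgoing connection.
        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 3; i++) {
                try { wire.put(new Msg(i * 10)); }
                catch (InterruptedException e) { return; }
            }
        });
        producer.start();

        try {
            // Component 2: the data travels to the code, not vice versa.
            int total = 0;
            for (int i = 0; i < 3; i++) total += wire.take().payload();
            producer.join();
            return total;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(runPipeline()); // 60
    }
}
```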
Erlang and Elixir (and Clojure), however, lack a static type system, which makes it really difficult to use them at large scale (I am happy if you can provide convincing evidence to the contrary - I just haven't seen any). There's Gleam, which is a beautiful, simple language, that has a very good type system, but unfortunately, it's a bit too simple and makes certain things harder, like serialization (e.g. https://hexdocs.pm/gleam_codec/).
Haskell and OCaml are more usable, but for some reason remain extremely niche. I don't think there's any popular language in the "statically typed, functional" school, which I think shows that humans just don't prefer using them (they have been hyped for decades now, yet they never stick). Perhaps a new generation of these languages will change things, like Unison for example (which stays away from monads and provides an arguably superior abstraction, Abilities - also known as Effects). I think I would love for that to happen, though as I said before: sometimes you still need to reach for bare-metal performance and you have to use Rust/C++/D or even Java.
I've only seen this pattern used in contexts where there is no filesystem to begin with.
The only alternative is to have the data in a separate file, which needs to be available, read in, and parsed. .NET/CLR provides a mechanism to bake large objects into the assembly and I don't see that as abusive. It's way more convenient when you can treat the object as being “just there”.
In my dream language I'd push this further; case classes should not offer any object identity APIs (essentially the whole of java.lang.Object) and not be allowed to contain non-value classes or mutable fields, and maybe objects should be a bit more decoupled from their state. But for now I wouldn't let perfect be the enemy of good.
Do you mean allowing the "class" of an object to be changed? CLOS can do that. Mind you, it's a long time since I wrote any code using CLOS, and even then I'm pretty sure I never used change-class.
Clojure is different, of course. But no more functional than Haskell, for example.
Every language I've used since then feels like it makes this issue needlessly complicated and implicit.
However, OCaml’s first class modules are frequently a useful and serviceable alternative. I think they’re a precise middle ground between what we here call “objects” and “data.”
> [Data] The schema gives us a fixed set of variants, over which you can of course write any function you want.
> [Objects] We have a fixed set of exposed operations, but different variants can be constructed (including an improved ability to evolve a variant without impacting clients).
No mention of the expression problem? The TL;DR is, sometimes[1] we want both. And sometimes[2] it's an exceptionally good idea for a lot of slightly different sets of variants to coexist within the same program, which there also isn't really a satisfactory solution for.
[1] https://www.craftinginterpreters.stuffwithstuff.com/represen...
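A compact way to see the tension in Java (sketch, Java 21): the "data" style makes new operations easy but new variants hard; the "object" style inverts that:

```java
public class ExpressionProblem {
    // Data style: a fixed set of variants, an open set of operations.
    sealed interface Expr permits Lit, Add {}
    record Lit(int value) implements Expr {}
    record Add(Expr left, Expr right) implements Expr {}

    // Adding a new operation is just one new function...
    static int eval(Expr e) {
        return switch (e) {
            case Lit l -> l.value();
            case Add a -> eval(a.left()) + eval(a.right());
        };
    }

    static String show(Expr e) { // ...and another, with no variant touched.
        return switch (e) {
            case Lit l -> String.valueOf(l.value());
            case Add a -> "(" + show(a.left()) + " + " + show(a.right()) + ")";
        };
    }
    // But adding a new variant (say, Mul) forces edits to eval AND show.
    // The object style (an Expr interface with eval/show methods) inverts
    // this: new variants are easy, new operations touch every class.

    public static void main(String[] args) {
        Expr e = new Add(new Lit(1), new Add(new Lit(2), new Lit(3)));
        System.out.println(eval(e)); // 6
        System.out.println(show(e)); // (1 + (2 + 3))
    }
}
```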
The expression problem absolutely captures the nature of the issue and a lot has been written about it.