You have to use encapsulation, inheritance and polymorphism. Fields and properties shouldn't have public setters. You assign a value only through a method, otherwise you make the gods angry.
You have gazillions of constructors, static, public, protected, private and internal. And you have gazillions of methods, static, public, private, protected and internal.
You inherit at least an abstract class and an interface.
You have at least some other types composed in your type.
Unless you do all this, some people might not consider them proper types.
I had my engineering manager warn me that data classes are "anemic models". Yes, but why? "We should have logic and methods to set fields in classes." "Yes, but why?" "It's OOP and we do DDD and use encapsulation." "Yes, but why? Imagine we have immutable records that hold just data, and static classes as function containers, and those functions just act on the records, return new ones, and change no state. This way we can reason with ease about our goddamn software, especially if we don't encapsulate state and mutate it all over the place, Uncle Bob be damned." He shook his head in horror. He probably thinks I am some kind of heretic.
But he eventually turned to the dark side. He claims Clojure, a functional Lisp, is his absolute favorite language.
He’s even got blog posts about it! Multiple!
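The immutable-records-plus-static-function-containers idea from the exchange above can be sketched in a few lines of TypeScript (all the names here are invented for illustration, not from any real codebase):

```typescript
// An immutable record: just data, no behavior, no mutable state.
type Account = Readonly<{ owner: string; balance: number }>;

// A plain "function container": pure functions acting on the record.
// Each returns a new record; nothing is mutated in place.
const Accounts = {
  deposit: (a: Account, amount: number): Account =>
    ({ ...a, balance: a.balance + amount }),
  withdraw: (a: Account, amount: number): Account =>
    ({ ...a, balance: a.balance - amount }),
};

const before: Account = { owner: "ada", balance: 100 };
const after = Accounts.deposit(before, 50);
// `before` is untouched, so reasoning stays local to inputs and outputs.
```

Because no state is encapsulated or mutated, any call site can be understood from its arguments and return value alone.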
I stumbled on this while reading about Clojure. I really like his blog!
https://blog.cleancoder.com/uncle-bob/2019/08/22/WhyClojure....
Previously, I thought FP was a way to coast on happy-path, incidental habits and avoid studying every pattern. But if patterns are discovered, arising out of independently invented idioms, then the best I could do is reinvent what everyone else has already found to work for them (and what turned out to be a design pattern).
It has also helped me move from taking Gang of Four (GoF) examples literally (if we don't have exactly these classes, it's wrong) to matching a potential solution against the context of a given problem.
The light-bulb moment is realizing that OS artifacts like the filesystem, programming constructs like modules, and even some one-off scripts can participate in a pattern implementation; it isn't just about having a specific constellation of classes.
[1] Holub on Design Patterns
No! Patterns are just crutches for missing language features or "design patterns are bug reports against your programming language." GoF patterns are concepts useful in OOP, but the recurring patterns and architectures you see in other paradigms are totally different. And they don't apply to Lisp: https://www.norvig.com/design-patterns/ Most don't even apply in Go: https://alexalejandre.com/programming/software-architecture-...
> visitor is type switching, singleton global variables, command and strategy are first class functions with closures, state is building a state machine with first class functions
If you could perfectly compress your code, repeated patterns would be factored away. Macros do this. In Lisp, when you find a pattern, you write code which generates that pattern, so you don't have to.
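As a concrete instance of the quoted list, command and strategy reduce to first-class functions with closures. A TypeScript sketch (the pricing names are made up):

```typescript
// "Strategy" is just a function type; no interface hierarchy needed.
type Pricing = (base: number) => number;

const regular: Pricing = (base) => base;
const holidaySale: Pricing = (base) => base * 0.8;

// A closure captures its configuration, standing in for a strategy object.
const flatDiscount = (off: number): Pricing => (base) => base - off;

// The "context" is any code that accepts the function.
const checkout = (base: number, price: Pricing): number => price(base);

const a = checkout(100, regular);          // 100
const b = checkout(100, holidaySale);      // 80
const c = checkout(100, flatDiscount(15)); // 85
```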
https://mishadoff.com/blog/clojure-design-patterns/
> if patterns are discovered, arise out of independently invented idioms
Yes. That's the point behind Christopher Alexander's pattern concept - he found architectural patterns which seemed to promote good social habits, happiness etc. Gabriel's Patterns of Software presents this far better than GoF. I strongly suggest you read it: https://www.dreamsongs.com/Files/PatternsOfSoftware.pdf
Another aspect of FP is compositionality.
Singleton -> memoization of zero-arg fn (pro: initialization is implicit)
Memoization -> fn accepting a fn and returning a memoized version
Command & Strategy -> just a lambda, then?
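A sketch of those two reductions in TypeScript (the `getConfig` example is invented):

```typescript
// Memoization as a higher-order function: it takes a zero-arg
// function and returns a version that computes at most once.
function memoizeOnce<T>(init: () => T): () => T {
  let cached: T | undefined;
  let done = false;
  return () => {
    if (!done) {
      cached = init();
      done = true;
    }
    return cached as T;
  };
}

// "Singleton" then falls out as a memoized zero-arg constructor;
// initialization is implicit on first use.
let constructed = 0;
const getConfig = memoizeOnce(() => {
  constructed++;
  return { debug: true };
});

const c1 = getConfig();
const c2 = getConfig();
// constructed === 1 and c1 === c2: one instance, created lazily.
```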
Objects/classes are complex. Composing them is not simple: it means composing every single method. In practice, composition is handled as a special case of class derivation, interface implementation, or wrapper-class implementation, i.e. you have to write another class. OOP gave up on this issue from the beginning. There is no point in building a framework to create classes by composing other classes, because you would have to control how each individual method composes with its counterpart. Say farewell to your neat composition expressions; the granularity is not there. Just write another class instead.

Yes, Holub mentions that C folks had to make do with what they had, but those implementations, despite their differences, would point to one common pattern or another (or were done similarly enough times to constitute a pattern).
> bug reports against your programming language
Yes and no. I daresay no programming language can expect to be a universal one and still have the same ergonomics as a purposeful one or even a DSL.
At the same time, with so many amazing languages out now, new language authors may see adopters clamor for features seen in those contemporary ones.
> Macros... Lisp... Clojure
Yes, definitely have tooling for boilerplate. But that also doesn't mean the pattern couldn't have been implemented differently, as yet another idiom, or written in a way that isn't idiomatic according to one group or another.
Thank you for the references.
We've only been speaking in terms of single languages, though. Have you combined different programs together before and thought, "Hmm, this is kind of like a pattern for..."?
That would speak to the universality of patterns.
It's metacircular all the way down. In Forth or Tcl, every command can be its own program. Deploying over many machines has similar dynamics to multithreading on one etc. The same concepts, notation etc. apply - you abstract over them and voila. Better primitive sets make things manageable, or don't.
You have reversed the point of patterns, to be a tool of thought and something to aim for. Instead, they are something you notice by a certain outcome, which guides you when you want that result again.
> Lisp ... tooling for boilerplate
That's where you've missed it. This isn't some functional propaganda; there are many paradigms, and OOP and functional are just single ones. You are thinking in terms of boilerplate when these other paradigms just don't have any of it. You can even do OOP without the boilerplate; it's incidental to your tools, and Common Lisp with CLOS is an OOP language.

The patterns in Lisp architecture are about fundamental issues of domain modeling, how to structure teams and organizations, and managing who should implement what. But you can jump even higher in scope, model that in code, and "compile your company". As the code executes, at some points it will ask for user input, having an accountant do so-and-so action or asking a committee to assemble for something else. My company works this way.
-------
> no programming language can expect to be a universal one and still have the same ergonomics as a purposeful one or even a DSL
Yes and no. Yes, overfitting a tool to the current problem space restricts it, but DSLs can have the exact same ergonomics as any other language. Cf. Language Oriented Programming: https://beautifulracket.com/appendix/why-lop-why-racket.html
Agreed. Dr. Samek says as much too and presents an object-oriented version of C, C+, in Practical Statecharts in C/C++.
Thank you for your replies. If you happen to see this comment, I wanted to ask about reification.
The pattern is the idea, the design reifies--"makes real"--the pattern. Something else--whether code, constellation of programs, or pigeons--implements the design.
There can be many reifications of a pattern, and there can be many implementations of a design.
Do those ideas ring true?
> compile your company
I've seen this done. Human-readable instructions coexist with SQL snippets and scripts. However, once it becomes tribal knowledge, folks may not understand it.
If the manual approvals had also been processed through a pipeline, instead of emails and "stop and start," it might still be in use.
Do objects handle everything? Of course not, but having them as a capability at hand allows some neat tricks and tidy code.
I believe every problem needs a different mix of capabilities/features, and that particular mix directs my choice of which tools to use. This mindset has always served me well, but who knows, maybe I'm the dumbest one in the room.
I also write my templated queries myself and talk with the server directly. No need to go "FizzBuzz Enterprise Edition" for a simple task.
In your case, if the data doesn't fit into neat boxes, it isn't forced into them in the first place. I select tools/features according to the problem at hand, rather than trying to fit what I have onto the problem. If that requires learning a new language, I'm down with that too. This is why I learnt Go, for example.
Sometimes the design gets stretched to its limits. When that happens, I stop, refactor, and continue development. I call this the outgrow/expand model.
It's not the fastest way to develop software, but it consistently gives me the simplest architecture to implement the requirements, with some wiggle room for the future.
For example, the latest tool I have written broke the design I made at the beginning, because I outgrew what I designed (i.e. added more functionality than I anticipated). Now I'm refactoring it, and I will continue adding things after the refactor.
Every tool and every iteration brings lessons learnt, which are stored in a knowledge base.
What I am saying is that objects are not rows, and we sometimes try to force the object model onto a data schema that is ultimately richer than a chain of objects.
Which is not using the most appropriate tool for the job, per your example.
An ORM is _fine_ for stuff that has a fairly standard shape, like blog posts, user accounts, things like that. Lots of relational questions against persisted data end up not fitting those exact shapes and patterns, yet folks generally reach for the ORM anyway.
Again, that's a bedroom thought; maybe people do this with category theory in a Haskell library, or OCaml modules. I'm just not aware of it.
[0] Then there are monadic types to embed a secondary type but that seems too restrictive still.
There are plenty of examples where you want to use an abstraction over various immutable record types. Services vs. records is a false dichotomy and there is power in mixing the two.
Yes, there are lots of functions that don't make sense to co-locate with their operands. Static classes are fine for those functions if both of these are true: the function requires no dependencies, and there is only one reasonable implementation of the function. In practice I find it rare that both are true. With a good Dependency Injection system (and bad ones abound), requesting a service instance is just as simple as referencing a static class.
There's probably a deeper question: how to make objects 'transposable' in the general case (like going from row-based to column-based representation) without duplicating code or exposing internals?
But you're right that it's commonly nice to operate on sets of things instead of individual things. If I were offsetting millions of points repeatedly, I'd look hard for a good data structure to optimize for whatever I'm trying to do.
"Exposing Internals" is not really the big issue here; the big issue is resilience against change. The time when it's appropriate to finely optimize CPU cycles is long after that code has settled into a very durable form. It's just that, for most systems, there's a lot more time spent in the volatile stage than the durable stage. Get your Big-O right and don't chatter over networks and you won't need to worry about performance most of the time. It's much rarer that you don't have to worry about change.
DI in .NET is very good, and you can access an object with ease through DI. Still, why use it? It's another layer between the caller and the callee. Creating objects and resolving them through DI takes some CPU cycles without any major added benefit, and your code becomes more complex, so more things can go wrong.
> there is only one reasonable implementation of the function
This. In the absence of polymorphism, a static function is just fine. I am in a phase of avoiding object notation in preference of functional notation whenever possible. I leave OO notation to cases where, for example, there is polymorphism and the functional wrapper would add little to no value.
May I ask for one or two examples?
For a more complex example, let's consider mixing colors.
Say we have red = ColorRGB(196, 0, 0) and blue = ColorRGB(0, 0, 196) and our current task is "allow colors to be mixed". Which looks better:
purple = red.MixWith(blue) or
purple = Color.Mix(red, blue) ?
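A rough TypeScript sketch of the second shape (the channel-averaging formula is just a placeholder, not a claim about real color blending):

```typescript
type ColorRGB = Readonly<{ r: number; g: number; b: number }>;

// Mixing lives alongside other mixing operations rather than on
// the color itself, so new operations don't grow the Color type.
const ColorMixing = {
  mix: (a: ColorRGB, b: ColorRGB): ColorRGB => ({
    r: Math.round((a.r + b.r) / 2),
    g: Math.round((a.g + b.g) / 2),
    b: Math.round((a.b + b.b) / 2),
  }),
};

const red: ColorRGB = { r: 196, g: 0, b: 0 };
const blue: ColorRGB = { r: 0, g: 0, b: 196 };
const purple = ColorMixing.mix(red, blue); // { r: 98, g: 0, b: 98 }
```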
There are many different ways to mix colors, and many other things you might want to do with them too. When you start getting into things like red.MixHardLight(blue) vs.
ColorMixing.HardLight(red, blue)
The advantage of the latter becomes clearer. And it naturally extends into moving your mixing into an interface instead of a static class, so you can have "ColorMixer.Mix(red, blue)", etc. It's about focusing more on the operation you're performing than on the nouns you're performing it on, which tends to be a cleaner way to think about things. There's just a lot more variety in "different operations you can do with things" than in "types of things", at least in the kind of software development I've experienced.

Lasagna is when you have your software organized in layers. In other words, instead of having a big ball of mud where A calls B calls C calls A calls C calls B, you have layers (like in lasagna) so that A calls B calls C, and you keep your code base so that classes/modules/types in a lower layer do not depend on or know about anything above them; the dependencies only go one way.
I love lasagna. It's great (both as design and as food) !
https://stackoverflow.com/questions/2052017/ravioli-code-why... https://en.wikipedia.org/wiki/Spaghetti_code#Ravioli_code
A related principle that I don't think is talked about enough is "locality": I'd rather have all the code about one feature in one file or close together, rather than it strewn across files where it's harder to read and understand as a whole. Traditional Java was notorious for being the opposite of this. Traditional HTML+CSS+JavaScript is also very bad for this problem.
> Traditional Java was notorious for being the opposite of this.
Is modern Java better? If so, why? Also, what languages do it better than Java, and why? Any as old as Java that do it better?

I have this issue with Java, Kotlin, Erlang, and Elixir, but especially Java and Kotlin.
> because in OOP land
Speaking on behalf of enterprise CRUD devs (the vast majority of programmers), the Java POJO (pure data struct) is alive and well.

Unfortunately YES: because of the "Java Enterprise" style pushed by consultancies 15 years ago, a lot of developers insist on encapsulating everything, even when it's redundant.
Fortunately, Java is a better language today.
> a lot of developers insist on encapsulating everything, even when it's redundant.
Can you give an example? 15 years ago was 2010, when Java 6 was the current release.

> Fortunately, Java is a better language today.
In what ways? To me, it has barely changed since JDK 8, when it got lambdas. To be clear, the JVM gets leaps and bounds better with each long-term support release.

Independently, none of these is game-changing, but together they provide much improved ergonomics and can reduce boilerplate. Now if only we had null-safe types <3.
Run a copy of the server in a container, run client E2E tests against it.
The mock itself may simply be a new type defined in your test case.
You certainly would want to use an `interface` and that means you need an object. It could be an object that has no fields though and receives all data through its methods.
But it does go against the spirit of objects: You want to make use of `this` because you are in the land of nouns.
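A minimal sketch in TypeScript of such a fieldless in-test implementation (the `Clock` interface, `isExpired`, and the fixed timestamp are all invented for illustration):

```typescript
// The consumer declares the capability it needs as an interface.
interface Clock {
  now(): number;
}

function isExpired(deadline: number, clock: Clock): boolean {
  return clock.now() > deadline;
}

// In a test, the "mock" is just a tiny object satisfying the
// interface: no fields, no framework, all data via its method.
const fixedClock: Clock = { now: () => 1000 };

const expired = isExpired(500, fixedClock);  // true
const pending = isExpired(2000, fixedClock); // false
```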
Or imagine those functions are part of the immutable record and create new instances. The aspect of (im)mutability is orthogonal to where you place your logic. In the context of domain models, if the logic is an inherent part of the type and its domain, then there are good reasons to model the logic as part of the type, and those have nothing to do with Java or the typical OOP dogma (Rust chrono: `let age = today.years_since(birthday)` - Yes, you could argue that the logic is part of the trait implementation, but the struct's data is still encapsulated and correct usage of the type is enforced. There is only one way to achieve this in Java.)
> we [snip] use encapsulation." "Yes, but why?"
Simple: because when your program gets large you cannot know everything needed to modify the code. You need places where you can say "something really complex happens here and I must trust it is right" (not to be confused with knowing that it is right; only that right or wrong is irrelevant at that point).
OOP is a great answer to the problems of large complex programs. It is not the only answer. It sometimes is the best and sometimes not. Applying it like a religion is wrong.
Why not, again?
The larger cost is maintenance. Every time you want to change/add/remove something, you now need to touch not only every place that uses the thing in question, but also all the tests. That is, by mocking, you are tightly coupling your tests to how your code is implemented instead of to what the code should do. If there is more than one implementation this is okay, since the code can do different things in different situations anyway; but if there is only one, you are adding coupling that isn't useful.
If you have never worked with non-trivial code that is more than 10 years old, the above won't make sense. However, as code ages you will discover that the original beautiful architecture ideas were always wrong for some need that arose since, and so you have had to change things. Mocks make all those changes much harder.
> mocks/fakes do not count
Why not?
> In general if there is only one implementation just use it.
This is good advice for people who don't understand what interfaces are for -- which seems to be most people. If you make the mistake of believing that interfaces are for describing the commonalities between similar implementations, then demanding an IFoo for every Foo is indeed a waste. Generally if I see literally the same word twice, but one has an "I" in front of it, it's a flag that the interface might not need to exist. I think this is the advice you're giving here.

But you've missed the other important point: interfaces are not "about" their implementations, they're "about" their consumers. The point of an interface is for the consumer to define exactly what it needs to do its job. That's why interfaces can be useful even if there are 0 implementations: it's still a clear definition of what some service or function requires. Assuming that service isn't private/internal, it's a hook that a consumer of your module can implement if they choose.
I'd say the frequency of the following patterns is correlated with different levels of code quality -- sometimes causally, sometimes spuriously.
1. StringWarbler : IStringWarbler (worst)
2. StringWarbler (fine)
3. StringWarbler : IWarbler (maybe worse or better, but a bit smelly)
4. StringWarbler : ITextProcessor (good)
Yes, #2 is fine if all you need to do is warble strings and there's really only one thing that means. That's common and it's fine. #3 is suspect because it looks like it was abstracted along the wrong line: it looks like there were a lot of different "ThingWarblers" and someone tried to define what was similar about them, without really focusing on the consumers of these services. Whereas with #4, it looks like someone identified a need to process text in various ways, and one way to do that is to warble it. It sounds like the interface was created to describe a need, not an implementation. When you do it this way, you start to see classes that implement two or three interfaces as well.

When I see #2 everywhere, I don't think "these classes need to be implementing more interfaces!". Instead I think "the services relying on these classes are probably not doing a good job specifying exactly what they need to do their job, and are therefore brittle against changes in how those needs are fulfilled." Whether that's a problem depends on the exact situation. I feel like your advice is "prefer #2 over #3", which I generally agree with. But the better advice is "ask for what you need to do your job, in the most general form you can", which #2 (in excess) violates.
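A small TypeScript sketch of #4's spirit, where the interface names the consumer's need rather than any one implementation (all names are illustrative):

```typescript
// The interface describes a need ("process text"), not the shape
// of any particular implementation.
interface TextProcessor {
  process(input: string): string;
}

// Unrelated implementations can satisfy the same need.
const upperCaser: TextProcessor = { process: (s) => s.toUpperCase() };
const trimmer: TextProcessor = { process: (s) => s.trim() };

// The consumer asks for what it needs, in the most general form.
function runPipeline(input: string, steps: TextProcessor[]): string {
  return steps.reduce((acc, step) => step.process(acc), input);
}

const out = runPipeline("  hello  ", [trimmer, upperCaser]); // "HELLO"
```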
Can you elaborate? Types don’t contain data in any way that I understand… Do you mean constrain?
Types are just a layer on top of your code that help enforce certain logical / mathematical properties of your code; those that your particular type system enables you to constrain.
Most actual type systems are not powerful enough to allow you to fully specify the logical objects you are actually working with, but thinking about exactly what they are and how to constrain them as well as you practically can (or should) is in my experience one of the important skills of an advanced programmer.
By the way, if you know what I mean and have found a good way to effectively teach this way of thinking, let me know, because I have often been unsuccessful. I see people left and right thinking about code as a procedure that will with trial and error ultimately be made to work, rather than as an implementation of logical / mathematical objects that can be correct or not.
I don't know if I really agree with that. It seems to presume the default form of data is a map<any,any> and you constrain that down to a specific set of fields. (Or similar logic over void pointer spaghetti.) If you build up a record type from scratch you're doing something quite different. Adding a field where there used to be nothing, no way to store data there in an instance, is not what I would call a constraint.
The form of data in a computer is simply a sequence of binary information. It is up to you as a programmer to decide what it represents.
In some cases, you can use a type system to have that automatically enforced in part.
It's important to realize that the types you can use in practice as part of a formal type system are very rarely correct in the sense of fully describing what some data represents.
For example, if a string of bytes represents a HTTP request, then "valid HTTP request bytes" is the correct type, but the type known to the compiler might just be string of bytes.
Similarly, if you represent a graph by a hash table of int to list of int (with the ints labeling nodes), then the type of that is a map from int to list of int, where the int values in the latter must exist as keys as well - but the type system might not actually know that.
In practice, there will therefore often be functions that validate the contents of the data - these are basically manual run time type checking functions.
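TypeScript can at least record the result of such a validation function as a user-defined type guard. A sketch (the HTTP-method union is a deliberately simplified example):

```typescript
// The "true" type is narrower than what the compiler sees (string).
type HttpMethod = "GET" | "POST" | "PUT" | "DELETE";

// A manual run-time type check, surfaced to the type system.
function isHttpMethod(s: string): s is HttpMethod {
  return s === "GET" || s === "POST" || s === "PUT" || s === "DELETE";
}

const raw: string = "POST"; // e.g. freshly parsed from bytes
let method: HttpMethod | null = null;
if (isHttpMethod(raw)) {
  // Inside this branch the compiler treats `raw` as HttpMethod.
  method = raw;
}
```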
Let's say I'm making a new record type with a .firstname field.
And let's say I'm not declaring what data type is stored in the field. Zero description or constraint is happening there.
The .firstname field itself was unrepresentable before I added it. There is no previously existing concept of accessing a .firstname field that I am constraining.
In this scenario, my language doesn't have access to raw memory, and even once I create this record type I don't know how it's going to be stored in memory. So I'm not constraining raw pointers into organized records because there are no pointers at all. I don't have raw byte access like in your HTTP example.
In that scenario, what am I constraining? What is the unconstrained version?
I like thinking of types as a syntactical layer, just like in natural language. Without knowing the meaning of words, we can show that certain compositions of verbs, nouns, etc. can a priori never lead to an intelligible sentence. So trying to actually turn the sentence into a meaning or reading it (compiling/running) is useless.
But then again I was trained as a linguist so…
Excel is a pure functional programming language, BTW.
IME, people don't actually “reason better with objects”, either.
Every darn little thing in Java needs a name.
If there is no good name, that's a hint that maybe you don't need a new type.
Obligatory Clojure example:
(defn full-name [{:keys [first-name last-name]}]
(str first-name " " last-name))
This defines a function named `full-name`.
The stuff between [] is the argument list. There's a single argument. The argument has no name.
Instead it is using destructuring to access keys `:first-name` and `:last-name` of a map passed in (so the type of the unnamed argument is just Map).

This function works for anything that has the keys `:first-name` and `:last-name`.
There's no need to declare a type ObjectWithFirstNameAndLastName. It would be quite silly.
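For comparison, TypeScript gets close to the same thing with an anonymous structural type; no named `ObjectWithFirstNameAndLastName` required (a sketch mirroring the Clojure version):

```typescript
// The parameter type is anonymous and structural: anything carrying
// these two string fields works, regardless of what else it has.
function fullName(
  { firstName, lastName }: { firstName: string; lastName: string }
): string {
  return `${firstName} ${lastName}`;
}

// A richer object still satisfies the shape.
const person = { firstName: "Ada", lastName: "Lovelace", email: "ada@example.com" };
const n = fullName(person); // "Ada Lovelace"
```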
Don't all types need names, regardless of what language you use?
Look at typescript, and how it supports structural typing. They don't seem to have a problem with names. Why do you think Java has that problem when nominal type systems simplify the problem?
> There's no need to declare a type ObjectWithFirstNameAndLastName. It would be quite silly.
Naming things is hard, but don't criticize typing for a problem caused by your lack of imagination. A basic fallback strategy to name specialized types is to add adjectives. Instead of ObjectWithFirstNameAndLastName you could have NamedObject. You don't need to overthink it, with namespaces and local contexts making sure conflicts don't happen.
There are two mindsets: try to work around problems to reach your goals, and try to come up with any problem to find excuses to not reach your goals. Complaining about naming sounds a lot like the second.
No.
- In very dynamic languages (like javascript), most types arguably don't have names at all. For example, I can make a function to add 2d vectors together. Even though I can use 2d vectors in my program, there doesn't have to be a 2d vector type. (Eg, const vecAdd = (a, b) => ({x: a.x+b.x, y: a.y+b.y}) ).
- Most modern languages have tuples. And tuples are usually anonymous. For example, in rust I could pass around 2d vectors by simply using tuples of (f64, f64). I can even give my implicit vector type functions via the trait system.
- In typescript you can have whole struct definitions be anonymous if you want to. Eg: const MyComponent(props: {x: number, y: string, ...}) {...}.
- There's also lots of types in languages like typescript and rust which are unfortunately impossible to name. For example, if I have this code:
#[derive(Eq, PartialEq)]
enum Color { Red, Green, Blue }
fn foo(c: Color) {
if c == Color::Red { return; }
// What is the type of 'c' here?
}
Arguably, c is a Color object. But actually, c must be either Color::Green or Color::Blue. The compiler understands this and uses it in lots of little ways. But unfortunately we can't actually name the restricted type in the program.

Rust can do the same thing with integers, even though (weirdly) it has no way to name an integer in a restricted range. For example, in this code the compiler knows that y must be less than 256, so the if statement is always false, and it skips the if statement entirely:
https://rust.godbolt.org/z/3nTrabnYz
But it's impossible to write a function that takes as input an integer that must be within some arbitrary range.
I think that's less a question whether you can, but rather whether you should or shouldn't design it that way... (I'll use TypeScript here for the simpler syntax)
It'd be perfectly fine to do something like this:
type ColorR = 'red';
type ColorG = 'green';
type ColorB = 'blue';
type ColorRGB = ColorR | ColorG | ColorB;
But which constraint should your new type ColorGB (your variable c) adhere to?

// constraint A
type ColorGB = ColorG | ColorB;
// constraint B
type ColorGB = Exclude<ColorRGB, ColorR>;
I'd argue that if the type ColorGB is only needed in the form derived from ColorRGB within a single scope, then just let the compiler do its control-flow analysis; yes, it'll infer the type as constraint B.

But if you really need to reuse the type ColorGB (probably some categorization other than all the colors), then you'd need to pay close attention to your designed constraint.
Say I have some enum like ColorRGB here. In some contexts, only a limited subset of variants are valid - say, ColorGB. There's a few ways to code this - but they're all - in different ways - horrible:
1. Use ColorRGB in all contexts. Use asserts or something to verify that the value is one of the expected variants. This fails at expressing what I want in the type system - and the code is longer, slower and more error prone as a result.
2. Have two enums, ColorRGB and ColorGB. ColorRGB is defined as enum ColorRGB { Red, Restricted(ColorGB) }. This lets me encode the constraint - since I can use ColorGB explicitly. But it makes Color harder to use - since I need to match out the inner value all over the place.
3. Have two enums, ColorRGB and ColorGB which both have variants for Green and Blue. Implement conversion methods (impl From) between the two types. Now I have two types instead of one. I have conversions between them. And I'll probably end up with duplicate methods & trait impls for ColorRGB and ColorGB.
Luckily this doesn't come up that often. But - as your typescript example shows - it can just be expressed directly in the type system. And LLVM already tracks which variants are possible throughout a function for optimisations' sake. I wish rust had a way to express something like Exclude<ColorRGB, ColorR>.
[0]: https://www.typescriptlang.org/play/?#code/C4TwDgpgBAogdgVwL... from StackOverflow[1]
[1]: https://stackoverflow.com/questions/39494689/is-it-possible-...
I think it's more useful in a language like Rust, because the compiler can use that information to better optimize the emitted assembly.
It starts with the question of how I would like to design my sequences:
- Is it the range (0, 10] in steps of 1?
- Is it the range (0, 2^10) in steps by the power of 2?
- Is it the range (0.0, 2PI] in steps of EPSILON?
How would the compiler engineer generalize this optimization?
And the question would continue whether I'd be really able to define them that precisely before runtime. Most applications are just dealing with lightweight structures, mostly serialized JSON nowadays, and then even there are enough fuck-ups[0] where such an optimization wouldn't help at all.
I can imagine the places where they really matter are some heavy memory intensive data structures like deep neural networks, graphics or video and the like - for the time being they're just dealing with tensors of data type floatX, and that seems to be fine AFAIK.
I mean, it'd be really nice if the smaller memory footprint came out of the box during compilation. But all the CLI tools written in Rust certainly don't have a use case that justifies putting this complication on the shoulders of compiler research.
[0]: https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...
It would be better if the type system could encode types like this directly. Better for ergonomics and better for optimisation.
This is a quote from Alan J. Perlis, not Rich Hickey, and it's certainly not what Rich is famous for.
To the point that many think the famous patterns book used Java, when it is all about Smalltalk and C++ patterns.
Java's problem is that it's hugely successful, and for some it's the only language they ever experience in their formative years. Thus, because a poor workman always blames his tools, Java becomes the root of all evil.
People complain about this all the time, but I'd rather take a verbose name than not know what is going on. Sometimes these naming conventions help.
Digging into older poorly named, structured and documented codebases is not fun.
Good comments are better than good function names.
No. Your statements are fundamentally wrong at many levels. Factories are tools to instantiate objects of a specific type a certain way. They are constructors that don't suffer from the limitations of constructors. A factory does not impose any requirement on object life cycle.
On the other hand, it may do something with the object it's instantiating before passing it back. Such as putting it into global scope, or tying it to something outside its chain. Factories that sit outside the class file are often prone to all kinds of grotesque later mucking-with that causes the instantiated object to bind to other scope, and never be properly garbage collected. That's what I'm talking about.
Build better constructors and you won't usually need factories. If you need factories as an API, build them next to the constructor.
It sounds like you're confusing things. A factory is a way to instantiate objects of a specific type a certain way. A singleton is just a restriction on how many instances of a type there can be.
These are not Java concepts, nor are they relevant to the topic of declaring and naming types.
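A minimal Python sketch of the distinction drawn above (the class names and URL format here are illustrative, not from the thread): a factory is just an alternative way to construct, while a singleton is a restriction on instance count. The two are orthogonal.

```python
class Connection:
    def __init__(self, host: str, port: int) -> None:
        self.host, self.port = host, port

    # A factory: an alternative constructor encoding "a certain way" of
    # building the object. It says nothing about the object's lifetime.
    @classmethod
    def from_url(cls, url: str) -> "Connection":
        host, _, port = url.partition(":")
        return cls(host, int(port or "5432"))

class Pool:
    # A singleton restriction: limits HOW MANY instances exist,
    # independent of how they are constructed.
    _instance = None

    @classmethod
    def instance(cls) -> "Pool":
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

conn = Connection.from_url("db.example.com:5432")
assert (conn.host, conn.port) == ("db.example.com", 5432)
assert Pool.instance() is Pool.instance()  # always the same object
```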
Because the types in typescript don't need names. And the type "object with firstName and lastName" is one such type that doesn't need a name.
So:
> They don't seem to have a problem with names.
Yes. The problem is much smaller there, and mostly caused by programmer cultures that insist on naming everything.
Here's a simple example: https://github.com/dharmab/skyeye/blob/main/pkg/bearings/bea...
Sufficiently flexible namespaces do solve most of these problems. Java is kind of perverse though.
Map m = new HashMap() {{ System.out.println("I am a unique subclass of HashMap with a single instance!"); }};
fullName = map (\(firstName, lastName) -> firstName + " " + lastName) list
and type it as `fullName : (String, String)[] -> String`. I have worked on large-scale systems in both typed and untyped languages and I cannot emphasize strongly enough how important types are.
But one thing is certain: when you have that one function that is used 165 times throughout the code base, having a type checker is certainly going to help you when you add in the user's middle initial.
In an ideal world :)
In the real world the customer doesn't know what they want and you can't fully guess what they want or need ahead of time no matter how many diagrams you draw.
Incidentally, one of the few good things that came out of the "agile" religion.
And the exact point I tried to communicate.
When you decide to have loose architectural structures, you might allow just writing these types of functions ad hoc.
The further you go in the project, the more you strengthen the architecture, where needed.
Duck typing is great; what's even better is documenting when they need to quack or waddle.
class Command:
    def execute(self) -> None:
        # some implementation

and

class Prisoner:
    def execute(self) -> None:
        # some other implementation

The implementor of the Prisoner class might not want the Prisoner class to be able to be slotted in where the Command class can be slotted in. Your type checker will be of no help here. If you use abstract base classes, your type checker can prevent such mistakes. So when it comes to your own code, the drawbacks of the structural Protocols in comparison to the nominal ABCs are pretty big. The pros seem non-existent. The pro, I guess, is that you don't have to type the handful of characters "(Baseclass)" with every concrete implementation.
But they do have one major advantage: if you have third-party code that you have no control over, and there's some part of it you want to replace with your own, and there's no convenient way to do something like the adapter pattern because it's a bit deeply nested, then a Protocol is a great solution.
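The structural-vs-nominal difference described above can be sketched with `typing.Protocol` and `abc.ABC` (the `Backup` class is an illustrative stand-in for a real Command implementation):

```python
from abc import ABC, abstractmethod
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsExecute(Protocol):   # structural: anything with execute() conforms
    def execute(self) -> None: ...

class Command(ABC):                # nominal: only explicit subclasses conform
    @abstractmethod
    def execute(self) -> None: ...

class Backup(Command):
    def execute(self) -> None:
        pass

class Prisoner:                    # no relation to Command whatsoever
    def execute(self) -> None:
        pass

# Structurally, Prisoner "is" a SupportsExecute, intended or not:
assert isinstance(Prisoner(), SupportsExecute)
# Nominally, it is not a Command, so a checker can reject the mix-up:
assert not isinstance(Prisoner(), Command)
assert isinstance(Backup(), Command)
```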
Code that just works right now never scales.
There's no way to tell.
1. This (or maybe a less trivial form of this) will bite you in the ass when you end up using other people's unnamed types. Or even when you use your own unnamed types that come from code you haven't touched in three years.
2. That's what interfaces are for in Java. Or at least modern Java.
Sadly, at the beginning, many people came to Java from C/C++, and they did the thing we used to call "writing C/C++ code in Java".
Interfaces are the only thing I loved from it.
At least in Javascript you have JSDoc.
Why is it that dynamically typed languages usually develop static typing extensions (including Clojure)? Perhaps people don’t enjoy hunting down tedious spelling issues such as last-name vs family-name?
And as others replied, the issue of naming a type exists in all languages.
Abstractions have a maintenance cost associated with them, i.e. another developer (or possibly yourself) must be able to recreate the "algebra" associated with that type (your thought process) at the time of making modifications. This creates some problems:
1. Since there's no requirement to create a cohesive algebra (API), there was probably never a cohesive abstraction to begin with.
2. Requirements may have changed since the inception of the abstraction, further breaking its cohesion.
3. Since we largely practice "PR (aka change) driven development", after a few substantial repetitions of step 2, now the abstraction has morphed into something that's actually very tied into the callsites (verbs), and is essentially now tech debt (more like a bespoke rube goldberg machine than a well-designed re-usable software component).
You can introduce types if you follow the open/closed principle which means you don't change abstractions after their creation (instead create new ones and then delete old ones when they have no callsites).
The article is talking about simple "bundle of values" types; the example is the CreateSubscriptionRequest. This is not an abstraction. It is simply a declaration of all the fields that must be provided if you want to create a subscription. And it is usually superior to passing those N fields around individually.
And if you need to change abstraction at some point, then refactor.
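A minimal sketch of that kind of "bundle of values" type, using the article's CreateSubscriptionRequest name (the specific fields here are illustrative assumptions, not from the article):

```python
from dataclasses import dataclass
from typing import Optional

# A plain declaration of the fields needed to create a subscription.
# No behaviour, no abstraction: just a named shape.
@dataclass(frozen=True)
class CreateSubscriptionRequest:
    customer_id: str
    plan_id: str
    quantity: int
    coupon: Optional[str] = None

def create_subscription(req: CreateSubscriptionRequest) -> str:
    # One argument instead of four loose positional strings/ints.
    return f"{req.customer_id}:{req.plan_id}x{req.quantity}"

req = CreateSubscriptionRequest("cus_1", "plan_pro", 2)
assert create_subscription(req) == "cus_1:plan_prox2"
```

Adding a fifth field later means touching the construction sites and the consumers of that field, not every function the values pass through.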
The problem is more when you have types that "do things" and "have responsibilities" (usually to "do things with other types they hold pointers to, but do not totally own"). Such a type is very difficult to maintain because there's now:
- a boundary of its responsibilities that is subjective,
- the responsibility of building collaborators and initializing the type,
- dealing with test doubles for the collaborators.
structuring what lives where became easier, naming things became systematic and consistent, and writing unit tests became simple.
It makes for great APIs (dot-chaining from one type to another), well-defined types (parse, don’t validate) and keeps the code associated with the type.
class Profile:
...
class User:
@classmethod
def from_profile(cls, profile: Profile) -> 'User':
...
def to_profile(self) -> Profile:
...
...are about all the methods I need in my data records. Three simple rules though:
1. Keep isomorphisms to one class only: don't put two def to_${OTHER_MODEL_NAME} in each class; instead (like you said) create one static mapping (@classmethod) and one instance mapping
2. Add a mapping to the one class that feels more generalized out of the two: A more generalized data model will probably be used a lot more throughout the application
3. The creation of instances should be pure: If a mapping has side effects and needs to await something then it isn't just a mapping - first resolve all necessary dependencies, then do the mapping
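A runnable sketch of the pattern and rules above, with illustrative fields (the real Profile/User shapes would of course differ):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Profile:
    display_name: str

@dataclass(frozen=True)
class User:
    first: str
    last: str

    # Rule 1: keep the isomorphism in ONE class only.
    # Rule 2: User is the more general model, so the mappings live here.
    @classmethod
    def from_profile(cls, profile: Profile) -> "User":
        first, _, last = profile.display_name.partition(" ")
        return cls(first, last)

    def to_profile(self) -> Profile:
        # Rule 3: pure -- no I/O, no awaiting, just a mapping.
        return Profile(f"{self.first} {self.last}")

u = User.from_profile(Profile("Ada Lovelace"))
assert u == User("Ada", "Lovelace")
assert u.to_profile() == Profile("Ada Lovelace")
```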
Frontend, backend, databases, services, reports, whatever - ETL.
In that context, the types and transformations between types are the most important feature. (Everything is actually Category Theory)
I guess with general computers everything we do is basically defining nested specific computers. That's, I think, the insight behind Smalltalk and the original concept of objects it used: the objects were supposed to represent computers and the message passing was an abstract network layer.
To me, looking at it as "functional" approach instead (data in, operate over that data, data out) is cognitively simpler.
> But take it from someone that's had to deal with code passing through and returning several values of strings, ints, and bools through a series of function calls: a single struct value is much easier to work with.
This presupposes that the code should be very strict with the data, which may or may not be desirable. For example, in many CRUD apps the client and the database enforce constraints, while the middle tier just needs to marshal data between the two, and it's questionable whether the middle tier should do type enforcement. As always: challenge your assumptions, try to understand how a "bad" practice might actually be the right approach in certain contexts, etc, etc.
While types can be used for that, they are a much broader concept.
I would say the general purpose of types is to tell apples from oranges.
Having types helps the IDE help YOU! That’s my favorite part about types and a strong IDE like Webstorm or IntelliJ. I agree it’s not a substitute for proper testing though.
It’s okay, you really can have hundreds of tables, your DBMS can handle it.
Obviously don’t create them for their own sake, but there’s no reason to force reuse or generic design for different things.
Unless you have no one with any semblance of DB design skills, you usually have more friction when you want to de-normalize the DB instead.
At least, that's been my experience, but I did have the luck of mostly working with experts in DB design.
I've encountered the same phenomenon, and I too cannot explain why it happens. Some of the highest-value types are the small special-purpose types like the article's "CreateSubscriptionRequest". They make it much easier to test and maintain these kinds of code paths, like API handlers and DAO/ORM methods.
One of the things that Typescript makes easy is that you can declare a type just to describe some values you're working with, independent of where they come from. So there's no need to e.g. implement a new interface when passing in arguments; if the input conforms to the type, then it's accepted by the compiler. I suspect part of the reason for not wanting to introduce a new type in other languages like Java is the extra friction of having to wrap values in a new class that implements the interface. But even in Typescript codebases I see reluctance to declare new types. They're completely free from the caller's perspective, and they help tremendously with preventing bugs and making refactoring easier. Why are so many engineers afraid to use them? Instead the codebase is littered with functions that take six positional arguments of type string and number. It's a recipe for bugs.
I think that some languages lead developers to think of types as architecture components. The cognitive cost and actual development work required to add a type to a project is not the one-liner that we see in TypeScript. As soon as you create a new class, you have a new component that is untested and unproven to work, which then requires developers to add test coverage, which then requires them to add the necessary behavior, etc.
Before you know it, even though you started out by creating a class, you end up with 3 or 4 new files in your project and a PR that spans a dozen source files.
Alternatively, you could instead pass an existing type, or even a primitive type?
> But even in Typescript codebases I see reluctance to declare new types.
Of course. Adding types is not free of cost. You're adding cognitive load to be able to understand what that symbol means and how it can and should be used, not to mention support infrastructure like all the type guards you need to have in place to nudge the compiler to help you write things the right way. Think about it for a second: one of the main uses of types is to prevent developers from misusing specific objects if they don't meet specific requirements. Once you define a type, you need to support the happy flows and also the other flows as well. The bulk of the complexity often lies in the non-happy flows.
It's not the languages doing that; it's their company culture doing that.
Java-style languages (esp. those using nominative typing, so: Java, C#, Kotlin, Swift, but not Go, Rust, etc) have never elevated their `class` types as illuminated representations of some grandiose system architecture (...with the exception of Java's not-uncontroversial one-class-one-file requirement); consider that none of those languages make it difficult to define a simple product-type class - i.e. a "POCO/POJO DTO". (I'll pre-empt anyone thinking of invoking Java's `java.beans.Bean` as evidence of the language leading to over-thinking architecture: the Bean class is not part of the Java language any more than the MS Office COM lib is part of VB).
The counter-argument is straightforward: reach for your GoF Design Patterns book, leaf through to any example and see how new types, used for a single thing, are declared left, right and centre. There's certainly nothing "architectural" about defining an adapter-class or writing a 10-line factory.
...so if anyone does actually think like that, I assume they're misremembering some throwaway advice that maybe applied to a single project they did 20 years ago - and perhaps the company doesn't have a meritocratic vertical-promotion policy and doesn't tolerate subordinates challenging diktats from the top.
> Think about it for a second: one of the main uses of types is to prevent developers from misusing specific objects if they don't meet specific requirements.
...what you're saying here only applies to languages like TypeScript or Python-with-hints - where "objects" are not instances-of-classes, but even then the term "type" means a lot more than just a kind-of static precondition constraint on a function parameter.
The current Typescript hype / trend is to infer types.
Problem is, at some point it slows things down to a crawl and can get really confusing. Instead of a type mismatch between type A and type B, you get an error report that looks like a huge JSON chain.
But in many more modern languages, a "new type" is something the equivalent of
type MyNewType string
or data PrimaryColor = Red | Green | Blue
and if that's all your language requires, you really shouldn't be afraid of creating new types. With such a small initial investment it doesn't take much for them to turn net positive. You may need more, but I don't mind paying more to get more. I mind paying more just to tread water.
And I find they tend to very naturally accrete methods/functions (whatever the local thing is) that work on those types that pushes them even more positive fairly quickly. Plus if you've got a language with a halfway modern concept of source documentation you get a nice new thing you can document.
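For comparison, the same two one-liner shapes are also cheap in Python (a sketch using the stdlib `typing.NewType` and `enum.Enum`; the names are illustrative):

```python
from enum import Enum
from typing import NewType

# The alias-style new type, comparable to `type MyNewType string`:
UserId = NewType("UserId", str)

# The sum-type analogue of `data PrimaryColor = Red | Green | Blue`:
class PrimaryColor(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

uid = UserId("u-42")
assert uid == "u-42"                  # at runtime it's still just a str,
                                      # but mypy treats UserId as distinct
assert PrimaryColor.RED.name == "RED"
```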
For this reason, I very much appreciate the dataclass decorator. I notice that I define classes more often since I started using it, so I'm sure that boilerplate is part of the issue.
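To make the boilerplate point concrete: without the decorator you'd hand-write `__init__`, `__repr__`, and `__eq__`; with it, a new record type is three lines (illustrative example):

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)
assert p == Point(1.0, 2.0)               # __eq__ generated for free
assert repr(p) == "Point(x=1.0, y=2.0)"   # __repr__ generated for free
```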
You don't even need a modern language for that kind of thing, plenty of languages from a half century or so ago also let you do that. From Ada (40+ years old):
type PrimaryColor is (Red, Green, Blue);
Or if you're content with a mere alias, C (50+ years old) for your first example: typedef char* MyNewType;

C has its own problem. First, typedef isn't a new type, just an alias. But C's problem isn't the ceremony so much as the global namespace. Declaring a new type was nominally easy, but it had to have a crappy name, and you paid for that crappy name on every single use, with every function built for it, and so forth. You couldn't afford to just declare a new "IP" type because who knows what that would conflict with. A new type spent a valuable resource in C: a namespace entry. Fortunately, modern languages make namespaces cheap too.
At some point I needed to change the types to capture a byte range from a buffer rather than just referring to the base+offset and length, and it was trivial to make that change and have it “just work”.
These were no vtable classes with inline methods, within a single compilation unit - so they just poof go away in a stripped binary.
‘Tis better to create a class, than never to have class at all. Or curse the darkness.
Another similar approach is building a "digital twin" of something that is happening in the real world.
But very often the software is the only implementation, there's nothing to simulate.
People expect a type to contain some logic, but it doesn't have to. E.g. a configuration is a type that contains other types that contain yet other types. But I have never seen it done like that in languages like Java.
assembler : everything is a word
C : everything is an array of bytes
fortran/APL/matlab/octave : everything is a multi-dimensional array of floats
lua : everything is a table
tcl : everything is a string
unix : everything is a file
In some of these languages there are other types, OK, but it helps to treat these objects as awkward deviations from the appropriate thing, and to feel a bit guilty when you use them (e.g., strings in fortran).
The types certainly exist. They're in my mind and, increasingly through naming conventions, embedded within some of the comments of my assembler code. But nothing is there to check me. Nothing can catch if I have made an error, and accessed a pointer to a data structure which contains a different type than I thought it did. Without a type system, that error is silent. It may even appear to work! Until 6 months later, when I rearrange my code and the types are arranged differently in memory, and only THEN does it crash.
The original goal of Hungarian notation :) But Simonyi's paper used 'type' ambiguously, and we ended up with llpcmstrzVariableName instead of int mmWidth vs int pixelWidth, which was what they were doing in Office and frankly makes a lot of sense.
You can't add a number to a string, only to another number.
If you are dealing with a float, you better be careful how you check it for equality.
If it's pure binary, what kind of byte is it? Ascii, unicode code point, unsigned byte, signed multi-byte int, ... whatever.
There's no escaping the details, friend.
And your saying "everything is a word" for assembler is just plain wrong.
Works in C, as long as the integer keeps the resulting pointer within the bounds of the allocation. See a trivial example[1].
So, I ask: what size and signedness of int? 1, 2, 4, 8? What if the string is of length 3, 2, 1, 0?
Why bother with all those corner cases. Everything has a memory layout and appropriate semantics of representation and modification. Pushing those definitions is a recipe for problems.
I like to keep it simple, keeping the semantics simple in how I code specific kinds of transforms.
The less kinds of techniques you use, the less kinds of patterns you have to develop, test, and ensure consistent application of across a codebase.
Especially down in C land, which is effectively assembler.
Gone are the days of Carmack having to save bytes in Doom, unless you're doing embedded work, in which case that's all the more reason to be very careful how you handle those bytes.
That's entirely how "string" indexing works in C. Strings in C are just pointers to `char` with some valid allocation size. As long as the integer used for the pointer offset results in a pointer into the allocation after the addition, it's valid to dereference the result. Remember, `array[index]` is syntactic sugar for `*(array + index)` in C. Lots of the C stdlib string functions use this, e.g. `char *strchr(const char *str, int character)` has a naive implementation as a simple loop comparing `char`s[1]. Glibc does it one `unsigned long int` at a time, as an optimization, with some extra work at the start going one `char` at a time to ensure alignment.
> So, I ask: what size and signedness of int? 1, 2, 4, 8?
Doesn't matter, as long as the result of the addition points to within the pointer's allocation. Otherwise you get UB as usual.
> What if the string is of length 3, 2, 1, 0?
Doesn't matter, as long as the result of the addition points to within the pointer's allocation. Otherwise you get UB as usual. For a 0-length string (pointer to '\0'), the only valid value to add is 0.
> The less kinds of techniques you use, the less kinds of patterns you have to develop, test, and ensure consistent application of across a codebase.
100% agreed. The less C you use for string handling the better. C strings are fundamentally fragile.
Any time the microprocessor accesses memory for use as an int, it's a specific kind of int, meaning size and signedness, and the flags are adjusted properly as per the operation performed.
> Strings in C are just pointers to `char` with
I'm gonna end this here. I taught myself C programming by reading K&R in the late 80s, and then proceeded to do so professionally for YEARS and YEARS.
There are people that know, and there are people that act like they know. You ever read the first two chapters of Windows Internals? You ever write C code that could make Windows system calls from the same program that could be 32- or 64-bit with a simple compiler flag?
I have.
> C strings are fundamentally fragile.
Not if you know what you're doing. You're almost certainly using a C program to type this response in an operating system largely written in C. You get any segfaults lately? I don't EVER on either my Ubuntu or Debian systems.
Thanks for playing.
> Any time the microprocessor accesses memory for use as an int, it's a specific kind of int, meaning size and signedness, and the flags are adjusted properly as per the operation performed.
Sure. But the C standard specifies how addition of a pointer to an integer works in section 6.5.7, particularly paragraph 9. The specifics of what flags get set & the width of integer used are up to the implementation & the programmer, but
> For addition, either both operands shall have arithmetic type, or one operand shall be a pointer to a complete object type and the other shall have integer type. (Incrementing is equivalent to adding 1.)
should be a pretty clear statement that pointer + integer is valid!
> > Strings in C are just pointers to `char` with
> I'm gonna end this here. I taught myself C programming by reading K&R in the late 80s, and then proceeded to do so professionally for YEARS and YEARS.
> There are people that know, and there are people that act like they know. You ever read the first two chapters of Windows Internals? You ever write C code that could make Windows system calls from the same program that could be 32- or 64-bit with a simple compiler flag?
> I have.
I'm an embedded C developer. I've been writing C for decades, but not for windows. But I do write code that can work on both 8-bit and 32-bit systems with just a compiler flag. Strings are arrays of character type with a null terminator, and array-to-pointer decay works as usual with them.
>> C strings are fundamentally fragile.
> Not if you know what you're doing. You're almost certainly using a C program to type this response in an operating system largely written in C. You get any segfaults lately? I don't EVER on either my Ubuntu or Debian systems.
> Thanks for playing.
C strings are arrays of character type with a null terminator. That is fundamentally fragile, since it includes no information about the encoding or length of the string, and thus allows invalid states to be represented. That doesn't mean you will get segfaults, only that it's possible for someone to screw up & interpret your UTF-8 data as ASCII or write a `\0` in the middle of a string or other such mistake, and you'll get no protection from the type system.
I also love x87 registers, but they are becoming rarer these days.
I learned C from reading K&R in the late 80s.
Every C compiler I've worked with could output the code as assembler, so C is really a thin layer of abstraction that wraps assembler. Having programmed in pure assembler before, I understand the benefits of C's abstractions, which began with its minimal, but helpful, type system.
Should I not be taking you seriously?
We are not just talking with each other but sharing our expertise with those who may be reading.
Sometimes I forget that other people can just be unpleasant on purpose. I find no other explanation for your response.
It's just that my post rate is severely limited, even to reply to people replying to me.
I'm on some kind of naughty list in "the algorithm", which is something beyond dang's control, by what I gather from his reproaches to me.
That's why I've got to minimize my number of replies.
And 'harmless banter' doesn't communicate in pure text, friend.
Peace be with you.
Haven't written JavaScript?
And Javascript is garbage no matter how many people use it successfully, as I have done professionally.
The article's perspective would be that structs are useful, so use them liberally. And nearly all good, large C programs do, as far as I can tell.
Of course there are tradeoffs and you can take it too far. The article mentions that as well.
Types give me a language enforced way to track what the data really means. For small programs I don't care, but types are one of the most powerful tricks needed for thousands of developers to work on a program with millions of lines of code.
A human mind is a cache -- if you overload it, something will fly out and you won't even notice. Anyone who claims that types have no use probably doesn't experience overloads. If it works for them, good, but it doesn't generalize.
It's okay to create a new data structure that combines some primitive data types in a "struct", like an array that tracks its length.
But we don't want to "build abstractions and associate behavior to them" (just associate behavior to data structures like push/pop).
I suggest this is a good notation for data structures, like stack.push(10) or heap.pop().
I'm suggesting we don't use this notation for things like rules to validate a file, so I suggest we write validate(file, rules) instead of rules.validate(file).
Then we can express the rules as a data structure, and keep the IMO unrelated behavior separate. Note then we don't need to worry about whether it should be file.validate(rules) perhaps. Who does the validation belong to? the rules or the file? the abstractions that are created by non-obvious answers to "who does this behavior belong to" are generally problems for future changes.
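A hypothetical sketch of this separation: the rules are plain data, and `validate` is a free function that acts on both, so no one has to decide whether validation "belongs to" the rules or the file (all names and the rule format here are made up for illustration):

```python
# Rules expressed as plain data, not as an object with behaviour.
rules = [
    ("max_size", 1024),
    ("extension", ".txt"),
]

def validate(file: dict, rules: list) -> list:
    """Free function: validate(file, rules) rather than rules.validate(file)."""
    errors = []
    for name, expected in rules:
        if name == "max_size" and file["size"] > expected:
            errors.append("too large")
        if name == "extension" and not file["name"].endswith(expected):
            errors.append("wrong extension")
    return errors

assert validate({"name": "a.txt", "size": 10}, rules) == []
assert validate({"name": "a.exe", "size": 9999}, rules) == ["too large", "wrong extension"]
```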
"Everything is a file" rather refers to the fact that every resource in UNIX(-like) operating systems is accessible through a file descriptor: devices, processes, everything, even files =)
So how about using generic (=parameterized) types? Isn't that the answer?
Classes in Java hold logic and internal mutating state. I don’t want to create this just because I need a type.
Really you want to create a struct or an interface as a type.
This has nothing to do with OOP.
You must have a funny definition of distributed computing, then. OO never really caught on, probably because accepting "globs of undefined bytes" is kind of sucky, but we see some examples in the wild. Interface Builder is probably the best example of where we really leaned into OO, orienting a program's UI objects from externally defined messages. I'm not sure what building user interfaces has to do with distributed computing, but... I guess?
Multiple processes are involved, giving meaning to a byte stream.
I am quite sure Interface Builder has enough Objective-C type definitions, as does Portable Distributed Objects.
Additionally, those mapping definitions on the UI are nothing else than a type system powering the whole experience.
Maybe in the case of Erlang, but that's a fairly unique case. Smalltalk[1], and by extension Objective-C, do not pass messages across processes.
Languages don't normally concern themselves with that kind of thing at all. You can build distributed computing on top of languages (pretty much any language), but that's not a trait of the language and way beyond the topic at hand.
[1] Which is what matters most as OO was literally defined by the design of Smalltalk.
https://dl.acm.org/doi/10.1145/38765.38836
https://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/html/...
> do not pass messages across processes
Except "The Smalltalk-80 system provides support for multiple independent processes with three classes named Process, ProcessorScheduler, and Semaphore. A Process represents a sequence of actions that can be carried out independently of the actions represented by other Processes."
"Ch. 15 Multiple Independent Processes" 1983 "Smalltalk-80 The Language and its Implementation"
https://rmod-files.lille.inria.fr/FreeBooks/BlueBook/Blueboo...
More or less. While the whole point of object-oriented programming is that behaviour is defined at runtime through the passage of messages. The messages may originate from the very same codebase, but not necessarily. Consider something like NeXT's Interface Builder, which relied heavily on injecting messages into the program externally.
> But in pure form, a well typed OO language should never run into undefined behaviour at runtime.
It is not known until runtime what kind of messages will appear. You can define what should happen when an "unknown" message shows up (ignore it, for example), but you cannot statically prove that it won't occur. If you could, what would you need OO for?
Take a particle system. Let's say each particle is an instance of a class with velocity and position. An attractor/repeller is also an object with V/P and some other features. These could all just be a [x,y,vx,vy,u,v,w] array that some global functions operate on, but then you'd lose inheritance as well as readability. The thing that runs that particle system on a loop, which manages all those tiny objects? It doesn't need to be its own object. Tie the delta function to the main timer for your game. And the main timer for your game? It shouldn't have multiple instances, right? It's about as central to Main() as anything else. There are lots of things in code that there should only be one of. Trying to shoehorn those things into objects and classes is where OO goes wrong and you end up with multiple timers or things that aren't expunged from memory.
However: Is it nice to manage an array of Particle objects with .x .y .vx and .vy, and some nifty methods to get and set those? Instead of writing that code to crunch through the same data on a flat 1-D array of all particles? Yeah. That's the point of OO.
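A minimal sketch of that split (names illustrative): particles as small objects with behaviour, while the loop that drives them stays a plain function tied to the main timer rather than yet another manager object.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    x: float
    y: float
    vx: float
    vy: float

    def step(self, dt: float) -> None:
        # Behaviour that genuinely belongs to the particle's own data.
        self.x += self.vx * dt
        self.y += self.vy * dt

def tick(particles: list, dt: float) -> None:
    # Not a ParticleSystemManager class: just a function on the main loop.
    for p in particles:
        p.step(dt)

ps = [Particle(0.0, 0.0, 1.0, 2.0)]
tick(ps, 0.5)
assert (ps[0].x, ps[0].y) == (0.5, 1.0)
```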
Encapsulation is a trait of functional programming. You don't need OO to create 'objects' that offer 'methods' to extract or modify the encapsulated data.
OO is all about message passing; that is what sets it apart from other programming concepts. If you don't allow messages from external sources (thus fundamentally requiring dynamic runtime behaviour), you can do boring old static binding which is going to be way faster.
You don’t need OO for that. Procedural languages have structures and procedures. Functional languages have records and functions.
The real benefit is type narrowing. (Or declaring the space of all possible combinations of values.)
Instead of
id: null | string
name: null | string
...
You'd have
user: null | User
And
User: { id: string, name: string, ..}
declaring that all are not null together.
Pushing validation to static checker instead of runtime.
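The same narrowing idea carries over to Python type hints (a sketch; the User fields are illustrative): one `Optional[User]` instead of several independently nullable fields means a single check narrows everything at once.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: str
    name: str

def greet(user: Optional[User]) -> str:
    if user is None:          # one check narrows both fields together
        return "hello, stranger"
    # Here the checker knows id and name are both present.
    return f"hello, {user.name} ({user.id})"

assert greet(None) == "hello, stranger"
assert greet(User("u1", "Ada")) == "hello, Ada (u1)"
```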
Which can be an extremely significant difference. If you're already using a single object and add a 6th field, you only need to update the places where the object is constructed and where the new field is consumed. If you're using individual variables, you also need to update all the code that those 5, now 6, variables simply pass through.
I try hard to push the validation to static checks, but it can't always be done. For example:
- a "probability" represented as a double must always be between 0 and 1
- A vector of values given to some group of functions must always be sorted (by some given order)
But even in those cases, using types lets me ensure that this validation always happens (when the type instance is created). It also lets me avoid having to explicitly validate this inefficiently (e.g. redundantly in preconditions in functions that receive these values).
This doesn't change the fact that static validation is a better approach, but it complements it.
I wrote a bit about it here: https://github.com/alefore/weblog/blob/master/edge/correctne...
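One way to sketch the validate-at-creation idea is a branded type with a single validating constructor (the `Probability` brand and names are my own illustration, not from the linked post):

```typescript
// The brand makes Probability a distinct compile-time type, so the
// only way to obtain one is through the validating constructor.
type Probability = number & { readonly __brand: "Probability" };

function probability(x: number): Probability {
  if (x < 0 || x > 1) throw new RangeError(`not a probability: ${x}`);
  return x as Probability;
}

// Downstream code can trust the invariant without re-checking it.
function complement(p: Probability): Probability {
  return probability(1 - p);
}

console.log(complement(probability(0.25))); // 0.75
```

The runtime check runs exactly once, at construction; every function taking a `Probability` gets the 0-to-1 invariant for free from the type.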
On top of new cognitive load, added indirection, and anti-patterns like pretend-simplification, where you shuffle a bunch of unrelated parameters into a struct just so a function receives a single argument (which quickly evolves into functions receiving parts of that struct they don't need, making use and testing much harder; I am surprised the article actually recommends this as a strategy), your types frequently get exposed to external users, and then you need to start thinking about both backwards- and forwards-compatibility.
Benefits of types should be carefully balanced against the cost of introducing them, and while calling that "fear" might not be appropriate, it should be a conscious cost-benefit analysis.
Most languages and their type systems would still allow adding speed and fuel-left if you only alias them, so you will likely need to sacrifice some ergonomics by wrapping them in compound types instead.
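A quick TypeScript illustration of this point (the `Speed`/`FuelLeft` names are hypothetical): aliases are erased to `number`, so mixing them still type-checks, while wrapper objects make the mix-up a compile error at the cost of some ergonomics.

```typescript
// A bare alias gives no protection: both are just `number`,
// so adding them compiles without complaint.
type SpeedAlias = number;
type FuelAlias = number;
const oops: number = (100 as SpeedAlias) + (30 as FuelAlias); // compiles

// Wrapping them in distinct compound types rejects the mix-up.
type Speed = { readonly kmh: number };
type FuelLeft = { readonly liters: number };

const v: Speed = { kmh: 100 };
const f: FuelLeft = { liters: 30 };
// v + f;                       // rejected by the compiler
const total = v.kmh + f.liters; // still possible, but now explicit
```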
> Benefits of types should be carefully balanced against the cost of introducing them
Does "benefits of types" not imply to you that there are downsides to not having them?
Does one always need to "fairly" represent both sides when there are obvious properties that you are clearly not discounting? This is a discussion, and that was a single comment; I couldn't imagine people would take it as a comprehensive treatment of a topic as complex as types in programming.
In that case, why not respond to the arguments I raised, instead of attacking claims that were never made? I can see how part of the comment might prejudice a reader toward seeing it as an unbalanced opinion, but that's more on the reader than on the author; at least I think so.
The example of the article is a great one: replace all your arguments to a function with a single argument of a new "type".
Soon, this new type gets used in another function, and it "just" needs another field added. Suddenly, you inadvertently changed the API for the previous function.
So yes, you need to worry about backwards and forwards compat either way, but misusing types will make it harder and more error prone.
Instead, if you carefully used types only when they really represent a new logical thing, you wouldn't replace an unrelated set of arguments with a single argument of a new combo type, and the code would be cleaner and saner.
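The coupling being warned about can be sketched like this (all names hypothetical): two functions share one grab-bag parameter type, so growing it for one function silently changes the other's API as well.

```typescript
// A grab-bag parameter object shared by two functions. Adding a
// field for drawTitleBar's sake changes drawFrame's API too, even
// though drawFrame never reads the new field.
type RenderParams = { width: number; height: number; title: string };

function drawFrame(p: RenderParams): string {
  return `frame ${p.width}x${p.height}`; // never touches p.title
}
function drawTitleBar(p: RenderParams): string {
  return `title: ${p.title}`; // never touches width/height
}

// Saner: one type per logical thing, so each signature changes
// only when its own inputs change.
type Size = { width: number; height: number };

function drawFrameScoped(size: Size): string {
  return `frame ${size.width}x${size.height}`;
}
function drawTitleBarScoped(title: string): string {
  return `title: ${title}`;
}
```

With the scoped versions, a caller of `drawTitleBarScoped` never has to fabricate a width and height it doesn't have, and tests for each function construct only the data that function actually uses.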