I think it may be one of those things you have to see in order to understand.
With a very basic concrete example:
x = 7
x = x + 3
x = x / 2
Vs
x = 7
x1 = x + 3
x2 = x1 / 2
Reordering the first will have no error, but you'll get the wrong result. The second will produce an error if you try to reorder the statements.
Another way to look at it is that in the first example, the 3rd calculation doesn't have "x" as a dependency but rather "x in the state where addition has already been completed" (i.e. it's 3 different x's that all share the same name). Doing single assignment is just making this explicit.
In mutating models, typically abstract (mathematical / conceptual) objects are modeled as memory locations. Which means that object identity implies pointer identity. But that's a problem when different versions of the same object need to be maintained.
It's much easier when we represent object identity by something other than pointer identity, such as (string) names or 32-bit integer keys. Such a representation allows us to materialize different versions (or even the same version) of an object in multiple places at the same time. This allows us to concurrently read or write different versions of the same abstract object. It's also an enabler for serialization/deserialization: not requiring an object to be materialized in one particular place allows saving objects to disk or sending them around.
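A minimal Rust sketch of that idea (types and keys invented for illustration): identity is a plain integer key rather than a memory address, so several versions of the "same" object can be materialized side by side in an ordinary map.

    use std::collections::HashMap;

    // Identity is a key, not a pointer.
    type ObjectId = u32;

    #[derive(Debug)]
    struct Document {
        version: u32,
        text: String,
    }

    fn main() {
        // Two versions of the same abstract object, materialized at once.
        let mut versions: HashMap<(ObjectId, u32), Document> = HashMap::new();
        versions.insert((7, 1), Document { version: 1, text: "draft".into() });
        versions.insert((7, 2), Document { version: 2, text: "final".into() });

        // Readers can hold either version concurrently; neither changes underfoot,
        // and each can be serialized or shipped elsewhere independently.
        println!("{:?}", versions.get(&(7, 1)));
        println!("{:?}", versions.get(&(7, 2)));
    }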
People jump ahead to using AI to improve their reading comprehension of source code while basic practices of style, writing, and composition are, for some reason, still not widespread throughout the industry, despite a long-standing tradition in practice and pretty firm grounding in academia.
My faith in this presumption dwindles every year. I expect AI to only exacerbate the problem.
Since we are on the topic of Carmack, "everything that is syntactically legal that the compiler will accept will eventually wind up in your codebase." [0]
I think that Rust made this decision because the x1, x2, x3 style of code is really a pain in the ass to write.
    let x = "29";
    let x = x.parse::<i32>();
    let x = x.unwrap();
Once you actually "change" the value, for example by dividing by 3, I would consider it unidiomatic to shadow under the same name. Either mark it as mutable or, preferably, make a new variable with a name that represents what the new value now expresses
let x = Foo::new().stuff()?; let x = Bar::new(x).other_stuff()?;
So with the math example and what the poster above said about type changing, most rust code I write is something like:
let x: plain_int = 7
let x: added_int = add(x, 3);
let x: divided_int = divide(x, 2);
where the function signatures would be fn add(foo: plain_int, n: i32) -> added_int; fn divide(bar: added_int, n: i32) -> divided_int;
and this can't be reordered without triggering a compiler error.
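A compilable version of that sketch, with made-up newtype names standing in for plain_int and friends; swapping the two calls becomes a type error rather than a silently wrong answer.

    // Hypothetical newtypes: each pipeline step gets its own type,
    // so the steps can only be composed in the intended order.
    struct PlainInt(i32);
    struct AddedInt(i32);
    struct DividedInt(i32);

    fn add(x: PlainInt, n: i32) -> AddedInt {
        AddedInt(x.0 + n)
    }

    fn divide(x: AddedInt, d: i32) -> DividedInt {
        DividedInt(x.0 / d)
    }

    fn main() {
        let x = PlainInt(7);
        let x = add(x, 3);    // PlainInt -> AddedInt
        let x = divide(x, 2); // AddedInt -> DividedInt
        // divide(PlainInt(7), 2) would not compile: expected AddedInt, found PlainInt.
        println!("{}", x.0);
    }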
Like, if you have a constraint is_even(x) that's really easy to check in your head with some informal Floyd-Hoare logic.
And it scales to extracting code into helper functions and multiple variables. If you must track which sets of variables form one context (x1+y1, x2+y2, etc.), I find it much harder to check the invariants in my head.
These 'fixed state shape' situations are where I'd grab a state monad in Haskell and start thinking top-down in terms of actions+invariants.
1. A property known at compile time.
2. A property that can't change after being initially computed.
Many of the benefits of immutability accrue to properties whose values are only known at runtime but which are still known not to change after that point.
As in - it's not very "constant" if you keep re-making it in your loop, right?
Whereas "immutable" throws away that extra context and means "whatever variable you have, for however long you have it, it's unchangeable."
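One way to see that distinction, sketched in Rust (names invented): `const` is a compile-time property, while an ordinary `let` binding is immutable even though its value is only known at runtime, and a fresh binding is made on every loop iteration.

    const MAX_RETRIES: u32 = 3; // a true constant, known at compile time

    fn main() {
        for attempt in 0..MAX_RETRIES {
            // `delay` is computed at runtime and differs each iteration,
            // yet within an iteration it never changes: immutable, not constant.
            let delay = 100 * (attempt + 1);
            println!("attempt {attempt}, delay {delay}ms");
        }
    }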
you can't change a constant though
Immutability doesn’t have this connotation.
You’re allowed to rebind a var defined within a loop, it doesn’t mean that you can’t hang on to the old value if you need to.
With mutability, you actively can’t hang on to the old value, it’ll change under your feet.
Maybe it makes more sense if you think about it like tail recursion: you call a function and do some calculations, and then you call the same function again, but with new args.
This is allowed, and not the same as hammering a variable in place.
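A small sketch of that analogy (hypothetical sum functions): the recursive version never mutates anything, it just calls itself with new arguments, which is morally the same as a loop that rebinds its accumulator each iteration.

    // Tail-recursive: each call gets fresh, immutable bindings for `rest` and `acc`.
    fn sum_rec(rest: &[i32], acc: i32) -> i32 {
        match rest {
            [] => acc,
            [head, tail @ ..] => sum_rec(tail, acc + head),
        }
    }

    // Loop version: `acc` is hammered in place instead of being passed along.
    fn sum_loop(values: &[i32]) -> i32 {
        let mut acc = 0;
        for v in values {
            acc += v;
        }
        acc
    }

    fn main() {
        let xs = [1, 2, 3, 4];
        assert_eq!(sum_rec(&xs, 0), sum_loop(&xs));
    }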
> why are you calling it mutable?
Mostly just convention. Rust has immutable by default and you have to mark variables specifically with `mut` (so `let mut var_name = 10;`). Other languages distinguish between variables and values, so var and val, or something like that. Or they might do var and const (JS does this I think) to be more distinct.
They are very, very different semantically, because const is always local. Declaring something const has no effect on what happens with the value bound to a const variable anywhere else in the program. Whereas, immutability is a global property: An immutable array, for example, can be passed around and it will always be immutable.
JS has always had 'freeze' as a kind of runtime immutability, and tooling like TS can provide readonly types that give immutability guarantees at compile time.
That’s always felt very odd to me.
    const pi = 3.1415926
    const two_pi = 2 * pi
    const circumference = two_pi * radius

I think it's simply the difference between the curious mind, who explores stuff like Clojure off the job (or is very lucky to get a Clojure job), and the 9-to-5 worker, who doesn't know any better and has never experienced writing an FP codebase.
I tried to learn Haskell before but I just got bogged down in the type system and formalization - that never sat well with me (ironically, in retrospect, monads are a trivial concept that the community obfuscated to oblivion; "yet another monad tutorial" was a meme at the time).
I used F# as well but it is too multi paradigm and pragmatic, I literally wrote C# in F# syntax when I hit a wall and I didn't learn as much about FP when I played with it.
Clojure had the Lisp weirdness to get over, but its homoiconicity combined with the powerful semantics of the core data structures made it the first time the concept of working with values vs objects 'clicked' for me. I would still never use it professionally, but I would recommend it to everyone who does not have a background in FP and/or Lisp experience.
On a positive note I have taken those lessons from clojure (using values, just use maps, Rich’s simplicity, functional programming without excessive type system abstraction, etc) and applied them to the rest of my programming when I can and I think it makes my code much better.
As such, localized context, everywhere, is perhaps the best way to explain it from the point of view of a mutable world. At no point do you ever need to know about the state of the entire program, you just need to know the data and the function. I don't need the entire program up and running in order to test or debug this function. I just need the data that was sent in, which CANNOT be changed by any other part of the program.
However, don't you still need to understand the entire program, as ultimately that's what you are trying to build?
And if the state of the entire program doesn't change, then nothing has happened - i.e. there still has to be mutable state somewhere - so where is it moved to?
I often hear from programmers that "oh, functional programming must be hard." It's actually the opposite. Imperative programming is hard. I choose to be a functional programmer because I am dumb, and the language gives me superpowers.
However, somebody needs to know how the entire program works - so my question was: where does that application state live in a purely functional world of immutables?
Does it disappear into the call stack?
Declaring something as a constant gives you license to only need to understand it once. You don't have to trace through the rest of the code finding out new ways it was reassigned. This frees up your mind to move on to the next thing.
That means that 90% of the time, there's a big class of behavior I just don't need to look for when reading/debugging code. And if it's a bug related to state, I can pretty quickly zoom in on a few possible places where it might have happened.
Of course not, that's impossible. Modern programs are way too large to keep in your head and reason about.
So you need to be able to isolate certain parts of the program and just reason about those pieces while you debug or modify the code.
Once you identify the part of the program that needs to change, you don't have to worry about all the other parts of the program while you're making that change as long as you keep the contracts of all the functions in place.
And how do you do that without understanding how the program works at a high level?
I understand the value of clean interfaces and encapsulation - that's not unique to functional approaches - I'm just wondering in the world of pure immutability where the application state goes.
What happens if the change you need to make is at a level higher than a single function?
Depends on what I'm trying to do. If what I'm trying to handle is local to the code, then possibly not. If the issue is what's going into the function, or what the return value is doing, then I likely do need that wider context.
What pure-functional functions do allow is certainty the only things that can change the behaviour of that function are the inputs to that function.
This is one way of thinking about it: https://news.ycombinator.com/item?id=45701901 (Simplify your code: Functional core, imperative shell)
For example, in Haskell, any function that can perform IO has "IO" in the return type, so the "printLine" equivalent is "putStrLn :: String -> IO ()". (I'm simplifying a bit here). The result is that you know that a function like "getUserComments :: User -> [CommentId]" is only going to do what it says on the tin - it won't go fetch data from a database, print anything to a log, spawn new threads, etc.
It gives similar organizational/clarity benefits as something like "hexagonal architecture," or a capabilities system. By limiting the scope of what it's possible for a given unit of code to do, it's faster to understand the system and you can iterate more confidently with code you can trust.
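A rough Rust sketch of that "functional core, imperative shell" shape (all names invented): the core is a pure function from data to data, and only the thin outer layer touches IO.

    use std::io::{self, BufRead};

    // Functional core: pure, easy to test, no IO anywhere in its signature.
    fn summarize(lines: &[String]) -> String {
        let count = lines.len();
        let longest = lines.iter().map(|l| l.len()).max().unwrap_or(0);
        format!("{count} lines, longest has {longest} chars")
    }

    // Imperative shell: the only place that reads stdin or prints.
    fn main() -> io::Result<()> {
        let lines = io::stdin()
            .lock()
            .lines()
            .collect::<io::Result<Vec<String>>>()?;
        println!("{}", summarize(&lines));
        Ok(())
    }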
For example, it's endlessly amusing to me to see all the effort the Haskell community puts into basically reinventing mutability in a way that is somehow palatable to their type system. Sometimes they don't even realise that that's what they are doing.
In the end, the goal is always the same: better control and guarantees about the impact of side effects, with minimum fuss. Carmack's approach here is sensible. You want practices which make things easy to debug and reason about while maintaining flexibility where it makes sense, like iterative calculations.
That's because Haskell is predominantly a research language, originally intended for experimenting with new programming language ideas.
It should not be surprising that people use it to come up with or iterate on existing features.
I think the authors are quite aware of the relationship between these techniques and mutable state! I imagine it's similar for other canonical functional programming texts.
Besides the "pure" functional languages like Haskell, there are languages that are sort of immutability-first (and support sophisticated effects libraries), or at least have good immutable collections libraries in the stdlib, but are flexible about mutation as well, so you can pick your poison: Scala, Clojure, Rust, Nim (and probably lots of others).
All of these go further and are more comfortable than just throwing `const` or `.freeze` around in languages that weren't designed with this style in mind. If you haven't tried them, you should! They're really pleasant to work with.
----
1: https://www.manning.com/books/functional-programming-in-scal...
2: https://www.manning.com/books/functional-programming-in-kotl...
This is a thoughtful response, but I can't help but chuckle at a response that starts with "just read this book!".
Because that’s not what they’re doing. They’re isolating state in a systemic, predictable way.
In the end, the world is stateful and even the purest abstractions have to hit the road at some point. But the authors of Haskell were fully aware of that. The monadic type system was conceived as a way to easily track side effects after all, not banish them.
It’s a clear-minded and deliberate approach to reconciling principle with pragmatic utility. We can debate whether it’s the best approach, but it isn’t like… logically inconsistent, surprising, or lacking in self awareness.
How do you write code that actually works?
The classic example is a list or array. You don't add a value to an existing list. You create a new list which consists of the old list plus the new value. [1]
This is a subtle but important difference. It means any part of your program with a reference to the original list will not have it change unexpectedly. This eliminates a large class of subtle bugs you no longer have to worry about.
[1] Whether the new list has completely new copy of the existing data, or references it from the old list, is an important optimization detail, but either way the guarantee is the same. It's important to get these optimizations right to make the efficiency of the language practical, but while using the data structure you don't have to worry about those details.
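A toy Rust illustration of that guarantee (a cons list, not a production persistent collection): "adding" an element builds a new head that shares the old tail, so anyone holding the old list sees nothing change.

    use std::rc::Rc;

    // A minimal persistent (immutable) singly linked list.
    enum List {
        Nil,
        Cons(i32, Rc<List>),
    }

    // Prepending returns a new list; the old one is untouched and shared, not copied.
    fn push(list: &Rc<List>, value: i32) -> Rc<List> {
        Rc::new(List::Cons(value, Rc::clone(list)))
    }

    fn len(list: &List) -> usize {
        match list {
            List::Nil => 0,
            List::Cons(_, rest) => 1 + len(rest),
        }
    }

    fn main() {
        let old = Rc::new(List::Cons(1, Rc::new(List::Nil)));
        let new = push(&old, 2);
        // Anyone still holding `old` sees exactly what they had before.
        assert_eq!(len(&old), 1);
        assert_eq!(len(&new), 2);
    }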
I don't get why that would be useful. The old array of floats is incorrect. Nothing should be using it.
That's the bit I don't really understand. If I have a list and I do something to it that gives me another updated list, why would I ever want anything to have the old incorrect list?
If you want to do an operation on fooA, you don't mutate fooA. You call fooB = MyFunc(fooA) and use fooB.
The nice thing here is you can pass around pointers to fooA and never worry that anything is going to change it underneath you.
You don't need to protect private variables because your internal workings cannot be mutated. Other code can copy it but not disrupt it.
This is the bit I don't get.
Why would I do that? I will never want a fooA and a fooB. I can't see any circumstances where having a correct fooB and an incorrect fooA kicking around would be useful.
The beautiful thing about this is you can stop naming things generically, and can start naming them specifically what they are. Comprehension goes through the roof.
Or then would the block of floats be "immutable but not from this bit"? So the code that processes a block of samples can write to it, the code that fills the sample buffer can write to it, but nothing else should?
   Array<Float> append(Float value);
   Array<Float> replace(int index, Float value);
The trick is: How do you make this fast without copying a whole array?
Clojure includes a variety of collection classes that "magically" make these operations fast, for a variety of data types (lists, sets, maps, queues, etc). Also on the JVM there's Vavr; if you dig around you might find equivalents for other platforms.
No it won't be quite as fast as mutating a raw buffer, but it's usually plenty fast enough and you can always special-case performance sensitive spots.
Even if you never write a line of production Clojure, it's worth experimenting with just to get into the mindset. I don't use it, but I apply the principles I learned from Clojure in all the other languages I do use.
But then I need to update a bunch of stuff to point to the new array, and I've still got the old incorrect array hanging around taking up space.
This just sounds like a great way to introduce bugs.
Immutability gives you solid contracts. A function takes X as input and returns Y as output. This is predictable, testable, and thread safe by default.
If you have a bunch of stuff pointing at an object and all that stuff needs to change when the inner object changes, then you "raise up" the immutability to a higher level.
    Universe nextStateOfTheUniverse = oldUniverse.modifyItSomehow();
Old states don't hang around if you don't keep references to them. They get garbage collected.
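A hedged Rust sketch of "raising the immutability up" (the Universe/Player types here are invented): each step is a pure old-state-in, new-state-out function, and old states are dropped as soon as nothing refers to them.

    #[derive(Debug)]
    struct Player {
        score: u32,
    }

    #[derive(Debug)]
    struct Universe {
        tick: u64,
        player: Player,
    }

    impl Universe {
        // Pure transition: nothing is modified in place.
        fn next(&self) -> Universe {
            Universe {
                tick: self.tick + 1,
                player: Player { score: self.player.score + 10 },
            }
        }
    }

    fn main() {
        let start = Universe { tick: 0, player: Player { score: 0 } };
        // Each intermediate universe becomes garbage once the fold moves past it.
        let world = (0..3).fold(start, |w, _| w.next());
        println!("{world:?}");
    }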
Is Python that different from JavaScript? Because it's easy in JavaScript. Just stop typing var and let, and start typing const. When that causes a problem, figure out how to deal with it. If all else fails: "Dear AI, how can I do this thing while continuing to use const? I can't figure it out."
For example:
  (let [result {:a 1}
        result (assoc result :b 2)]
    ...)
clj-kondo has a :shadowed-var rule, but it will only find cases where you shadow a top-level var (not the case in my example).
The `assoc` on the second binding is returning a new object; you're just shadowing the previous binding name.
This is different than mutation, because if you were to introduce an intermediate binding here, or break this into two `let`s, you could be holding references to both objects {:a 1} and {:a 1 :b 2} at any time in a consistent way - including in a future/promise dereferenced later.
I think in practice this is the ideal middle ground of convenience (putting version numbers at the end of variables being annoying), but retaining mostly sane semantics and reuse of prior intermediate results.
I know there are alternate names available to us, but even in the context of this very conversation (and headline), the thing is being called a "variable."
What is a "variable" if not something that varies?
Example:
  function circumference(radius)
      return 2 * PI * radius
It doesn’t have to be a function parameter, that was just for brevity of example. If you read external input into a variable, or store in it the result of calling a non-pure function, or of calling even a pure function but passing non-constant expressions as arguments to it, then the resulting value will generally also vary between executions of that code.
(unfortunately, Kotlin then goes on and introduces "val get()" in interfaces, overloading the val term with the semantics of "read only, but may very well change between reads, perhaps you could even change it yourself through some channel other than simple assignment which is a definite no")
Suppose I have a function which sums up all the prices of products in a cart, the total so far will frequently mutate, that's fine. In Rust we need to mark this variable "mut" because it will be mutated as each product's price is added.
After calculating this total, we also add $10 shipping charge. That's a constant, we're (for this piece of code) always saying $10. That's not a variable it's a constant. In Rust we'd use `const` for this but in C you need to use the C pre-processor language instead to make constants, which is kinda wild.
However for each time this function runs we do also need to get the customer ID. The customer ID will vary each time this function runs, as different customers check out their purchases, but it does not mutate during function execution like that total earlier, in Rust these variables don't need an annotation, this is the default. In C you'd ideally want to label these "const" which is the confusing name C gives to immutable variables.
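Roughly what that checkout example looks like in Rust (names and the flat $10 fee are made up for illustration): the shipping charge is a `const`, the customer ID is an ordinary immutable binding, and only the running total needs `mut`.

    const SHIPPING_CENTS: u32 = 1_000; // a true constant: $10, baked in at compile time

    // `customer_id` varies per call but never mutates during a call: no `mut` needed.
    fn checkout_total(customer_id: u64, prices_cents: &[u32]) -> String {
        // The running total is the one genuinely mutable piece of local state.
        let mut total: u32 = 0;
        for price in prices_cents {
            total += price;
        }
        format!("customer {customer_id}: {} cents", total + SHIPPING_CENTS)
    }

    fn main() {
        println!("{}", checkout_total(42, &[499, 1_250, 75]));
    }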
Those are synonyms, and this amounts to a retcon. The computer science term "variable" comes directly from standard mathematical function notation, where a variable reflects a quantity being related by the function to other variables. It absolutely is expected to "change", if not across "time" then across the domain of the function being expressed. Computers are discrete devices, and a variable that "varies" across its domain inherently implies that it's going to be computed more than once. The sense Carmack is using, where it is not recomputed and just amounts to a shorthand for a longer expression, is a poor fit.
I do think this is sort of a wart in terminology, and the upthread post is basically right that we've been using this wrong for years.
If I ever decide to inflict a static language on the masses, the declaration keywords will be "def" (to define a constant expression) and "var" (to define a mutable/variable quantity). Maybe there's value in distinguishing a "var" declaration from a "mut" reference and so maybe those should have separate syntaxes.
The point is that it varies between calls to a function, rather than within a call. Consider, for example, a name for a value which is a pure function (in the mathematical sense) of the function's (in the CS sense) inputs.
What's happening is that each iteration of the loop these are new variables but they have the same name, they're not the same variables with a different value. When a language designer assumes that's the same thing the result is confusing for programmers and so it usually ends up requiring a language level fix.
e.g. "In C# 5, the loop variable of a foreach will be logically inside the loop"
In C the first needs us to step outside the language to the macro pre-processor, the second needs the keyword "const" and the third is the default
In Rust the first is a const, the second we can make with let and the third we need let mut, as Carmack says immutable should be the default.
The point is that the word "variable" inherently reflects change. And choosing it (a-la your malapropism-that-we-all-agree-not-to-notice "immutable variables") to mean something that does (1) is confusing and (2) tends to force us into worse choices[1][2] elsewhere.
A "variable" should reflect the idea of something that can be assigned.
[1] In rust, the idea of something that can change looks like a misspelled dog, and is pronounced so as to imply that it can't speak!
[2] In C++, they threw English out the window and talk about "lvalues" for this idea.
  fn sin(x: f64) -> f64 {
    let x2 = x / PI;
    ...
Anyway this is kind of pointless arguing. We use the word "variable". It's fine.
It's in fact us programmers who are the odd ones out compared to how the word "variable" has been used by mathematicians and logicians for a long time.
If I define `function f(x) { ... }`, even if I don't reassign x within the function, the function can get called with different argument values. So from the function's perspective, x takes on different values across different calls/invocations/instances.
Another way to look at it is that variables are separate from compile-time constants, whether you mutate them or not.
(let [a 10] a)
Let the symbol `a` be bound to the value `10` in the enclosing scope.
[1] https://en.wikipedia.org/wiki/Free_variables_and_bound_varia...
If I have a `result` and I need to post-process it. I'm generally much happier doing `result = result.process()` rather than having something like `preresult`. Works nicely in cases where you end up moving it into a condition, or commenting it out to test an assumption while developing. If there's an obvious name for the intermediate result, I'll give it that, but I'm not over here naming things `result_without_processing`. You can read the code.
At the end of the day, you're all saying different ways of keeping track of the intermediate results. Composition just has you drop the intermediate results when they're no longer relevant. And you can decompose if you want the intermediates.
Common != Good
Like chaining or composing function calls.
result = x |> foo |> bar |> baz

(-> x foo bar baz)
Or map and reduce for iterating over collections.
Etc.
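In Rust, for example, the same idea shows up as iterator chains: each step feeds the next and there are no named intermediates to keep consistent (made-up data below).

    fn main() {
        let words = ["immutable", "by", "default"];

        // Map, filter and reduce as one pipeline; nothing to reorder or rebind.
        let total_len: usize = words
            .iter()
            .map(|w| w.len())
            .filter(|&len| len > 2)
            .sum();

        println!("{total_len}");
    }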
What result? What process?
...says every person who has to read your code later.
That doesn’t make logical sense. You already have a result. It shouldn't need processing to be a result.
Are you suggesting that the results of calculations should always be some sort of primitive value? It's not clear what you're getting hung up on here.
I usually write code to help local debug-ability (which seems rare). For example, this allows one to trivially set a conditional breakpoint and look into the full response:
    response = get_response()
    response = response.json()
and I think is just as clear as this:
    response = get_response().json()
Oh well continues day job as a Clojure programmer that is actively threatened by an obnoxious python take over
That is why vibe coding, JavaScript and Python are so attractive.
Who needs to calculate load bearing supports, walls, and floors when you can just vibe oversize it by 50%.
But especially now that coding agents are radically enabling gains in developer productivity, you don't need to feel excluded by the artificial tribal boundaries.
If you haven't, I recommend reading: https://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-pr...
Thanks for the reminder. Will work on putting these ideas back into practice again.
I wish the IDE would simply provide a small clue, visible but graphically unobtrusive, that it was mutated.
In fact, I end up wishing this about almost every language feature that passes my mind. For example, I don't need to choose whether I can or can't append to a list; just make it unappendable if you can prove I don't append. I don't care if it's a map, list, set, listOf, array, vector, arrayOf, Array.of(), etc unless it's going to get in my way because I have ten CPU cores and I'll optimize the loop when I need to.
    private static void blah()
    {
        final int abc = 3;
        for (int def = 7; def < 20; ++def)
        {
            System.out.print(def);
        }
    }
https://www.jetbrains.com/help/idea/annotating-source-code.h...
Seeing @NotNull in there even if the author hasn't specifically written that can help in understanding (and not needing to consider) various branches.
Although I do agree with the sentiment of choosing a construct and having it optimize if it can. Reminds me of a Rich Hickey talk about sets being unordered and lists being ordered, where if you want to specify a bag of non-duplicate unordered items you should always use a set to convey the meaning.
It's interesting that small hash sets are slower than small arrays, so it would be cool if the compiler could notice size or access patterns and optimize in those scenarios.
Seriously though, I do find it slightly difficult to reason about `const` vars in TypeScript because while a `const` variable cannot be reassigned, the value it references can still be mutated. I think TypeScript would benefit from more non-mutable values types... (I know there are some)
Swift has the same problem, in theory, but it's very easy to use a non-mutable value types in Swift (`struct`) so it's mitigated a bit.
To me, this seems initially like some very minor thing, but I find it very helpful when working with non-trivial code. For larger methods you can directly discern whether a variable that isn't declared immutable nevertheless behaves immutably.
const isn’t really it though. It could go further.
Rust mentioned!
It made the code easier to read because it was easier to track down what could change and what couldn't. I'm now a huge fan of the concept.
It probably won't come as a surprise to you, but I am a big fan of Rust.
    # Immutable by default
    x = 2
    items = [1,2,3]
    with mutable(x, items):
        x = 3
        items.append(4)
    # And now back to being immutable, these would error
    x = 5
    items.append(6)  
Then, suddenly, the enlightenment
The brain is essentially dreaming it’s escaped while the body is still programming in Java.
I'm sure others have written about this, but these days I think good code is code which has a very small area of variability. E.g code which returns a value in a single place as a single type, has a very limited number of params (also of single type), and has no mutable variables. If you can break code into chunks of highly predictable logic like this it's so so much easier to reason about your code and prevent bugs.
Whenever I see methods with 5+ params and several return statements I can almost guarantee there will be subtle bugs.
Early returns at the very top for things like None if you pass in an Option type don't increase the risk of bugs, but if you have a return nested somewhere in the middle it makes it easier to either write bugs up front or especially have bugs created during a refactor. I certainly have had cases where returns in the middle of a beefy function caused me headaches when trying to add functionality.
I don't see why that's a problem. If a function implements an algorithm with several parameters (e.g. a formula with multiple variables), those values have to be passed somehow. Does it make a difference if they're in a configuration object or as distinct parameters?
I don't like all scripting languages, but Python, for example, has a simple syntax and is easy and fast to learn and write. I think it also has a good standard library. Some things can be simpler because of the missing guardrails. But that's also the weakness and I wouldn't use it for large and complex software. You have to be more mindful to keep a good style because of the freedoms.
C++ and Rust are at the opposite end. Verbose to write and more difficult syntax. More stuff to learn. But in the end, that's the cost for better guarantees on correctness. But again, they don't fit all use cases as well as scripting languages.
I've experienced the good and the bad in both kind of languages. There are tradeoffs (what a surprise) and it's probably also subjective what kind of language one prefers. I think my _current_ favorites are Rust, C# and Python.
His AGI work was entirely his own? As in he literally stepped down from a high level corporate role where he was responsible for Oculus (3D games/applications) to do this in his own time. Similar to his work on Armadillo Aerospace.
That said, it's worth listening when he chimes in about C/C++ or optimisation, as he has earned respect in those fields.
Will he crack AGI? Probably not. He didn't crack rockets either. Doesn't make him any less awesome, just makes him human.
(Regarding this specific tweet, this seems to be him visiting his occasional theme of how to write C++ in a way that will help rather than hinder the creation of finishable software products. He's qualified to comment.)
Quake, which was a good game, but arguably an even better engine, one that led to things like Half-Life 1?
Other games?
Shared the code to Doom and Quake?
I guess you don't understand how big of a game Doom was. The first episode holds up surprisingly well to this day, even after hundreds of Doom clones, as they used to call FPS games.
- this makes them really stand out, much easier to track the mutation visually,
- the underscore effect is intrusive just-enough to nudge you to think twice when adding a new `var`.
Nothing like diving into a >3k lines PR peppered with underscores.
I don't mind the idea here, seems good. But I also don't move a block of code often and discover variable assignment related issues.
Is the bad outcome more often seen in C/C++ or specific use cases?
Granted my coding style doesn't tend to involve a lot of variables being reassigned or used across vast swaths of code either so maybe I'm just doing this thing and so that's why I don't run into it.
It is not about mutable/immutable objects, it is about using a name for a single purpose within a given scope.
    a = 1
    b = 2
    a = b
Though both "single purpose" and immutability may be [distinct] good ideas.
1) You get intermediate results visible in the debugger / accessible for logs, which I think is a benefit in any language.
2) You get an increased safety in case you move around some code. I do think that it applies to any language, maybe slightly more so in C due to its procedural nature.
See, the following pattern is rather common in C (not so much in C++):
- You allocate a structure S
- If S is used as input, you prefill it with some values. If it's used as output, you can leave it uninitialized or zero-filled.
- You call function f providing a pointer to that struct.
Lots of C APIs work that way: sockets addresses, time structures, filesystem entries, or even just stack allocated fixed size strings are common.
  df = pd.concat([df, other_df])
  df = df.select(...)
  ...
  df |> 
    rbind(other_df) |> 
    select(...)
Typically you align a pipeline like so:
     df
     |> rbind(other_df)
     |> select(...)
- https://github.com/elixir-explorer/explorer - https://hexdocs.pm/explorer/Explorer.html
  df = pd.concat([df, other_df]).select(...)

It's funny how functional programming is slowly becoming the best practice for modern code (pure functions, no side-effects), yet functional programming languages are still considered fringe tech for some reason.
If you want a language where const is the default and mutable is a keyword, try F# for starters. I switched and never looked back.
As a practical observation, I think it was easier to close this gap by adding substantial functional capabilities to imperative languages than the other way around. Historically, functional language communities were much more precious about the purity of their functional-ness than imperative languages were about their imperative-ness.
That's why F# is so great.
Functional is default, but mutable is quite easy to do with a few mutable typedefs.
The containers are immutable, but nobody stops you from using the vanilla mutable containers from .net
It's such a beautiful, pragmatic language.
Also, the STM monad is the most carefree way of dealing with concurrency I have found.
You can program F# just like you would Python, and it will be more readable, concise, performant, and correct.
For some reason, this makes me think of SVG's foreignObject tag that gives a well-defined way to add elements into an SVG document from an arbitrary XML namespace. Display a customer invoice in there, or maybe a Wayland protocol. The sky's the limit!
On the other hand, HTML had so many loose SVG tags scattered around the web that browsers made a special case in the parser to cover them without needing a namespace.
And we all know how that played out.
Posted from an xhtml foreignObject on my SVGphone
Isn't this a strawman? Even Haskell has unsafePerformIO and Debug.Trace etc. It just also provides enough ergonomics that you don't need them so often, and they "light up" when you see them, which is what we want: to know when we're mutating.
Rust is also like this (let x = 5; / let mut x = 5;).
Or you can also use javascript, typescript and zig like this. Just default to declaring variables with const instead of let / var.
Or swift, which has let (const) vs var (mutable).
FP got there first, but you don't need to use F# to have variables default to constants. Just use almost any language newer than C++.
In case anyone here hasn’t seen it, here’s his famous essay, Functional Programming in C++:
http://sevangelatos.com/john-carmack-on/
He addresses your point:
> I do believe that there is real value in pursuing functional programming, but it would be irresponsible to exhort everyone to abandon their C++ compilers and start coding in Lisp, Haskell, or, to be blunt, any other fringe language. To the eternal chagrin of language designers, there are plenty of externalities that can overwhelm the benefits of a language, and game development has more than most fields.
I think that's why we're seeing a lot of what you're describing. E.g. with Rust you end up writing mostly functional code with a bit of imperative mixed in.
Additionally, most software is not pure (human input, disk, network, etc.), so a pure-first approach ends up feeling weird to many people.
At least based on my experience.
Nonetheless, I also heavily dislike non-alphabetical, library-defined symbols (with the exception of math operators), but this is a cheap argument and I don't think this is the primary reason FPs are not more prevalent.
That increases the activation energy, I guess, for people who have spent their whole programming life inside the algol-derived syntax set of languages, but that’s a box worth getting out of independently of the value of functional programming.
At least for me, this was solved by Gleam. The syntax is pretty compact, and, from my experience, the language is easily readable for anyone used to C/C++/TypeScript and even Java.
The pure approach may be a bit off-putting at first, but the apprehensions usually disappear after a first big refactoring in the language. With the type system and Gleam's helpful compiler, any significant changes are mostly a breeze, and the code works as expected once it compiles.
There are escape hatches to TypeScript/JavaScript and Erlang when necessary. But yeah, this does not really solve the impure edges many people may cut themselves on.
Maybe they're right about the syntax too though? :)
If you have a course where one day you're supposed to do Haskell and another Erlang, and another LISP, and another Prolog, and there's only one exercise in each language, then you're obviously going to have to waste a lot of time on syntax, but that's never a situation you encounter while actually programming.
I write way more code in Algol-derived languages than in ML, yet having piped operations as an option - instead of either chains of dot operators, where every function returns the answer or itself depending on the context, or inside-out nested calls - is beautiful in a way that I've never felt about my C#/C++/etc. code. And even Lisp can do that style with arrow macros, which exist in at least CL and Clojure (dunno about Racket/other Schemes).
  (< lower-bound x-coordinate upper-bound)
    lower_bound < x_coordinate < upper_bound
    lower_bound <= x_coordinate < upper_bound
In Python this is a magical special case, but in Icon/Unicon it falls out somewhat naturally from the language semantics; comparisons don't return Boolean values, but rather fail (returning no value) or succeed (returning a usually unused value which is chosen to be, IIRC, the right-hand operand).
And in SQL you have
    `x-coordinate` between `lower-bound` - 1 and `upper-bound` + 1
C# is not that far I suppose from what I want
e.g. Just a very simple example to illustrate the point
    if (customer != null)
    {
        customer.Order = GetCurrentOrder();
    }
    if (customer is not null)
    {
        customer.Order = GetCurrentOrder();
    }
if (obj is string s) { ... }
if (date is { Month: 10, Day: <=7, DayOfWeek: DayOfWeek.Friday }) { ... }
https://learn.microsoft.com/en-us/dotnet/csharp/language-ref... https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
Microsoft gives the same example though. I understand what he's saying; there's conceptual overlap between is and ==. Many ways to do the same thing.
Why couldn't it just be...
if (obj == string s) { ... }
> Would you say that these samples show no benefit to using the "is" operator?
I didn't say no benefit. I said dubious benefit.
I didn't really want to get into discussing specific operators, but lets just use your date example:
   if (date is { Month: 10, Day: <=7, DayOfWeek: DayOfWeek.Friday }) { ... }
    static bool IsFirstFridayOfOctober(DateTime date)
    {
        return date.Month == 10
            && date.Day <= 7
            && date.DayOfWeek == DayOfWeek.Friday;
    }
    if (IsFirstFridayOfOctober(date)) {
       ...
    }
Each release there seems to be more of these language features and half the time I have a hard time remembering that they even exist.
Each time I meet with other .NET developers, either virtually or in person, they all seem to be salivating over this stuff and I feel like I've walked in on some sort of cult meeting.
It isn't any one language feature it is the mentality of both the developer community and Microsoft.
> I agree that they should not add new stuff lightly
It seems they kinda do though. I am not the first person to complain that they add syntactic sugar that doesn't really benefit anything.
e.g. https://devclass.com/2024/04/26/new-c-12-feature-proves-cont...
I have a raft of other complaints outside of language features. Some of these are to do with the community itself, which only recognises something as existing once Microsoft has officially blessed it - it is like everyone has received official permission to talk about a feature. Hot Reload was disabled in .NET 6, IIRC, for dubious reasons.
My bet is functional programming will become more and more prevalent as people figure out how to get AI-assisted coding to work reliably. For the very reasons you stated, functional principles make the code modular and easy to reason about, which works very well for LLMs.
However, precisely because functional programming languages are less popular and hence under-represented in the training data, AI might not work well with them and they will probably continue to remain fringe.
I once mentioned both these concepts to a room of C# developers. Two of them were senior to me and it was a blank expression from pretty much everyone.
> yet functional programming languages are still considered fringe tech for some reason.
You can use the same concepts in non-functional programming languages without having to buy into all the other gumpf around functional programming languages. Also other programming languages have imported functional concepts either into the language itself or into the standard libraries.
Past that. It is very rare to be able to get a job using them. The number of F# jobs I've seen advertised over the last decade, I could count on one hand.
Also, category theorists think how excited people get about using the word monad but then most don’t learn any other similar patterns (except maybe functors) is super cringe. And I agree.
I believe Scala was pretty ahead here by building the language around local mutability with a general preference for immutable APIs, and I think this same philosophy shows up pretty strongly in Rust, aided by the borrow checker that sort of makes this locality compiler-enforced (also, interior mutability)
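A small Rust sketch of that "local mutability, immutable API" idea (the function is invented): callers see a pure shared-reference-in, fresh-value-out signature, even though a `mut` accumulator is used internally.

    // Externally pure: takes a read-only slice, returns a brand new Vec.
    fn normalized(values: &[f64]) -> Vec<f64> {
        // Local mutation that never escapes this function.
        let mut max = f64::MIN;
        for &v in values {
            if v > max {
                max = v;
            }
        }
        values.iter().map(|v| v / max).collect()
    }

    fn main() {
        let data = vec![1.0, 2.0, 4.0];
        let scaled = normalized(&data);
        println!("{data:?} -> {scaled:?}"); // `data` is untouched
    }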
dropping down into the familiar or the simple or the dumb is so innately necessary in the building process. many things meant to be "pure" tend to also be restrictive in that regard.
This creates friction between casual stakeholder models of a mutable world and the assumptions an immutable/mostly-pure language imposes. When the customer describes what they need, that might be pretty close to a plain loop with a variable that increments sometimes and which can terminate early. In contrast, it maps less cleanly to a pure-functional world; if I'm lucky there will at least be a reduce-while utility function, so I don't need to make all my own recursive plumbing.
So immutability and pure-functions are like locking down a design or optimizing--it's not great if you're doing it prematurely. I think that's why starting with mutable stuff and then selectively immutable'ing things is popular.
Come to think of it, something similar can be said about weak/strong typing. However the impact of having too-strong typing seems easier to handle with refactoring tools, versus the problem of being too-functional.
In particular: My brain, my computing hardware, and my problems I solve with computers all feel like a better match for time-domain-focused programming.
Haskell is a great example here. Last time I tried to learn it, going on the IRC channel or looking up books it was nothing but a flood of "Oh, don't do that, that's not a good way to do things." It seemed like nothing was really settled and everything was just a little broken.
I mean, Haskell has like what, 2, 3, 4? Major build systems and package repositories? It's a quagmire.
Lisp is also a huge train wreck that way. One does not simply "learn lisp" There's like 20+ different lisp like languages.
The one other thing I'd say is a problem that, especially for typed functional languages, they simply have too many capabilities and features which makes it hard to understand the whole language or how to fit it together. That isn't helped by the fact that some programmers love programming the type system rather than the language itself. Like, cool, my `SuperType` type alias can augment an integer or a record and knows how to add the string `two` to `one` to produce `three` but it's also an impossible to grok program crammed into 800 characters on one line.
Lisp is not a language, but a descriptor for a family of languages. Most Lisps are not functional in the modern sense either.
Similarly, there are functional C-like languages, but not all C-likes are functional, and "learn c-likes" is vague the same way "learn lisp" is.
Emacs lisp and Clojure are about as similar as Java and Rust. The shared heritage is apparent but the experience of actually using them is wildly different.
Btw, if someone wants to try a lisp that is quite functional in the modern sense (though not pure), Clojure is a great choice.
Don't know when was the last time you've used Haskell, but the ecosystem is mainly focused on Cabal as the build tool and Hackage as the official package repository. If you've used Rust:
- rustup -> ghcup
- cargo -> cabal
- crates.io -> hackage
- rustc -> ghc
ghcup didn't exist, AFAIK. Cabal was around but I think there was a different ecosystem that was more popular at the time (Started with an S, scaffold? Scratch? I can't find it).
whats the difference between const and mutable?
You don't need both a "const" keyword and a "mutable" keyword in a programming language. You only need 1 of the keywords, because the other can be the default. munchler is saying the "const" keyword shouldn't exist, and instead all variables should be constant by default, and we should have a "mutable" keyword to mark variables as mutable.
As opposed to how C++ works today, where there is no "mutable" keyword, variables are mutable by default, and we use a "const" keyword to mark them constant.
What if the lang has pointers? How do you express read-only?
But [0] is never possible.
F# is wonderful. It is the best general purpose language, in my opinion. I looked for F# jobs for years and never landed one and gave up. I write Rust now which has opened up the embedded world, and it's nice that Rust paid attention to F# and OCaml.
Trying to do it 100% pure makes everything take significantly longer to build and also makes performance difficult when you need it. It’s also hard to hire/onboard people who can grok it.
Python has a lot of functional-like patterns and constructs, but it's not a pure functional language. Similarly, Python these days allows you to add as much type information as you want, which can give you a ton of static checks, but it isn't forced on you like in other typed languages. If some random private function is too messy to annotate and not worth it, you can just skip it.
I like the flexibility, since it leads to velocity and is also just straight up more enjoyable.
I know it's irrelevant to his point, and it's not true of C, and it doesn't have the meaning he wants, but the pedant in me is screaming and I'm surprised it hasn't been said in the comments:
In C++ mutable is a keyword.
An important thing to keep in mind is just how far compilers have come over the last 40 years. Often, with immutable languages, they can do extremely efficient compile-time optimizations because of that guaranteed immutability by default.
It's not true in every case, of course, but I think for most situations compilers do a more than good enough job with this.
                clc
                lda value
                adc #1
                sta value
                lda value+1
                adc #0
                sta value+1
    value       .byte 0,0
    x = 20
    x = x + func(y)
    x = x / func(z)
    x = 20
    x1 = x + func(y)
    x2 = x1 / func(z)
his comment was in response to the

    Rediscovering one of the many great decisions that Erlang made

and seemed to me to insinuate just that.
    const int x = 3;
    x = 4; // error!
    int* p = &x;
    *p = 4; // is that an error?

    f.c:4:8: warning: initializing 'int *' with an expression of type 'const int *' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers]
        4 |   int* p = &x;
          |        ^   ~~
    1 warning generated.

I want to be able to write `x := y` and be sure I don't have mutable slices and pointer types being copied unsafely.
I find that with many languages I am constantly fighting with dependency managers, upgrades and all sorts of other things.
    classList = ['highlighted', 'primary']
    if discount:
        classList.append('on-sale')
    classList = ' '.join(classList)
1.
    classList = ['highlighted', 'primary']
        .concatif(discount, 'on-sale')
        .join(' ')
    classList = ' '.join(['highlighted', 'primary'] + (['on-sale'] if discount else []))
    mut classList = ['highlighted', 'primary']
    if discount:
        classList.append('on-sale')
    classList = ' '.join(classList)
    freeze classList
    def get_class_list(discount):
        mut classList = ['highlighted', 'primary']
        if discount:
            classList.append('on-sale')
        classList = ' '.join(classList)
        return classList
    classList = get_class_list(discount)

I find that keeping functions short also helps a ton with that.
No, shorter than that. Short enough that the only meaningful place to "move a block of code" is into another function. Often, by itself.
Yes, but also no. If it's a mostly side-effect-free function with a good name and well-defined input/output types, it's basically free.
Reading and understanding code is a process of answering "what are the immediate steps of this task?", without thinking about what those steps consist of or entail. It is not a process of answering "where is the variable representing ..., and the code that manipulates this?", especially since this makes assumptions that may prove incorrect.
  df = pd.read_excel()
  df = df.drop_duplicates.blahblah_other_chained_functions()
  [20 cells later]
  df = df.even_more_fns()

It's worth it, though. Every variable that isn't mutated should be const. Every parameter not mutated should be const, and so should every method that doesn't mutate any fields. The mutable keyword should be banned.
Ultra-pedantic const-correctness (vs tasteful const-correctness on e.g. pass-by-reference arguments or static/big objects) catches nearly no bugs in practice and significantly increases the visual noise of your code.
If you have luxury of designing a new language or using one with default mutability then do so, but don't turn C coding styles into C++-envy, or C++ coding styles into Rust-envy.
Usually there are good, obvious names for intermediate calculations in my experience.
I'm open though - what kinds of things are you doing that require reassigning variables so much?
But any variable which I’ve not already marked as const is pretty much by definition going to be modified at least once. So now instead of 1 variable name you need at least two.
So now the average number of variables per non-const variable is >= 2 and will be much more if you’re doing for example DSP related code or other math heavy code.
You can avoid it with long expressions but that in principle is going against the “name every permutation” intention anyway.
It's actually math heavy code (or maybe medium heavy?) where I really like naming every intermediate. fov, tan_fov, half_tan_fov, center_x, norm_x
Which sounds really awful, but after a while it forces you to parse the logic itself instead of being guided by possibly-out-of-date comments and variable names.
I now prefer less verbosity so that probably explains why I’m a little out of distribution on this topic.
If you looked at any of my code prior to that job, it was the polar opposite with very “pretty” code and lengthy comments everywhere.
Similar to how "tail recursion can (usually) be lifted/lowered to a simple loop...", immutability from language statements can often be "collapsed" into mutating a single variable, and there may be one or two "dances" you need to do to either add helper functions, structure your code _slightly_ differently to get there, but it's similar to any kind of performance sensitive code.
Example foo(bar(baz(), bar_alt(baz_alt(), etc...))) [where function call nesting is "representing" an immutability graph] ...yeah, that'd have a lot of allocations and whatever.
But: foo().bar().bar_alt().baz().baz_alt().etc(...) you could imagine is always just stacking/mutating the same variable[s] "in place".
...don't get hung up on the syntax (it's wildly wrong), but imagine the concept. If all the functions "in the chain" are pure (no globals, no modifications), then they can be analyzed and reduced. Refer back to the "Why SSA?" article from a week or two ago: https://news.ycombinator.com/item?id=45674568 ...and you'll see how the logical lines of statements don't necessarily correspond to the physical movement of memory and registers.
> Having all the intermediate calculations still available is helpful in the debugger
Wouldn't this be an easy task for an SCA tool, e.g. Pylint? It at least has a warning against variable redefinition: https://pylint.pycqa.org/en/latest/user_guide/messages/refac...
When I was first learning I thought all methods would mutate. It has a certain logic to it
const prevents reassignment of the variable but it does not make the object the variable points to immutable.
To do the latter, you have to use Object.freeze (prevent modification of an object’s properties, but it is shallow only so for nested objects you need to recurse) and Object.seal (prevent adding or removing properties, but not changing them).
Many people use immutable.js or Immer for ergonomic immutable data structures.
This is why the single var pattern used to be recommended.
The value (pun intended) of the latter is that once you’ve arrived at a concrete result, you do not have to think about it again.
You’re not defining a “variable”, you’re naming an intermediate result.
    order_data = boto.get_from_dynamodb()
    customer_data = boto.get_from_rds()
    branding_assets = boto.get_from_s3()
    return render_for_user(order_data, customer_data, branding_assets, ...)

John Carmack is apparently a C++ programmer who still has a lot to learn in Python.
Anything else is wishful thinking, trying to rely on the GC for deterministic behaviour.
FYI John Carmack is a true legend in the field. Despite his not being a lifelong Python guy, I can assure you he is speaking from a thorough knowledge of the arguments for and against.
> [...] outside of true iterative calculations
Constant is by definition immutable.
Why can't people get it through their heads in 2025? (I'm looking at you, Rust)
    const int a{10};
   void some_function(const int a);
No, it is not. A constant is the direct opposite of a variable.
> An immutable variable..
There is no such thing. You can decide not to mutate it, but a variable is by definition mutable.
If you want to argue mutability, then you have to talk about the data structure or memory footprint of the constant or variable that it points to or represents, not the concept of variable or constant itself.
In other words, we can have var foo = 5 and const bar = 5. foo can be changed by being reassigned another value with a simple foo = 6, whereas bar cannot, as bar = 6 should cause a panic/exception/...

On the other hand, we can have var foo = {value: 5} and const bar = {value: 5}, and now it depends on the language how it handles complex types like a struct/object, as the operation now bypasses the guards on the variable/constant assignment itself. Will it guard against mutation or not? It should, but that is rarely the case. Hence, in most languages, we will be able to do foo.value = 6 and also bar.value = 6, even though we should not. But again, now we are arguing about the mutability of the data type or memory representation and not the variable/constant itself.

Most languages don't care about mutability, so we have this flawed thinking where we are simply unable to strictly define what data is actually mutable and what data is not. Rust uses the borrow checker, that is one approach, but generally this should be properly handled by the language spec and compiler itself, and we should not even have this conversation where programmers simply cannot make a distinction between variables and constants, let alone comprehend what those terms mean in the first place, as those meanings have been thrown out of the window by the folks designing the languages.
A true constant won't change between runs of the code. I.e. it is essentially a symbolic name for a literal.
A constant variable OTOH, varies in different executions of the code. So, its invariance is linked to an execution context.
That's surely not a constant like PI, is it?
#define var __auto_type
#define let const __auto_type
  const result = ... ;
  return result;
For example in Dart you make everything `final` by default.
A lot of code needs to assemble a result set based on if/then or switch statements. Maybe you could add those in each step of a chain of inline functions, but what if you need to skip some of that logic in certain cases? It's often much more readable to start off with a null result and put your (relatively functional) code inside if/then blocks to clearly show different logic for different cases.
  if cond:
      X = "yes"
  else:
      X = "no"
  let X = if cond { "yes" } else { "no" };
    let x: Int
    if cond {
        x = 1
    } else { 
        x = 2
    }
    // read x here
Don't be silly and assume that if I assign it multiple times in an if condition it's mutable - it's constructing the object as we speak, so it's still const!!!
C# gets this right among many other things (readonly vs const, init properties, records to allow immutability by default).
And the funny thing is the X thread has lots of genuine comments like 'yeah, just wrap a lambda to ensure const correctness', as if that's an okay option here? The language is bad to the point that it forces good, sane people into weird "clever" patterns all the time, in some sort of contest for the highest ELO rating for the cleverest, most evil C++ "solution".
I was hoping Carbon was the hail mary for saving C++ from itself. But alas, looks like it might be googlified and reorged to oblivion?
Having said that, I still like C++ as a "constrained C++" language (avoid clever stuff), as it's still pretty good and close to the metal.
[1] https://docs.astral.sh/ruff/rules/redefined-argument-from-lo...
[2] https://docs.astral.sh/ruff/rules/redefined-loop-name/
[3] https://pylint.pycqa.org/en/latest/user_guide/messages/warni...
- either it's transitive, in which case your type system is very much more complicated
- or it isn't, in which case it's a near useless liability
Naturally C++ runs with the latter, with bonus extra typing for all the overloads it induces.
- if (x) { const y = true } else { const y = false } // y doesn't exist after the block
- try { const x = foo } catch (e) { } // x doesn't exist after the try block
  const myArray = [1,2,3]
  myArray.push(4)
  myArray // [1, 2, 3, 4]
  const y = (() => {
    if (x) {
      return true;
    } else {
      return false;
    }
  })();
    const y = (x === true) ? true : false;
    Composite.prototype.SetPosition = function (x, y, z) {
        x = (isNumber(x) && x >= 0 && x <= 1337) ? x : null;
        y = (isNumber(y) && y >= 0 && y <= 1337) ? y : null;
        z = isNumber(z) ? z : null;
        if (x !== null && y !== null && z !== null) {
            // use validated values
        }
    }
  function SetPosition(x, y, z) {
    if (!(isNumber(x) && isNumber(y) && isNumber(z))) {
      // Default vals
      return;
    }
    x = clamp(x, 0, 1337);
    y = clamp(y, 0, 1337);
    z = z;
  }
In JS, errors are pretty painful due to try/catch; that's why these days I would probably recommend using Effect [1] or similar libraries to get a failsafe workflow for error cases.
Errors in general are pretty painful in all languages, in my opinion. The only language where I thought "oh, this might be nice" was Koka, which is designed around Effect Types and Handlers [2].
const y = x ? true : false;
    const y = (bool)x;
    const bool y = x;
    val result = if (condition) {
        val x = foo()
        val y = bar(x)
        y + k // return of last expression is return value of block
    } else {
        baz()
    }
Or:
    val q = try {
        a / b
    } catch (e: ArithmeticException) {
        println("Division by zero!")
        0 // Returns 0 if an exception occurs
    }
The principle of reducing state changes and side-effects feels a good one.
Whether it's of any actual utility is debatable.
Hindsight's 20/20, of course. But still.
https://github.com/hsutter/cppfront/wiki/Design-note%3A-cons...
Anyway, the outcome of trying to avoid mutation is that instead of simply setting player.score you get something like player = new Player(oldPlayerState, updates). This is of course slow as hell. You're recreating the entire player object to update a single variable. While it does technically only mutate a single object rather than everything individually, it's not really beneficial in such a case.
Unless you have an object with a lot of internal rules across each variable (the "can't be 'A' if 'B'" example above), it's probably wrong to push the mutation up the stack like that. The simple fact is that a complex computer program will need to mutate something at some point (it's literally not a Turing machine if it can't), so when avoiding mutation you're really just pushing the mutation into a higher-level data object. "Avoid mutations of data that has dependencies" is probably the correct rule to apply. Dependencies need to be bundled, and this is why it makes sense not to allow 'A' in the above example to be mutated individually, but instead force the programmer to update A and B together.
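As a rough JavaScript sketch of that trade-off (the player object and its fields are invented for illustration): the "no mutation" style recreates the whole object just to bump one field, while the grouped-update style only forbids changing the dependent fields independently.

    // Hypothetical player object, invented for illustration.
    let player = { score: 0, rank: 1, title: "novice" };

    // Mutating style: cheap, but nothing stops a partial, inconsistent update.
    player.score += 10;

    // Immutable style: every "update" copies the whole object to change one field.
    player = { ...player, score: player.score + 10 };

    // Grouped update: fields that depend on each other are only changed together.
    function promote(p, newRank, newTitle) {
      return { ...p, rank: newRank, title: newTitle };
    }
    player = promote(player, 2, "veteran");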
There is no idiom about this. Do it if you like, but Clippy doesn't warn about any of it.
I mean there is good reason to keep variables well scoped, and the various linters do a reasonable job about scope.
But I've only really known C++[1] people to want everything as a const.
[1] Yes, you functional people also, but let's not get into that.
It can be perfectly fine to use mutable variables within a block, like a function, when absolutely needed - for example, in JavaScript's try/catch and switch statements that need to set a variable for later use. As long as these assignments are local to that block, the larger code remains side-effect free and still easy to reason about, refactor and maintain.
https://rockthejvm.com/articles/what-is-referential-transpar...
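A small JavaScript sketch of that point (the names are made up): the local variable is assigned inside the try/catch, but nothing outside the function can observe the mutation, so the function stays pure from the caller's point of view.

    function parseConfig(text) {
      let config;                // assigned in exactly one of the branches below
      try {
        config = JSON.parse(text);
      } catch (err) {
        config = { retries: 3 }; // fall back to defaults on bad input
      }
      return Object.freeze(config);
    }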
Well yea, that's what sane languages that aren't Python, C, and C++ do. See F# and Rust.
`let` is so 2020. `const` is too long. `static` makes me fall asleep at the keyboard. `con` sounds bad. How about
`law`?
    law pi = 3.142 (heh typing on Mac autocompleted this)
    law c = 299792458
    law law = "Dredd"
I think it never gained traction but it would have been nice to have this in Java
    val firstName = "Bob";
    val lastName = "Tables";
    var fullName = "";
    fullName = firstName + " " + lastName;
Also related, it annoys me that Java has final but otherwise poor/leaky support for immutability. You can mark something final, but most Java code (and a lot of the standard library) uses mutable objects, so the final does basically nothing... C++ "const" desperately needs to spread to other languages.
But there is practical value in being able to change a var without having to create a new object every time you change one of its members.
It models real nature/physics better.
It looks like he is asking that 'const' be the default and 'var' be explicit, which makes sense.
Clippy offers lints for (three?) distinct ways of shadowing because, in most cases, it turns out that people who don't like shadowing only have a problem with one specific kind of shadowing (e.g. same-type unrelated shadowing, or different-type same-value shadowing), and since that varies, why not offer to diagnose the specific problem this programmer doesn't like.
To be concrete some people are worried about things like:
  let sparrows = get_a_list_of_sparrows();
  // ....
  let sparrows = sparrows.len() + EXTRA_SPARROWS;
   let sparrows = find_wild_sparrows();
   // ....
   let sparrows = find_farmed_sparrows();
Variables are distinct from constants. It's a problem that C and C++ use the keyword "const" to signify immutability instead, indeed as a result C++ needed three more keywords "constexpr", "constinit" and "consteval" to try to grapple with the problem.
I want everything that passes through a function to be a copy unless I put in a symbol or keyword that says it is supposed to be passed by reference.
I made a little function to do deep copies but am still experimenting with it.
  function deepCopy(value) {
    if (typeof structuredClone === 'function') {
      try { return structuredClone(value); } catch (_) {}
    }
    try {
      return JSON.parse(JSON.stringify(value));
    } catch (_) {
      // Last fallback: return original (shallow)
      return value;
    }
  }
JavaScript doesn’t have references; it is clearer to only use “passed by reference” terminology when writing about code in a language which does have them, like C++ [0].
In JavaScript, if a mutable object is passed to a function, then the function can change the properties on the object, but it is always the same object. When an object is passed by reference, the function can replace the initial object with a completely different one; that isn’t possible in JS.
Better is to distinguish between immutable objects (ints, strings in JS) and mutable ones. A mutable object can be made immutable in JS using Object.freeze [1].
[0] https://en.wikipedia.org/wiki/Reference_(C%2B%2B)
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
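A tiny JavaScript sketch of that distinction (the names are illustrative): reassigning the parameter inside a function has no effect on the caller's binding, while mutating the object it points to does.

    function tryToReplace(obj) {
      obj = { replaced: true };   // only rebinds the local parameter
    }

    function mutate(obj) {
      obj.touched = true;         // mutates the caller's object
    }

    const thing = {};
    tryToReplace(thing);
    console.log(thing);           // {}  (unchanged)
    mutate(thing);
    console.log(thing);           // { touched: true }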
Maybe someone should work on a way to make references in JavaScript land. Sort of like immer but baked in.
I wish all arguments were copies unless I put some symbol that says, alright, go ahead and give me the original to mutate?
It seems like this way, you reduce side effects, and if you want the speed of just using the originals, you could still do that by using special notation.
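In the spirit of that idea, here is a minimal sketch that reuses the deepCopy helper from above; withCopies and the byRef marker are invented names, not an existing API. The wrapper hands the function copies by default and only passes the original when explicitly asked.

    // Marks a value as "pass the original, I intend to mutate it".
    const byRef = (value) => ({ __byRef: true, value });

    // Wraps fn so that every argument is deep-copied unless marked with byRef.
    function withCopies(fn) {
      return (...args) =>
        fn(...args.map((a) => (a && a.__byRef ? a.value : deepCopy(a))));
    }

    const addItem = withCopies((cart, item) => {
      cart.items.push(item);          // mutates only what it was handed
      return cart;
    });

    const cart = { items: [] };
    addItem(cart, "apple");
    console.log(cart.items.length);   // 0 - the original was not touched
    addItem(byRef(cart), "apple");
    console.log(cart.items.length);   // 1 - explicitly opted in to mutation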