Switch expressions with pattern matching are absolutely killer[0] for their terseness.
Also, it is possible to use OneOf[1] and Dunet[2] to get access to DUs (discriminated unions).
[0] https://timdeschryver.dev/blog/pattern-matching-examples-in-...
If you want the .NET ecosystem and GC conveniences, there is already F#. If you want no GC and RAII-style control, then you would already pick Rust.
I do like/respect C#, but come on now. I know they're fixing it, but the rest of the language was designed the same way and thus still has this vestigial layer of OOP hubris.
The language is a tool; teams decide how to use the tool.
How about F#? Isn't F# mostly C# with better ergonomics?
As much as I'd like to do more with it, the "just use F#" idea flaunted in this thread is a distant pipe dream for the vast majority of teams.
Haha, no. Microsoft barely talks about F# at all, and has largely left the evolution of the language up to the open source community that supports it. Furthermore, you shouldn't take your cues about what a language is best suited for from marketing types, you should evaluate it based on its strengths as a language and broader ecosystem. If you seriously doubt that C# is a better comparison to F# than Python, then I suspect you haven't used either C# or F# and you're basing your views on marketing fluff.
Passing or returning a function seems a foreign concept to many devs. They know how to use lambda expressions, but rarely write code that works this way.
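For illustration, this is the kind of code that rarely shows up (a minimal sketch; the names are made up):

    // A method that accepts a function as input...
    static int ApplyTwice(Func<int, int> f, int x) => f(f(x));

    // ...and one that returns a function (a closure over `factor`).
    static Func<int, int> MakeMultiplier(int factor) => x => x * factor;

    var triple = MakeMultiplier(3);
    Console.WriteLine(ApplyTwice(triple, 2)); // 18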
We adopted ErrorOr[0] and have a rule that core code must return ErrorOr<T>. Devs have struggled with this and continue to misunderstand how to use the result type.
If FP was really better at “all the important things”, why is there such a wide range of opinions, good but also bad? Why is it still a niche paradigm?
I suspect that the hidden indirection and runtime magic may be part of why you love the language. In my experience, however, it leads to poor observability, opaque control flow, and difficult debugging sessions in every organisation and company I've ever worked for. It's fair to argue that this is because the people working with C# are bad at software engineering. Similar to how Uncle Bob will always be correct when he calls teams out for getting his principles wrong. To me that means the language itself has a poor design fit for software development in 2025. Which is probably why we see more and more Go adoption, due to its explicit philosophies. Though to be fair, Python seems to be "winning" as far as adoption goes in the cross-platform GC language space. Having worked with Django-Ninja I can certainly see why. It's so productive, and with stuff like Pyrefly, UV and Ruff it's very easy to make it a YAGNI experience with decent type control.
I am happy you enjoy C# though, and it's great to see that it is evolving. If they did more to enhance the developer experience so that people were less inclined to do bad engineering on a Thursday afternoon after a long day of useless meetings, then I would probably agree with you. I'm not sure any of the changes going toward .NET 10 are going in that direction, though.
C# has increasingly become more terse (e.g. switch expressions, collection initializers, object initializers, etc) and, IMO, is a good balance between OOP and functional[0].
Functions are first-class objects in C# and teams can write functional-style C# if they want. But I suspect that this doesn't scale well in human terms, as we've encountered a LOT of trouble trying to get TypeScript devs to adopt more functional programming techniques. We've found that many devs like to use functional code (e.g. Linq, `.filter()`, `.map()`), but dislike writing functional code, because most devs are not wired this way and do not have any training in how to write functional code or understand monads. Asking these devs to use a monad has been like asking kids to eat their carrots.
Across much of our backend TS codebase, there are very, very few cases where developers accept a function as input or return a function as output (almost all of it written by 1 dev out of a team of ~20).
> ...it is built upon OOP design principles and a bunch of “needless” abstraction
Having been working with Nest.js for a while, it's clear to me that most of these abstractions are not "needless" but actually "necessary" to manage the complexity of apps beyond a certain scale, and the reasons are less technical and more about scaling teams and communicating concepts. Anyone that looks at Nest.js will immediately see the similarities to Spring Boot or .NET Web APIs because it fills the same niche. Whether you call a `*Factory` a "factory" or something else, the core concept of what the thing does still exists whether you're writing C#, Java, Go, or JS: you need a thing that creates instances of things.
You can say "I never use a factory in Go", but if you have a function that creates other things or other functions, that's a factory...you're just not using the nomenclature. Good for you? Or maybe you misunderstand why there is standard nomenclature of common patterns in the first place and are associating these patterns with OOP when in reality, they are almost universal and are rather human language abstractions for programming patterns.
[0] https://medium.com/itnext/getting-functional-with-c-6c74bf27...
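To the factory point: stripped of nomenclature, this is all a "factory" is (a minimal sketch; the names are made up):

    // A function that creates instances is a factory, named or not.
    static HttpClient CreateClient() => new HttpClient();

    // A function that creates other functions is a factory too.
    static Func<decimal, decimal> MakeTaxCalculator(decimal rate)
        => amount => amount * rate;

    var vat = MakeTaxCalculator(0.21m);
    Console.WriteLine(vat(100m)); // 21.00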
I think this is an excellent blog post by the way. My issues with C# (and this applies to a lot of other GC languages) is that most developers would learn a lot from your article. Because none of it is an intuitive part of the language philosophy.
I don't think you should never use OOP or abstractions. I don't think there is a golden rule for when you should use either. I do think you need to understand why you are doing it, though, and C# sort of makes people go to abstractions first, not last, in my experience. I don't think these changes to the GC are going to help people who write C# without understanding C#, which is frankly most C# developers around here. Because Go is opinionated and explicit, it's simply an issue I have to deal with less in Go teams. It's not an issue I have to deal with less in Python teams, but then, everyone who loves Python knows it sucks.
I would not have imagined this to be controversial nor difficult, but it turns out that developers really prefer and understand exceptions. That's because for a backend CRUD API, it's really easy to just throw and catch at a global HTTP pipeline exception filter, and for 95% of cases this is OK and good enough; you're not really going to be able to handle it, nor is it worth it to handle it.
We'll stick with ErrorOr, but developers aren't using it as a monad and are simply unwrapping the value and the error because, as it turns out, most devs just have a preference for/greater familiarity with imperative try-catch handling of errors. And practically, in an HTTP backend, there's nothing wrong in most cases with just having a global exception filter do the heavy lifting unless the code path has a clear recovery path.
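For the curious, the "global exception filter" approach is roughly this (a minimal sketch assuming ASP.NET Core MVC; the error shape is made up):

    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.Mvc.Filters;

    // Registered once; any unhandled exception in a controller becomes a 500.
    public class GlobalExceptionFilter : IExceptionFilter
    {
        public void OnException(ExceptionContext context)
        {
            // Log here; map known exception types to other status codes if needed.
            context.Result = new ObjectResult(new { error = context.Exception.Message })
            {
                StatusCode = StatusCodes.Status500InternalServerError
            };
            context.ExceptionHandled = true;
        }
    }

    // In Program.cs:
    builder.Services.AddControllers(options =>
        options.Filters.Add<GlobalExceptionFilter>());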
> I don't think you should never use OOP or abstractions. I don't think there is a golden rule for when you should use either.
I do think there is a "silver rule": OOP when you need structural scaffolding, functional when you have "small contracts" over big ones. An interface or abstract class is a "big contract", which means that to understand how to use it, you often have to understand a larger surface area. A function signature is still a contract, but a micro-contract. Depending on what you're building, having structural scaffolding and "big contracts" makes more sense than having lots of micro-contracts (functions). Case in point: REST web APIs make a lot more sense with structural scaffolding. If you write them without the structural scaffolding of OOP, you end up with a lot of repetition and even worse opaqueness, with functions wrapping other functions.
The silver rule for OOP vs FP for me: OOP for structural templating for otherwise repetitive code and "big contracts"; FP for "small contracts" and algorithmically complex code. I encourage devs on the team to write both styles depending on what they are building and the nature of complexity in their code. I think this is also why TS and C# are a sweet spot, IMO, because they straddle both OOP and have just enough FP when needed.
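A contrived sketch of the distinction (types are hypothetical):

    // A "big contract": consuming or implementing this means absorbing a
    // whole surface area and its implied lifecycle.
    public interface IPaymentProcessor
    {
        Receipt Charge(Order order);
        void Refund(Receipt receipt);
        void ConfigureRetries(int maxAttempts);
    }

    // A "micro-contract": one signature, one obligation.
    public static class Pricing
    {
        public static decimal Total(IEnumerable<Order> orders, Func<Order, decimal> price)
            => orders.Sum(price);
    }

    public record Order(decimal Amount);
    public record Receipt(Guid Id, decimal Amount);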
All the libraries you use and all methods from the standard library use exceptions. So you have to deal with exceptions in any case.
There's also a million or so libraries that implement types like this. There is no standard, so no interoperability. And people have to learn the peculiarities of the chosen library.
I like result types like this, but I'd never try to introduce them in C# (unless at some point they get more language support).
But what I takeaway from this is that Go's approach to error handling is "also not clearly better, it's different".
Even if C# had core language support for result types, you would be surprised how many developers would struggle with it (that is my takeaway from this experience).
I know how to use this pattern, but the C# version still feels weird and cumbersome. Usually you combine this with pattern matching and other functional features, and the whole thing becomes convenient in the end. That part is missing in C#. And I think it makes a difference in understanding, as you would usually build on your experience with pattern matching to understand how to handle this case of Result|Error.
> Usually you combine this with pattern matching and other functional features and the whole thing makes it convenient in the end. That part is missing in C#
You mean like this?

    string foo = result.MatchFirst(
        value => value,
        firstError => firstError.Description);
Or this?

    ErrorOr<string> foo = result
        .Then(val => val * 2)
        .Then(val => $"The result is {val}");
Or this?

    ErrorOr<string> foo = await result
        .ThenDoAsync(val => Task.Delay(val))
        .ThenDo(val => Console.WriteLine($"Finished waiting {val} seconds."))
        .ThenDoAsync(val => Task.FromResult(val * 2))
        .ThenDo(val => $"The result is {val}");
With pattern matching like this?

    var holidays = new DateTime[] {...};
    var output = new Appointment(
        DayOfWeek.Friday,
        new DateTime(2021, 09, 10, 22, 15, 0),
        false
    ) switch
    {
        { SocialRate: true } => 5,
        { Day: DayOfWeek.Sunday } => 25,
        Appointment a when holidays.Contains(a.Time) => 25,
        { Day: DayOfWeek.Saturday } => 20,
        { Day: DayOfWeek.Friday, Time.Hour: > 12 } => 20,
        { Time.Hour: < 8 or >= 18 } => 15,
        _ => 10,
    };
C# pattern matching is pretty damn good[0] (seems you are not aware?).

[0] https://timdeschryver.dev/blog/pattern-matching-examples-in-...
#!/usr/local/share/dotnet/dotnet run
#:package ErrorOr@2.0.1
using ErrorOr;
var computeRiskFactor = ErrorOr<decimal> ()
=> 0.5m; // Just an example
var applyAdjustments = ErrorOr<decimal> (decimal baseRiskFactor)
=> baseRiskFactor + 0.1m; // Just an example
var approvalDecision = computeRiskFactor()
.Then(applyAdjustments)
.Match(
riskFactor => riskFactor switch {
< 0.5m => "Approved",
< 0.75m and >= 0.5m => "Approved with Conditions",
>= 0.75m and < 0.9m => "Manual Review",
_ => "Declined"
},
errors => "Error computing risk factor"
);
Console.WriteLine($"Loan application: {approvalDecision}");
(Fully contained program, BTW)

Here's the OCaml version:
let compute_risk_factor () = 0.5
let apply_adjustments base_risk_factor = base_risk_factor +. 0.1
let approval_decision =
let risk_factor = compute_risk_factor () |> apply_adjustments in
match risk_factor with
| r when r < 0.5 -> "Approved"
| r when r < 0.75 -> "Approved with Conditions"
| r when r < 0.9 -> "Manual Review"
| _ -> "Declined"
let () =
print_endline approval_decision
Still not functional enough? ...Or do you just not like C#? No point moving goalposts.

My experience of .NET, even from version 1, is that it has the best debugging experience of any modern language, from the Visual Studio debugger to debugging crash dumps with sos.dll.
I hate the implicitness of Spring Boot, Quarkus etc. as much as the one in C# projects.
All these magic annotations that save you a few lines of code until they don't, because you get runtime errors due to incompatible annotations.
And then it takes digging through pages of docs or even reporting bugs on repos instead of just fixing a few explicit lines.
Explicitness and verbosity are mostly orthogonal concepts!
I am at a (YC, series C) startup that just recently made the switch from TS backend on Nest.js to C# .NET Web API[0]. It's been a progression from Express -> Nest.js -> C#.
What we find is that having attributes in both Nest.js (decorators) and C# allows one part of the team to move faster and another smaller part of the team to isolate complexity.
The indirection and abstraction are explicit decisions to reduce verbosity for 90% of the team for 90% of the use cases because otherwise, there's a lot of repetitive boilerplate.
The use of attributes, reflection, and source generation make the code more "templatized" (true both in our Nest.js codebase as well as the new C# codebase) so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.
Having the option to dip into source generation, for example, is really powerful in allowing the team to reduce boilerplate.
[0] We are hiring, BTW! Seeking experienced C# engineers; very, very competitive comp and all greenfield work with modern C# in a mixed Linux and macOS environment.
Works well until the 10% that understand the behind the scenes leave and you are left with a bunch of developers copy and pasting magic patterns that they don't understand.
I love express because things are very explicit. This is the JSON schema being added to this route. This route is taking in JSON parameters. This is the function that handles this POST request endpoint.
I joined a team using Spring Boot and the staff engineer there couldn't tell me if each request was handled by its own thread or not, he couldn't tell me what variables were shared across requests vs what was uniquely instantiated per request. Absolute insanity, not understanding the very basics of one's own runtime.
Meanwhile in Express, the threading model is stupid simple (there isn't one) and what is shared between requests is obvious (everything declared in an outer scope).
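For comparison, in ASP.NET Core the "what is shared between requests" question at least has an explicit, greppable answer at registration time (a minimal sketch; the service names are hypothetical):

    // One instance for the whole process - shared across all requests.
    builder.Services.AddSingleton<AppMetrics>();

    // One instance per HTTP request.
    builder.Services.AddScoped<UnitOfWork>();

    // A fresh instance every time it is resolved.
    builder.Services.AddTransient<EmailBuilder>();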
Implicit and magic looks nice at first but sometimes it can be annoying. I remember the first time I tried Ruby On Rails and I was looking for a piece of config.
Yes, "convention over configuration". Namely, ungreppsble and magic.
This kind of stuff must be used with a lot of care.
I usually favor explicit and, for config, plain data (usually toml).
This can be extended to hidden or non-obvious allocations and other stuff (when I work with C++).
It is better to know what is going on when you need to, and burying it under a couple of layers can make things unnecessarily difficult.
    await client.Write(
        new ClientWriteRequest(
            [
                // Avery is an editor of form 124
                new()
                {
                    Object = "form:124",
                    Relation = "editor",
                    User = "user:avery",
                },
            ]
        )
    );
var checkResponse = await client.Check(
new ClientCheckRequest
{
Object = "form:124",
Relation = "editor",
User = "user:avery",
}
);
var checkResponse2 = await client.Check(
new ClientCheckRequest
{
Object = "form:125",
Relation = "editor",
User = "user:avery",
}
);
This is an abstraction we wrote on top of it:

    await Permissions
        .WithClient(client)
        .ToMutate()
        .Add<User, Form>("alice", "editor", "226")
        .Add<User, Team>("alice", "member", "motion")
        .SaveChangesAsync();

    var allAllowed = await Permissions
        .WithClient(client)
        .ToValidate()
        .Can<User, Form>("alice", "edit", "226")
        .Has<User, Team>("alice", "member", "motion")
        .ValidateAllAsync();
You would make the case that the former is better than the latter?

(Of course, you are also free to write C# without any of the built-in frameworks and write purely explicit handling and routing.)
On the other hand, we write CRUD every day so anything that saves repetition with CRUD is a gain.
Yes, they make CRUD stuff very easy and convenient.
Better than no magic abstractions imo. In our large monorepo, LSP feedback can often be so slow that I can’t even rely on it to be productive. I just intuit and pattern match, and these magical abstractions do help. If I get stuck, then I’ll wade into the docs and code myself, and then ask the owning team if I need more help.
At least with macros I don't need to consider the whole of the codebase and every library when determining what is happening. Instead I can just... Go to the macro.
Maybe you're confusing `System.Reflection.Emit` and source generators? Source generators are just a source tree walker + string templates to write source files.
I agree the syntax is awkward, but all it boils down to is concatenating code in strings and adding it as a file to your codebase.
And the syntax will 100% get cleaner (it's already happening with stuff like ForAttributeWithMetadataName).
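For anyone who hasn't seen one, a near-minimal generator really is just "add a string as a source file" (a sketch; the names are made up):

    using Microsoft.CodeAnalysis;

    [Generator]
    public class HelloGenerator : IIncrementalGenerator
    {
        public void Initialize(IncrementalGeneratorInitializationContext context)
        {
            // No magic: build code as a string and add it to the compilation
            // as if it were another file in the project.
            context.RegisterPostInitializationOutput(ctx => ctx.AddSource(
                "Hello.g.cs",
                "public static class Hello\n" +
                "{\n" +
                "    public static string World => \"generated at compile time\";\n" +
                "}\n"));
        }
    }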
Isn't a GC also magic? Or anything above assembly? While I also understand the reluctance to use too much magic, in my experience it's not the magic, it's how well the magic is tested and developed.
I used to work with Play framework, a web framework built around Akka, an async bundle of libraries. Because it wasn't too popular, only the most common issues were well documented. I thought I hated magic.
Then, I started using Spring Boot, and I loved magic. Spring has so much documentation that you can also become the magician, if you need to.
- code generators: I think I've only seen them in regex. Logging can be done via `LoggerMessage.Define` too, so attributes are optional. Also, code generators have access to the full tokenized structure of the code, which means attributes are just a design choice of the particular generator you are using. And finally, code generators do not produce runtime errors unless the code they generated is invalid.
- Json serialization: sure, but you can use your own converters. Attributes are not necessary.
- asp.net routing: yes, but those are in controllers; my impression is that minimal APIs are now the go-to solution, and you have `app.MapGet(path)`, so no attributes. You can inject services into minimal APIs and this does not require attributes. Most of the time, minimal APIs do not require attributes at all.
- dependency injection: requires attributes when you inject services into controller endpoints, which I never liked nor understood why people do (see the sketch below). What is the use case over injecting through the controller constructor? It is not like the controller is a singleton, long-lived object. It is constructed during the ASP.NET HTTP pipeline and discarded when no longer necessary.
So occasional usage may still occur from time to time, in endpoints and DTOs (`[JsonIgnore]`, for example), but you have other means to do the same things. It is done via attributes because it is easier and faster to develop.
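The two styles side by side (a sketch; `IOrderService` is a hypothetical stand-in):

    using Microsoft.AspNetCore.Mvc;

    public interface IOrderService { IEnumerable<string> All(); }

    // Parameter injection at the endpoint - the attribute-requiring style:
    public class OrdersController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get([FromServices] IOrderService orders) => Ok(orders.All());
    }

    // Constructor injection - no attribute, and the controller is constructed
    // per request anyway:
    public class OrdersController2 : ControllerBase
    {
        private readonly IOrderService _orders;
        public OrdersController2(IOrderService orders) => _orders = orders;

        [HttpGet]
        public IActionResult Get() => Ok(_orders.All());
    }

    // Minimal APIs resolve registered services with no attribute at all:
    app.MapGet("/orders", (IOrderService orders) => orders.All());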
Also, your team should invest some time into testing, in my opinion. Integration testing helps a lot with catching those runtime errors.
And going through converters is (was?) significantly slower for some reason than the built-in serialisation.
> my impression is that minimal APIs are now the go to solution and you have `app.MapGet(path)` so no attribute
Minimal APIs use attributes to explicitly configure how parameters are mapped to the path, query, header fields, body content or for DI dependencies. These can't always be implicit, which BTW means you're stuck in F# if you ever need them, because the codegen still doesn't match what the reflection code expects.
I haven't touched .NET during work hours in ages, these are mostly my pains from hobbyist use of modern .NET from F#. Although the changes I've seen in C#'s ecosystem the last decade don't make me eager to use .NET for web backends again, they somehow kept going with the worst aspects.
I'm fed up by the increasing use of reflection in C#, not the attributes themselves, as it requires testing to ensure even the simplest plumbing will attempt to work as written (same argument we make for static types against dynamic, isn't it?), and makes interop from F# much, much harder; and by the abuse of extension methods, which were the main driver for implicit usings in C#: no one knows which ASP.NET namespaces they need to open anymore.
Where did you see all of those attributes in minimal APIs? I'm honestly curious because, from my experience, it is very forgiving and mostly works without them.
If I wanted explicitness for every little detail I would keep writing in Assembly like in the Z80, 80x86, 68000 days.
Unfortunately we never got Lisp or Smalltalk mainstream, so we got to metaprogramming with what is available, and it is quite powerful when taken advantage of.
Some people avoid wizard jobs, others avoid jobs where magic is looked down upon.
I would also add that in the age of LLM and AI generated applications, discussing programming languages explicitness is kind of irrelevant.
    (define-route METHOD PATH BODY)
You can then easily inspect the generated code. But in Java and others, you'll have something like

    @GET(path=PATH)

And there's a whole system hidden behind this that you have to carefully understand, as every annotation implementation is different.

I tend to do obvious things when I use these kinds of tools. In fact, I try to avoid macros.
Even if configurability is not important, I favor simplification over reuse. In case I need reuse, I go for higher-order functions if I can. Macros are the last bullet.
In some circumstances, like Json or serialization, maybe they can be slightly abused to mark fields and such. But whole-code generation can take it so far into magic that it is not worth it in many circumstances IMHO, though every tool has its use cases, even macros and annotations.
Coding UX critically leans on familiarity and spread of knowledge. By definition, a non-obvious macro not known by others makes the UX just worse, for a definition of "worse" that means "less manageable by anyone that looks at it without previous knowledge".
That is also the reason why standard libraries always have an advantage in usability just because people know them or the language constructs themselves.
Using macros for DSLs has been common for decades, and is how frameworks like CLOS were initially implemented.
If we're writing good code then why do we even need a GC? Heh.
In decades of experience I've never once worked in an organisation where "don't write bad code" applied. I have seen people with decades of experience with C# who don't know that IQueryable and IEnumerable load things into memory differently. I don't necessarily disagree with you that people should "just write good code", but the fact is that most of us don't do that all the time. I guess you could also argue that principles like "four-eyes" would help, but they don't, even when they are enforced by legislation, with actual risk of punishment, like DORA or NIS2.
This is the reason I favour Go as a cross-platform GC language over C#: with Go you are given fewer opportunities to fuck up. There is still plenty of chance to do it, but fewer than in other GC languages. At least on the plus side, for .NET 10 they're going to improve IEnumerable with their devirtualization.
> hidden indirection and runtime magic
Maybe not in C#, but C# is .NET, and I don't think it's entirely fair to decouple C# from .NET and its many frameworks. Then again, I could have made it more clear.
Reflection is what makes DI feel like "magic". Type signatures don't mean much in reflection-heavy code. Newcomers won't know many of a DI framework's implicit behaviors & conventions until they either shoot themselves in the foot or get RTFM'd.
My pet theory is this kind of "magic" is what makes some people like Golang, which favors explicit wiring over implicit DI framework magic.
> Just don't write bad code
Reminds me of C advice: "Just don't write memory leaks & UAFs!"

- Attributes can do a lot of magic that is not always obvious or well documented.
- ASP.NET pipeline.
- Source generators.
I love C#, but I have to admit we could have done with less “magic” in cases like these.
Yes, the ASP.NET pipeline is a bit of a mess. My strategy is to plug in a couple adapters that allow me to otherwise avoid it. I rolled my own DI framework, for instance.
Source generators are present in all languages and terrible in all languages, so that certainly is not a criticism of C#. It would be a valid criticism if a language required you to use source generators to work efficiently (e.g. limited languages like VB6/VBA). But I haven't used source generators in C# in at least 10 years, and I honestly don't know why anyone would at this point.
Maybe it sounds like I'm dodging by saying C# is great even though the big official frameworks Microsoft pushes (not to mention many of their tutorials) are kinda bad. I'd be more amenable to that argument if it took more than an afternoon to plug in the few adapters you need to escape their bonds and just do it all your own way with the full joy of pure, clean C#. You can write bad code in any language.
That's not to say there's nothing wrong with C#. There are some features I'd still like to see added (e.g. co-/contra-variance on classes & structs), some that will never be added but I miss sometimes (e.g. higher-kinded types), and some that are wonderful but lagging behind (e.g. Expressions supporting newer language features).
Source generators didn't exist in C# 10 years ago. You probably had something else in mind?
> But I haven't used source generators in C# in at least 10 years, and I honestly don't know why anyone would at this point.
A challenge with .NET web APIs is that it's not possible to detect, when interacting with a payload deserialized from JSON, whether a property is `null` because it was set to `null` or `null` because it was not supplied. A common way to work around this is to provide an `IsSet` boolean:
    private string? _name;
    private bool _isNameSet;
    public string? Name { get => _name; set { _name = value; _isNameSet = true; } }
Now you can check if the value was set. However, you can see how tedious this gets without a source generator. With a source generator, we simply take nullable partial properties and generate the stub automatically:
public partial string? Name { get; set; }
Now a single marker attribute will generate as many `Is*Set` properties as needed.

Of course, the other use case is for AOT: avoiding reflection by generating the source at compile time.
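What the generated half might look like (a sketch using C# 13 partial properties; the `[TrackSetProperties]` marker attribute and the `Is*Set` naming are our convention, not a framework's):

    // Hand-written half:
    [TrackSetProperties] // hypothetical marker attribute
    public partial class UpdateUserRequest
    {
        public partial string? Name { get; set; }
    }

    // Generated half (sketch):
    public partial class UpdateUserRequest
    {
        private string? _name;
        public bool IsNameSet { get; private set; }

        public partial string? Name
        {
            get => _name;
            set { _name = value; IsNameSet = true; }
        }
    }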
It's just code that generates code. Some of the syntax is awkward, but it's not magic imo.
Is this sarcasm?
* objects should model a single concept, or
* every domain concept should be an object.
These two alone are already contradictory. And what do they even mean? Concretely?
Then, when OOP invariably breaks down, they can always point to any of the 100 rules that you supposedly violated, and blame the failure on that. “Yes, it did not work out because you did not do it right.” It’s the no true Scotsman fallacy.
It’s like communism. It would work out if somebody just finally did it properly.
Maybe a system that requires 100 hard to follow rules to have even a chance at success just isn’t a great one.
This is also why I prefer to use Unity over all other engines by an incredible margin. The productivity gains of using a mature C# scripting ecosystem in a tight editor loop are shocking compared to the competition. Other engines do offer C# support, but if I had to make something profitable to save my own standard of living the choice is obvious.
There's only two vendors that offer built-in SIMD accelerated linear math libraries capable of generating projection matrices out of the box. One is Microsoft and the other is Apple. The benefits of having stuff like this baked into your ecosystem are very hard to overstate. The amount of time you could waste looking for and troubleshooting primitives like this can easily kill an otherwise perfect vision.
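For reference, the Microsoft one is System.Numerics, shipped in the box (a minimal sketch):

    using System.Numerics;

    // SIMD-accelerated matrix/vector types in the BCL: build a perspective
    // projection and a model-view-projection matrix with no third-party code.
    var projection = Matrix4x4.CreatePerspectiveFieldOfView(
        fieldOfView: MathF.PI / 3f, // 60 degrees
        aspectRatio: 16f / 9f,
        nearPlaneDistance: 0.1f,
        farPlaneDistance: 1000f);

    var view = Matrix4x4.CreateLookAt(
        cameraPosition: new Vector3(0, 2, 5),
        cameraTarget: Vector3.Zero,
        cameraUpVector: Vector3.UnitY);

    var mvp = Matrix4x4.Identity * view * projection;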
Is supported on more platforms, has more developers, more jobs, more OSS projects, and is more widely used (TIOBE 2024). Performance was historically better, but C# caught up.
For example, just having value types and reified generics as a combination meant you could write generic code against value types, which for hot algorithmic loops or certain data structures usually meant a big win w.r.t. memory and CPU consumption. For example, for a collection type critical to an app I wrote many years ago, the use of value types would almost halve the memory footprint compared to the best Java one I could find, and it was somewhat faster with fewer cache misses. The Java alternative wasn't an amateur one either, but they couldn't get the perf out of it even with significant effort.
Java also, last time I checked, doesn't have a value decimal type for financial math, which IMO can be a significant performance loss for financial/money-based systems. For anything with math and lots of processing/data structures, I would find .NET significantly faster after doing the optimisation work. If I had to choose between the 2 targets these days, I would find .NET in general an easier target w.r.t. performance. Of course, perf isn't everything depending on the domain.
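On the value-types point, a minimal sketch of the difference:

    var points = new List<Point> { new(1, 2), new(3, 4) };
    // The structs are stored inline, contiguously, in the list's backing
    // array: no per-element heap object, no boxing, cache-friendly iteration.
    // A Java ArrayList<Float> instead holds a pointer per element, each to a
    // separately heap-allocated box.

    // A small value type:
    public readonly record struct Point(float X, float Y);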
I’m a fan of all three languages, but C# spent the first years relearning why Visual Basic was very productive and the last many years learning why OCaml was chosen to model F# after. It’s landed in a place where I can make beautiful code the way I need to, but the mature libraries I’ve crafted to make it so simply aren’t recreate-able by most .Net devs, and the level of micro-managing it takes to maintain across groups is a line-by-line slog against orthodoxy and seeming ‘shortcuts’, draining the productivity those high level guarantees should provide. And then there’s the impact of EF combined with sloppy Linq which makes every junior/consultant LOC a potentially ticking time bomb without more line-by-line, function-by-function slog.
Compiler guarantees mean a lot.
Is this open source? Do you have numbers? I've been coding in .NET since it was "a thing" and frankly I'm having trouble mapping the optimizations to a local application at that magnitude.
The optimizations are seen at scale, they really won't mean much for your local application. Not a 3x+ improvement at least.
It's not one feature with F#, IMO; it's little things that add up, which generally is why it is hard to convince someone to use it. To the point that, when the developers (under my directive) had to write two products in C#, they argued with me to switch back.
https://dev.to/ruizb/function-purity-and-referential-transpa...
Plus, F# as a functional language has significant gaps that prevent effective refactoring, such as the lack of support for named arguments to curried functions.
Another one is if you want to add a curried parameter to the end of the parameter list, and you have code like

    |> myFunc a b
    |> ...

You can't just say

    |> myFunc a b z=10
    |> ...

instead, you have to rewrite the whole pipe.

Of all the languages that I have to touch professionally, C# feels by far the most opaque and unusable.
Documentation tends to be somewhere between nonexistent and useless, and MSDN's navigation feels like it was designed by a sadist. (My gold standard would be Rustdoc or Scala 2.13-era Scaladoc, but even Javadoc has been.. fine for basically forever.) For third-party libraries it tends to be even more dire and inconsistent.
The Roslyn language server crashes all the time, and when it does work.. it doesn't do anything useful? Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there! (I know there's this thing called "SourceLink" which is.. supposed to solve this? I think? But I've never seen it actually use it in practice.)
Even finding where something comes from is ~impossible without the language server, because `using` statements don't mention.. what they're even importing. (Assuming that you have them at all. Because this is also the company that thought project-scoped imports were a good idea!)
And then there's the dependency injection, where I guess someone thought it would be cute if every library just had an opaque extension method on the god object, that didn't tell you anything about what it actually did. So good luck finding where the actual implementation of anything is.
> Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there!
If these are projects you have in the same solution then it should never do this. I would only expect this to happen if either symbol files or source files are missing.
It’s a missed opportunity and I can’t help but feel that if the .NET team had gotten more involved in the proposals early on then C# in the browser could have been much more viable.
The .NET team appears to be aware of WasmGC [0], and they have provided their remarks when WasmGC was being designed [1].
1: JavaScript _interoperability_, i.e. same heap but incompatible objects (nobody is doing static JS)
2: Java, Schemes, and many other GC'd languages have more "pure" GC models; C# traded some of it for practicality, and that would've required some complications to the regular JS GCs.
For example, .NET has internal pointers which WASMGC's MVP can't handle. This doesn't change that so it's still a barrier to using WASMGC. At the same time, it isn't adding new language requirements that WASMGC doesn't handle - the changes are to the default GC system in .NET.
I agree it's disappointing that the .NET team wasn't able to get WASMGC's MVP to support what .NET needs. However, this change doesn't move .NET further away from WASMGC.
There are a couple unicorns like Figma and that is it.
For performance, WebGPU compute is the much better option, and not everyone hates JavaScript.
Whereas on the server it is basically a bunch of companies trying to replicate application servers, been there done that.
It has taken off in the browser. If you've ever used Google Sheets you've used WebAssembly.
Amazon switched their Prime Video app from JavaScript to WebAssembly for double the performance. Is streaming video a niche use case?
Lots of people are building Blazor applications:
https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blaz...
> not most people aren’t using a high performance spreadsheet
A spreadsheet making use of WebAssembly couldn't be deployed to the browser if WebAssembly hadn't taken off in browsers.
Practical realities contradict pjmlp's preconceptions.
Microsoft would wish Blazor would take off like React and Angular, in reality it is seldom used outside .NET shops intranets in a way similar to WebForms.
So, in other words, widely used in lots and lots of deployments.
WASM-GC will remove a lot of those and make quite a few languages possible as almost first-class DOM-manipulating languages (there'll still be kludges, as the objects are opaque, but they'll be far less bad since they can at least avoid external ID mappings and dual-GC systems that behave leakily like old IE ref-counts did).
You still usually need to install plenty of moving pieces to produce a wasm file out of the "place language here", write boilerplate initialisation code, and debugging is miserable, only for a few folks to avoid writing JavaScript.
Counted out over N languages, we should see something decent land before long.
Won't this potentially cause stack overflows in programs that ran fine in older versions though?
An ArrayList<Float> is a list of pointers though.
Eventually, value classes might close the gap; they're finally available as an EA build.
Doing a Python 3 would mean no one was going to adopt it.
Yes, it is a long process.
Some of the JEPs in the last versions are the initial baby steps for integration.
Free for some (most?) use cases these days.
Basically, the enterprise edition does not exist anymore, as it became "Oracle GraalVM" with a new license.
But 64 bits of virtual address space is large enough that you can keep the stacks far enough apart that even for pretty extreme numbers of threads you'll run out of physical memory before they start clashing. So you can always just allocate more physical pages to the stack as needed, similar to the heap.
I don't know if the .net runtime actually does this, though.
You set the (max) stack size once when you create the thread and you can’t increase the (max) size after that.
Processes see a virtual address space that is handled by the OS, so you would have to involve the OS if you needed to add to the stack size dynamically.
Many userspace apps already do custom stack handling, it's how things like green threads work. And many non-native runtimes like .net already have custom handling for their managed stacks, as they often have different requirements and limitations to the "native" stack, and often incompatible formats and trying to isolate from possible bugs means there's less benefit to sharing the same stack with "native" code.
That's certainly a possibility, and one that's come up before even between .net framework things migrated to .net core. Though usually it's a sign that something is awry in the first place. Thankfully the default stack sizes can be overridden with config or environment variables.
Preparing for the .NET 10 GC - https://news.ycombinator.com/item?id=45358527 - Sept 2025 (60 comments)
TieredCompilation, on the other hand, caused a bunch of esoteric errors.
Performance Improvements in .NET 10
https://devblogs.microsoft.com/dotnet/performance-improvemen...
It can reveal secret cull conditions for long-lived generational objects. If that side branch is hit in the hot loop, all long-term objects of that generation are going to get culled in a single stroke… so bundle them and keep them bundled. And now they've started using it to at least detect objects that do not escape lambdas. So it's all stack, no more GC involved at all. It's almost at the static-allocation thing we do for games. If the model proves that every hot loop allocates 5 objects that live until an external event occurs: static allocation, and it's done.
Great start. But you could do so much more than that with this, if you write a custom JIT whose goal is not just to detect and byte-compile hot loops, but to build a complete multi-lifetime model of object generation.
Outside of the mentioned, things like detecting fine-grained lifetimes are very, very hard, and the mentioned escape analysis is an optimization that needs to be capped to avoid the halting problem. (1)
A fairly deep coverage of GC behaviours can be found in Bacon's "A Unified Theory of Garbage Collection", where the authors theoretically connect previous work on tracing collectors and reference-counting systems and show that optimized variations often exist in a design space between them. (2)
1: https://en.wikipedia.org/wiki/Halting_problem
2: https://web.eecs.umich.edu/~weimerw/2008-415/reading/bacon-g...
We first built a proof of concept with 15 basic tasks to implement in both MAUI and Flutter. Things like authentication, navigation, API calls, localization, lists, map, etc. In MAUI, everything felt heavier than it should've been. Tooling issues, overkill patterns, outdated docs, and a lot of small frustrations that added up. In Flutter, we got the same features done much faster and everything just worked. The whole experience was just nicer. The documentation, the community, the developer experience... everything is better.
I love C#, that's what we use for our backend, but for mobile developement Flutter was the clear winner. We launched the new app a year ago and couldn't be happier with our decision.
I'd say Uno Platform[0] is a better alternative to Flutter for those who do not care much about the native look: it replicates WinUI API on iOS, Mac, Android, and Linux, while also providing access to the whole mature .NET ecosystem – something Flutter can't match for being so new and niche.
I'm not a Flutter dev and I'm very interested to hear how it doesn't play well with Liquid Glass.
At best, Flutter can implement some shaders for the glassy look of the controls, but something as basic as the Liquid Glass tab bar would require a huge effort to replicate inside Flutter, while in MAUI and RN it's an automatic update.
Flutter will always have multiple advantages over React Native (and even native toolkits themselves) in terms of upgradability: you can do 6 months of updates with only 30 mins of work and be sure it 100% works everywhere.
The quality of the testing toolkit is also something which is still unmatched elsewhere and makes a big difference on the app reliability.
Personally I've done a 50k+ line project in Flutter and I didn't hit any of these. There's been a few issues for sure but nowhere near what I experienced with React Native (and don't start me on native itself)
https://flutter.dev/multi-platform/ios
https://survey.stackoverflow.co/2024/technology#1-other-fram...
For a "28% of new iOS apps", the Flutter subreddit is a ghost town with regular "is it dying? should I pick RN?" posts. I just don't buy the numbers because I'm myself in a rather stagnant cross-platform ecosystem, so I know this vibe well.
If I ever leave .NET, no way I'd move to something like Flutter. Even Kotlin Multiplatform is more promising concept-wise. LLMs are changing cross-platform development and Flutter's strong sides are not that important anymore, while its weak sides are critical.
Flutter is starting to become the default framework for building apps, in Asia at least.
And I disagree about the LLMs: Flutter provides strong standardisation and strong typing, which makes it an ideal target for LLMs.
As for Kotlin Multiplatform, maybe it will take off similarly as Flutter but that hasn't happened yet.
In the past it was rather painful for a solo dev to do them twice, but now Claude Code one-shots them. I just do the iOS version and tell it to repeat it on Android – in many cases 80% is done instantly.
Just in case, I have an app with half a million installs on both stores that has been running perfectly since 2018 using this ".NET with native UIs" approach.
But it's curious that it's used widely with game engines (Unity, Godot), but has a pretty weak and fractured UI landscape.
Sadly it's not cross-platform, which is a benefit of MAUI.
[1]: https://learn.microsoft.com/en-us/aspnet/core/blazor/hybrid/...
If Microsoft aren't using it themselves in any real capacity, then it's not a good bet IMO.
https://medium.com/@ocoanet/improving-net-disruptor-performa...
[0] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
https://medium.com/@jadsarmo/why-we-chose-java-for-our-high-...
What else would we call it? It is what it is.
I believe there are some differences between what .NET does and what mainstream Java does. For instance, objects can be stack allocated even if they can't be turned into collections of scalars. This allows the JIT to stack allocate small known-sized arrays.
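A sketch of the kind of code that can benefit, assuming the array provably never escapes:

    static int SumSmall()
    {
        // Small, fixed-size, and never escapes this method: with escape
        // analysis the JIT can place it on the stack instead of the GC heap.
        int[] buffer = { 1, 2, 3, 4 };
        int sum = 0;
        foreach (var x in buffer)
            sum += x;
        return sum;
    }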
https://learn.microsoft.com/en-us/dotnet/core/runtime-config...
LINQ does a lot of work behind the scenes to optimize for speed and reduce allocations. An example can be found here [1]. These optimizations are mostly about reducing the various LINQ patterns into simple for loops.
[1] https://github.com/dotnet/runtime/blob/main/src/libraries/Sy...
This is hugely expensive compared to just a for loop. With this update it seems like the JIT can do escape analysis to stack-allocate the closure object, and the delegate as well (it could devirtualize calls even before that). It seems like it has everything to optimize away the whole LINQ overhead, though I'm not sure what happens in practice.
It'd be neat since that was a major argument against actually using LINQ in perf-sensitive code.
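Concretely, the kind of hidden cost in question (a sketch; exact allocation behavior varies by runtime version):

    int[] numbers = { 1, 50, 100 };
    int threshold = 42;

    // Capturing `threshold` allocates a closure object plus a delegate, and
    // Where() allocates an enumerator: heap garbage today, and exactly the
    // sort of thing escape analysis could keep on the stack.
    int total = numbers.Where(n => n > threshold).Sum();

    // The equivalent loop allocates nothing:
    int total2 = 0;
    foreach (var n in numbers)
        if (n > threshold) total2 += n;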
Linq contains a goodly number of hand-crafted special-case enumerators for common collections, or collections with certain interfaces, or span projections that are really nice optimizations but can complicate things for the JIT.
Some details here if you're curious: https://github.com/dotnet/runtime/blob/main/docs/design/core...
Edit: Looks like you are allowed to benchmark the runtime now. I was able to locate an ancient EULA which forbade this (see section 3.4): https://download.microsoft.com/documents/useterms/visual%20s...
> You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
> Publishing SQL Server benchmarks without prior written approval from Microsoft is generally prohibited by the standard licensing agreements.
[1]: https://docs.oracle.com/en/industries/food-beverage/micros-w...
It's because you aren't looking at 20 year old EULA's
>3.4 Benchmark Testing. The Software may contain the Microsoft .NET Framework. You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
This person is likely not familiar with the history of .NET Framework and .NET Core, because they decided a long time ago they were never going to use it.
The cross-platform version is mainstream, and this isn't new any more.
.NET on Linux works fine for services. Our .NET services are deployed to Linux hosts, and it's completely unremarkable.
It's fine if you stick to JetBrains and pay for their IDE (or do non-commercial projects only), and either work in a shop which isn't closely tied to VS (basically non-existent in my area), or work by yourself.
> The development and deployment tooling is so closely tied to VS that you can't really not use it.
Development tooling: It's 50-50. Some use Visual Studio, some use Rider. It's fine. The only drawback is that VS Live Share and the JetBrains equivalent don't interoperate.
Deployment tooling: There is deployment tooling tied to the IDE? No one uses that; it seems like a poor idea. I see automated build/test/deploy pipelines in GitHub Actions and in Octopus Deploy. TeamCity still gets used, I guess.
It's true though that the most common development OS is Windows by far (with Mac as second) and the most common deployment target by far is Linux.
However the fact that there is close to no friction in this dev vs deploy changeover means that the cross-platform stuff just works. At least for server-side things such as HTTP request and queue message processing. I know that the GUI toolkit story is more complex and difficult, but I don't have to deal with it at all so I don't have details or recommendations.
VS has the “Publish” functionality for direct deployment to targets. It works well for doing that and nothing else. As you said, CI/CD keeps deployment IDE agnostic and has far more capabilities (e.g. Azure DevOps, GitHub Actions).
I worked on a MUD on Linux right after high school for a while. I spent most of the time on the school's BSDi server prior to that, though.
Then I went Java, and as they got less permissive and .NET got more permissive, I switched at some point. I've really loved the direction C# has gone, merging in functional programming idioms, and have stuck with it for most personal projects, but I am currently learning GDScript for some reason, even though Godot has C# as an option.
The rest of the ecosystem is "more permissive" than .NET since there are far more FOSS libraries for every task under the sun (which don't routinely go commercial without warnings), and fully open / really cross-platform development tooling, including proper IDEs.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
But also read these 400 articles to understand our GC. If you are lucky, we will let you change 3 settings.
https://learn.microsoft.com/en-us/dotnet/core/runtime-config...
https://github.com/dotnet/runtime/blob/main/src/coreclr/gc/g...
It pretty much never gets in your way for probably 98% of developers.