Compile-time function execution and functions with constant arguments were introduced in D in 2007, and resulted in many other languages adopting something similar.
I feel like I can write powerful code in any language, but the goal is to write code for a framework that is most future proof, so that you can maintain modular stuff for decades.
C/C++ has been the default answer for its omnipresent support. It feels like Zig will be able to match that.
I like Zig a lot, but long-term maintainability and modularity is one of its weakest points IMHO.
Zig is hostile to encapsulation. You cannot make struct members private: https://github.com/ziglang/zig/issues/9909#issuecomment-9426...
Key quote:
> The idea of private fields and getter/setter methods was popularized by Java, but it is an anti-pattern. Fields are there; they exist. They are the data that underpins any abstraction. My recommendation is to name fields carefully and leave them as part of the public API, carefully documenting what they do.
You cannot reasonably form API contracts (which are the foundation of software modularity) unless you can hide the internal representation. You need to be able to change the internal representation without breaking users.
Zig's position is that there should be no such thing as internal representation; you should publicly expose, document, and guarantee the behavior of your representation to all users.
I hope Zig reverses this decision someday and supports private fields.
> You cannot reasonably form API contracts (which are the foundation of software modularity) unless you can hide the internal representation. You need to be able to change the internal representation without breaking users.
You never need to hide internal representations to form an "API contract". That doesn't even make sense. If you need to be able to change the internal representation without breaking user code, you're looking for opaque pointers, which have been the solution to this problem since at least C89, I assume earlier.
If you change your data structures or the procedures that operate on them, you're almost certain to break someone's code somewhere, regardless of whether or not you hide the implementation.
Very similar experience here. Also just recently I really _had_ to use and extend the "internal" part of a legacy library. So potentially days or more than a week of work turned into a couple of hours.
Take something as simple as a vector (e.g. std::vector in C++). If a user directly sets the size or capacity, calls to methods like push_back() will behave incorrectly, or may even crash.
Opaque pointers are one way of hiding representation, but they also eliminate the possibility of inlining, unless LTO is in use. If you have members that need to be accessible in inline functions, it's impossible to use opaque pointers.
There is certainly a risk of "implicit interfaces" (Hyrum's Law), where users break even when you're changing the internals, but we can lessen the risk by encapsulating data structures as much as possible. There are other strategies for lessening this risk, like randomizing unspecified behaviors, so that people cannot take dependencies on behaviors that are not guaranteed.
You can, just not in the "strictly technical" sense. You add a "warranty void if these fields are touched" documentation string.
I write a lot of rust. By default, fields are private. It's rare to see a field in my code that omits the `pub` prefix. I sometimes start with private because I forget `pub`, but inevitably I need to make it public!
I like that, in principle, they're there, but in practice `pub` feels like syntactic clutter, because it's on all my fields! I think this is because I use structs as abstract bags of data, vice patterns with getters/setters.
When using libraries that rely on private fields, I sometimes have to fork them so I can get at the data. If they do provide a way, it makes the (auto-generated) docs less usable than if the fields were public.
I suspect this might come down to the perspective of application/firmware development vice lib development. The few times I do use private fields have been in libs, e.g. if you have a matrix you generate from pub fields, and similar.
> You cannot reasonably form API contracts (...) unless you can hide the internal representation.
Yes you can, by communicating the intended use with comments/docstrings, examples, etc.
One thing I learned from the Clojure world is to have a separate namespace/package, or just a section of code, that represents an API that is well documented, nice to use, and more importantly stable. That's really all that is needed.
(Also, there are cases where you actually need to use a thing in a way that was not intended. That obviously comes with risk, but when you need it, you're _extremely_ glad that you can.)
Then I noticed people were using them, preventing me from making important changes. So I created a pseudo-"private" facility using macros, where people had to write FOOLIB_PRIVATE(var) to get at the internal var.
Then I noticed (I kid you not) people started writing FOOLIB_PRIVATE(var) in their own code. Completely circumventing my attempt to hide these internal members. And I can't entirely blame them, they were trying to get something done, and they felt it was the fastest way to do it.
After this experience, I consider it an absolute requirement to have a real "private" struct member facility in a language.
I respect Andrew and I think he's done a hell of a job with Zig. I also understand the concern with the Java precedent and lots of wordy getters/setters around trivial variables. But I feel like Rust (and even C++) is a great counterexample that private struct variables can be done in a reasonable way. Most of the time there's no need to have getters/setters for every individual struct member.
If it’s in an internal monorepo, this should be super easy to fix using grep.
Honestly it sounds like a great opportunity to improve your API. If people are going out of their way to access something that you consider private, it’s probably because your public APIs aren’t covering some use case that people care about. That or you need better documentation. Sometimes even a comment helps:
int _foo; // private. See getFoo() to read.
I get that it’s annoying, but finding and fixing internal code like this should be a 15-minute job. If you hesitate to change it because you worry about users relying on it anyway, you are hurting the fraction of your users who are not using it.
Everyone is brainstorming ways to work around Zig's lack of "private". But nobody has a good answer for why Zig can't just add "private" to the language. If we agree that users shouldn't touch the private variables, why not just have the language enforce it?
Companies with monorepos could easily just ban the annotation. OSS projects could easily close any user complaint if the repro requires the annotation.
This seems like a great compromise to me. It would let you unambiguously mark which parts of the api are private, in a machine checkable way, which is undoubtedly better than putting it into comments. But it would offer an escape hatch for people who don’t mind voiding their warranty.
You can't blame them, but they can't blame you if you break their code.
This way users can get to them if they really need to, say for a crucial bug fix, but they're also clearly an implementation detail so you're free to change it without users getting surprised when things break etc.
I agree with this part with no reservations. The idea that getters/setters provide any sort of abstraction or encapsulation at all is sheer nonsense, and is at the root of many of the absurdities you see in Java.
The issue, of course, is that Zig throws out the baby with the bath water. If I want, say, my linked list to have an O(1) length operation, i need to maintain a length field, but the invariant that list.length actually lines up with the length of the list is something that all of the other operations need to maintain. Having that field be writable from the outside is just begging for mistakes. All it takes is list.length = 0 instead of list.length == 0 to screw things up badly.
Python is a good counterexample IMHO: the simple convention of having private fields prefixed with _/__ is enough of a deterrent; you don't need language support.
If you really need to you can always use opaque pointers for the REALLY critical public APIs.
My experience is that users who are trying to get work done will bypass every speed bump you put in the way and just access your internals directly.
If you "just" rely on them not to do that, then your internals will effectively be frozen forever.
I seriously do not get this take. People use reflection and all kinds of hacks to get at internals, this should not stop you from changing said internals.
There will always be users who do the wrong thing.
I can't break their app without them complaining to my boss's boss's boss who will take their side because their app creates money for the company.
Having actual private fields doesn't 100% prevent this scenario, but it makes it less likely to sneak through code review before it becomes business-critical.
That doesn’t save you in languages that support reflection, but it will with zig. Inside a module, all private does is declare intent.
In languages with code inheritance, I think inheritance across module boundaries is now widely viewed as the anti-pattern that it is.
Or they will be broken when you change them and they upgrade. The JavaScript ecosystem uses this convention and generally if a field is prefixed by an underscore and/or documented as being non-public then you can expect to break in future versions (and this happens frequently in practice).
Not necessarily saying that's better, but it is another choice that's available.
It’s antithetical to what Zig is all about to hide the implementation. The whole idea is you can read the entire program without having to jump through abstractions 10 layers deep.
In Zig (and plenty of other non-OOP languages) modules are the mechanism for encapsulation, not structs. E.g. don't make the public/private boundary inside a struct, that's a silly thing anyway if you think about it - why would one ever hand out data to a module user which is not public - just to tunnel it back into that same module later?
Instead keep your private data and code inside a module by not declaring it public, or alternatively: don't try to carry over bad ideas from C++/Java, sometimes it's better to unlearn things ;)
There are lots of use cases for this exact pattern. An acceleration structure to speed up searching complex geometry. The internal state of a streaming parser. A lazy cache of an expensive property that has a convenient accessor. An unsafe pointer that the struct provides consistent, threadsafe access patterns for. I've used this pattern for all these things, and there are many more uses for encapsulation. It's not just an OO concern.
C++ has the PassKey idiom that allows you to whitelist what objects are allowed to access each part of the public API at compile-time. This is a significant improvement but a pain to manage for complex whitelists because the language wasn't designed with this in mind. C++26 has added language features specifically to make this idiom scale more naturally.
I'd love to see more explicit ACLs on APIs as a general programming language feature.
In that I agree, but per-member public/private/protected is a dead end.
I'd like a high level language which explores organizing all application data in a single, globally accessible nested struct and filesystem-like access rights into 'paths' of this global struct (read-only, read-write or opaque) for specific parts of the code.
Probably a bit too radical to ever become mainstream (because there's still this "global state == bad" meme - it doesn't have to be evil with proper access control - and it would radically simplify a lot of programs because you don't need to control access by passing 'secret pointers' around).
In other words, we can at best form API contracts in C++ that work 99% of the time.
Unless the user only ever holds an opaque pointer, just changing the sizeof() is a breaking change, even if the fields in question are hidden. A simple doc comment indicating that "fields starting with _ are not guaranteed to be minor-version-stable" or somesuch is a perfectly "reasonable" API.
It does feel regressive to me. I've seen people easily reach for underscored fields in Python. We can discourage them if the code is reviewed, but then again there's also people who write everything without underscores.
Not to mention just about every language offers runtime reflection that lets you do bad stuff.
IMO, the Python adage of "We are all consenting adults here" applies.
If you can trust that downstream users of your api won’t misuse private-by-convention fields (or won’t punish you for changing them), it’s not a problem. That works a lot of the time: You can trust yourself. You can usually trust your team. In the opensource world, you can just break compatibility with no repercussions.
But yes, sometimes that trust isn’t there. Sometimes you have customers who will misuse your code and blame you for it. But that isn’t the case for all code. Or even most code.
Such a smart guy though, so I'm hesitant to say he's wrong. And maybe in the embedded space he's not, and if that's all Zig is for then fine. But internal code is a necessity of abstraction. I'm not saying it has to be C++ levels of abstraction. But there is a line between interface and implementation that ought to be kept. C headers are nearly perfect for this, letting you hide and rename and recast stuff differently than your .c file has, allowing you to change how stuff works internally.
Imagine if the Lua team wasn't free to make it significantly faster in recent 5.4 releases because they were tied to every internal field. We all benefited from their freedom to change how stuff works inside. Sorry Andrew but you're wrong here. Or at least you were 4 years ago. Hopefully you've changed your mind since.
I just fundamentally disagree with this. Not having "proper" private methods/members has not once become a problem for me, but overuse of them absolutely has.
Can’t you do this in Zig with modules? I thought that’s what the ‘pub’ keyword was for.
You can’t have private fields in a struct that’s publicly available but the same is sort of true in C too. OO style encapsulation isn’t the only way to skin a cat, or to send the cat a message to skin itself as the case may be.
It was an interesting experience and I was pleasantly surprised by the maturity of Zig. Many things worked out of the box and I could even debug a strange bug using ancient GDB. Like you, I’m sold on Zig too.
I wrote about it here: https://news.ycombinator.com/item?id=44211041
Using indices isn't bad for performance. At the very least, it can massively cut down on memory usage (which is in turn good for performance) if you can use 16-bit or 32-bit indices instead of full 64-bit pointers.
> "unsafe" (which is strictly more dangerous than even C, never mind Zig)
Unsafe Rust is much safer than C.
The only way I can imagine unsafe Rust being more dangerous than C is that you need to keep exception safety in mind in Rust, but not in C.
In Rust, it's not just using an invalid reference that causes UB, but their very creation, even if temporary. For example, since references must always be aligned, the compiler can assume the pointer they were created from was also aligned, and may therefore treat the low bits of that pointer as zero and ignore them.
And usually the point of unsafe is to make safe wrappers, so unsafe Rust makes or interacts with safe shared/mutable references pretty often.
No it's not? The Rust borrow checker, the backbone of Rust's memory safety model, doesn't stop working when you drop into an unsafe block. From the Rust Book:
>To switch to unsafe Rust, use the unsafe keyword and then start a new block that holds the unsafe code. You can take five actions in unsafe Rust that you can’t in safe Rust, which we call unsafe superpowers. Those superpowers include the ability to:
Dereference a raw pointer
Call an unsafe function or method
Access or modify a mutable static variable
Implement an unsafe trait
Access fields of a union
It’s important to understand that unsafe doesn’t turn off the borrow checker or disable any of Rust’s other safety checks: if you use a reference in unsafe code, it will still be checked. The unsafe keyword only gives you access to these five features that are then not checked by the compiler for memory safety. You’ll still get some degree of safety inside of an unsafe block.

If you need to make everything in-house, this is the experience. For the majority, though, the moment you require those things you reach for a crate that solves those problems.
For those saying unsafe Rust is strictly safer than C, you're overlooking Rust's extremely strict invariants that users must uphold. These are much stricter than C, and they're extremely easy to accidentally break in unsafe Rust. Breaking them in unsafe Rust is instant UB, even before leaving the unsafe context.
This article has a decent summary in this particular section: https://zackoverflow.dev/writing/unsafe-rust-vs-zig/#unsafe-...
Imo, the more annoying part is dealing with exception safety. You need to ensure that your data structures are all in a valid state if any of your code (especially code in an unsafe block) panics, and it's easy to forget to ensure that.
Laughs in graphics programmer. You end up using indices to track data in buffers all the time when working with graphics APIs.
Um, what? Unsafe Rust code still has a lot more safety checks applied than C.
>It’s important to understand that unsafe doesn’t turn off the borrow checker or disable any of Rust’s other safety checks: if you use a reference in unsafe code, it will still be checked. The unsafe keyword only gives you access to these five features that are then not checked by the compiler for memory safety. You’ll still get some degree of safety inside of an unsafe block.
Unsafe rust still has a bunch of checks C doesn’t have, and using indices into vectors is common code in high performance code (including Zig!)
Rust is a general purpose language that can do systems programming. Zig is a systems programming language.
(Safety Coomers please don't downvote)
Actually interfacing with idiomatic C APIs provided by an OS is something else entirely. You can see this when you compare the Rust ecosystem to Zig; e.g. Zig has a fantastic io_uring library in the std lib, whereas Rust has a few scattered crates, none of which come close to Zig's ease of use and integration.
One thing I'd like to see is an OS built with rust that could provide its own rusty interface to kernel stuff.
Zig's is in the standard library. From the commits it was started by Joran from Tigerbeetle, and now maintained by mlugg who is a very talented zig programmer.
https://ziglang.org/documentation/master/std/#std.os.linux.I...
The popular Rust one is tokio's io-uring crate, which 1) relies on libc (the Zig one just uses their own stdlib, which wraps syscalls) and 2) requires all sorts of glue between safe and unsafe Rust.
github.com/tokio-rs/io-uring
I know it's written in rust, but I am talking more specifically than that.
Rust makes doing the wrong thing hard, Zig makes doing the right thing easy.
Off topic - One tool built on top of Zig that I really really admire is bun.
I cannot tell how much simpler my life is after using bun.
Similar things can be said for uv which is built in Rust.
Rust is like a highly opinionated modern C++
Go is like a highly opinionated pre-modern C with GC
Language limitations are more on the SDK side of things. SDKs are available under NDAs and even publicly available APIs are often proprietary. "Real" test hardware (as in developer kits) is expensive and subject to NDAs too.
If you don't pick the language the native SDK comes with (which is often C(++)), you'll have to write the language wrappers yourself, because practically no free, open, upstream project can maintain those bindings for you. Alternatively, you can pay a company that specializes in the process, like the developers behind Godot will tell you to do: https://docs.godotengine.org/en/stable/tutorials/platform/co...
I think Zig's easy C interop will make integration for Zig into gamedev quite attractive, but as the compiler still has bugs and the language itself is ever changing, I don't think any big companies will start developing games in Zig until the language stabilizes. Maybe some indie devs will use it, but it's still a risk to take.
You're not really going to make something better than C. If you try, it will most likely become C++ anyway. But do try anyway. Rust and Zig are evidence that we still dream that we can do better than C and C++.
Anyway I'm gonna go learn C++.
Creating a better C successor than C++ is really not a high bar.
I don't doubt that compilers occasionally break language specs, but in that case Clang is correct, at least for C11 and later. From C11:
> An iteration statement whose controlling expression is not a constant expression, that performs no input/output operations, does not access volatile objects, and performs no synchronization or atomic operations in its body, controlling expression, or (in the case of a for statement) its expression-3, may be assumed by the implementation to terminate.
Thus in C the trivial infinite loop for (;;); is supposed to actually compile to an infinite loop, as it should with Rust's less opaque loop {} -- however LLVM is built by people who don't always remember they're not writing a C++ compiler, so Rust ran into places where they're like "infinite loop please" and LLVM says "Aha, C++ says those never happen, optimising accordingly" but er... that's the wrong language.
Worth mentioning that LLVM 12 added first-class support for infinite loops without guaranteed forward progress, allowing this to be fixed: https://github.com/rust-lang/rust/issues/28728
Same as with GCC,

    while (i <= x) {
        // ...
    }

just needs a slight transformation to

    while (1) {
        if (i > x)
            break;
        // ...
    }

and C11's special permission no longer applies, since the controlling expression has become constant.

Analyses and optimizations in compiler backends often normalize those two loops to a common representation (e.g. a control-flow graph) at some point, so whatever treatment sees them differently must happen early on.
It is no accident that there is ongoing discussion that clang should get its own IR, just like it happens with the other frontends, instead of spewing LLVM IR directly into the next phase.
I've avoided such manual specification of aliasing because:
1. few people understand it
2. using it erroneously can result in baffling bugs in your code
Do note that your linked godbolt code actually demonstrates one of the two sub-par examples though.
For complicated things, I haven't really understood the advantage compared to simply running a program at build time.
> I haven't really understood the advantage compared to simply running a program at build time.
Since both comptime and runtime code can be mixed, this gives you a lot of safety and control. Comptime in Zig emulates the target architecture, which makes things like cross-compilation simply work. For a program that generates code, you have to run that generator on the system that's compiling, and the generator program itself has to be aware of the target it's generating code for.
I do not understand your criticism of [N]. This gives the compiler more information and catches errors. This is a good thing! Who could be annoyed by this: https://godbolt.org/z/EeadKhrE8 (of course, nowadays you could also define a decent span type in C)
The cross-compilation argument has some merit, but not enough to warrant the additional complexity IMHO. Compile-time computation will also have annoying limitations and makes programs more difficult to understand. I feel sorry for everybody who needs to maintain complex compile time code generation. Zig certainly does it better than C++ but still..
It only does the sane thing in GCC; in other compilers it does nothing, and since it's very underspecified it's rarely used in any C projects. It's a shame Dennis Ritchie's fat pointers/slices proposal was not accepted.
> warrant the additional complexity IMHO
In zig case the comptime reduces complexity, because it is simply zig. It's used to implement generics, you can call zig code compile time, create and return types.
This old talk from andrew really hammers in how zig is evolution of C: https://www.youtube.com/watch?v=Gv2I7qTux7g
Also, in Zig it does not reduce complexity but adds to it, by creating a distinction between compile time and runtime. It is only lower complexity compared to other implementations of generics, which are even worse.
C only seems cleaner and simpler if you already know it well.
But if there were zig would prob be simpler since it uses one language that seamlessly weaves comptime and runtime together
In this case both would (or could) give the "perfect" code without any explicit comptime programming.
> I just realized at some point that it isn't actually all that useful.
Except, again, C code often uses macros, which is a more cumbersome mechanism than comptime (and possibly less powerful; see, e.g. how Zig implements printf).
I agree that comptime isn't necessarily very useful for micro optimisation, but that's not what it's for. Being able to shift computations in time is useful for more "algorithmic" macro optimisations, e.g. parsing things at compile time or generating de/serialization code.
Once we have this, the wide pointer could just be introduced as syntactic sugar for this. char (buf)[:] = ..
Personally, I would want the dependent structure type first as it is more powerful and low-level with no need to decide on a new ABI.
update: clang is even a bit nicer https://godbolt.org/z/b99s1rMzh although both compile it to a constant if the other argument is known at compile time. In light of this, the Zig solution does not impress me much: https://godbolt.org/z/1dacacfzc
Is this line really true? I feel like expressing intent isn't really a factor in the high level / low level spectrum. If anything, more ways of expressing intent in more detail should contribute towards them being higher level.
Something like purchase.calculate_tax().await.map_err(|e| TaxCalculationError { source: e })?; is full of intent, but you have no idea what kind of machine code you're going to end up with.
In yet other words, tautology.
I don't think this is a good comparison. You're telling the compiler for Zig and Rust to pick something very modern to target, while I don't think V8 does the same. Optimizing JITs do actually know how to vectorize if the circumstances permit it.
Also, fwiw, most modern languages will do the same optimization you do with strings. Here's C++ for example: https://godbolt.org/z/TM5qdbTqh
Now, one rarely uses typed arrays in practice because they're pretty heavy to initialize, so they're only worth it if one allocates a large typed array once and reuses it a lot after that, so again, fair enough! One other detail does annoy me a little bit: the article says the example JS code is pretty bloated, but I bet that a big part of that is that the JS JIT can't guarantee that 65536 equals the length of the two arrays, so it will likely insert a guard. But nobody would write a for loop that way anyway; they'd write it as i < x.length, for which the JIT does optimize at least one array check away. I admit that this is nitpicking though.
I love Zig too, but this just sounds wrong :)
For instance, C is clearly too sloppy in many corners, but Zig might (currently) swing the pendulum a bit too far into the opposite direction and require too much 'annotation noise', especially when it comes to explicit integer casting in math expressions (I wrote about that a bit here: https://floooh.github.io/2024/08/24/zig-and-emulators.html).
When it comes to performance: IME when Zig code is faster than similar C code then it is usually because of Zig's more aggressive LLVM optimization settings (e.g. Zig compiles with -march=native and does whole-program-optimization by default, since all Zig code in a project is compiled as a single compilation unit). Pretty much all 'tricks' like using unreachable as optimization hints are also possible in C, although sometimes only via non-standard language extensions.
C compilers (especially Clang) are also very aggressive about constant folding, and can reduce large swaths of constant-foldable code even with deep callstacks, so that in the end there often isn't much of a difference to Zig's comptime when it comes to codegen (the good thing about comptime is of course that it will not silently fall back to runtime code - and non-comptime code is still of course subject to the same constant-folding optimizations as in C - e.g. if a "pure" non-comptime function is called with constant args, the compiler will still replace the function call with its result).
TL;DR: if your C code runs slower than your Zig code, check your C compiler settings. After all, the optimization heavylifting all happens down in LLVM :)
fn signExtendCast(comptime T: type, x: anytype) T {
const ST = std.meta.Int(.signed, @bitSizeOf(T));
const SX = std.meta.Int(.signed, @bitSizeOf(@TypeOf(x)));
return @bitCast(@as(ST, @as(SX, @bitCast(x))));
}
export fn addi8(addr: u16, offset: u8) u16 {
return addr +% signExtendCast(u16, offset);
}
This compiles to the same assembly, is reusable, and makes the intent clear.

The core problem is that you're changing the semantics of that integer as you change types, and if that happens automatically then the compiler can't protect you from typos, vibe-coded defects, or any of the other ways kids are generating almost-correct code nowadays. You can mitigate that with other coding patterns (like requiring type parameters in any potentially unsafe arithmetic helper functions and banning builtins which aren't wrapped that way), but under the swiss cheese model of error handling it still massively increases your risky surface area.
The issue is more obvious on the input side of that expression and with a different mask. E.g.:
const a: u64 = 42314;
const even_mask: u4 = 0b0101;
a & even_mask;
Should `a` be lowered to a u4 for the computation, or `even_mask` promoted, or, however we handle the internals, should the result sometimes be lowered to a u4? Arguably not. The mask is designed to extract even bit indices, but we're definitely going to extract only the low bits. The only safe instance of implicit conversion in this pattern is when you intend to only extract the low bits for some purpose.

What if `even_mask` is instead a comptime_int? You still have the same issue. That was a poor use of comptime ints, since now that implicit conversion will always happen, and you lose your compiler errors when you misuse that constant.
Back to your proposal of something that should always be safe: implicitly lowering `a & 15` to a u4. The danger is in using it outside its intended context, and given that we're working with primitive integers you'll likely have a lot of functions floating around capable of handling the result incorrectly, so you really want to at least use the _right_ integer type to have a little type safety for the problem.
For a concrete example, code like that (able to be implicitly lowered because of information obvious to the compiler) is often used in fixed-point libraries. The fixed-point library though does those sorts of operations with the express purpose of having zeroed bits in a wide type to be able to execute operations without loss of precision (the choice of what to do for the final coalescing of those operations when precision is lost being a meaningful design choice, but it's irrelevant right this second). If you're about to do any nontrivial arithmetic on the result of that masking, you don't want to accidentally put it in a helper function with a u4 argument, but with implicit lowering that's something that has no guardrails. It requires the programmer to make zero mistakes.
That example might seem a little contrived, and this isn't something you'll run into every day, but every nontrivial project I've worked on has had _something_ like that, where implicit narrowing is extremely dangerous and also extremely easy to accidentally do.
What about the verbosity? IMO the point of verbosity is to draw your attention to code that you should be paying attention to. If you're in a module where implicit casting would be totally fine, then make a local helper function with a short name to do the thing you want. Having an unsafe thing be noisy by default feels about right though.
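For what the "local helper with a short name" could look like: a sketch, with `lo4` being a made-up name for a module-local wrapper around the noisy builtin:

```zig
const std = @import("std");

// Module-local helper: in a module where lowering to u4 is known to be
// safe, wrap the explicit cast once and reuse it under a short name.
fn lo4(x: u64) u4 {
    return @truncate(x); // keep only the low 4 bits, deliberately
}

test "lo4" {
    try std.testing.expectEqual(@as(u4, 0b1010), lo4(0xFFAA & 0xF));
}
```

The noise stays at the one place where the decision was made, and every call site reads cleanly.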
I agree with everything flohofwoe said, especially this: "C is clearly too sloppy in many corners, but Zig might (currently) swing the pendulum a bit too far into the opposite direction and require too much 'annotation noise', especially when it comes to explicit integer casting in math expressions ".
Seems like I will keep using Odin and give C3 a try (still have yet to!).
Edit: I quite dislike that the downvote is used for "I disagree, I love Zig". sighs. Look at any Zig projects, it is full of annotation noise. I would not want to work with a language like that. You might, that is cool. Good for you.
While Zig fixes some of these issues, the amount of @ feels like being back in Objective-C land, and yeah, there are too many uses of dots and stars.
Then again, I am one of those that actually enjoys using C++, despite all its warts and the ways of WG21 nowadays.
I also dislike the approach with source code only libraries and how importing them feels like being back in JavaScript CommonJS land.
Odin and C3 look interesting, the issue is always what is going to be the killer project, that makes reaching for those alternatives unavoidable.
I might not be a language XYZ cheerleader, but occasionally I do have to just get my hands dirty and do the needful for a happy customer, regardless of my point of view on XYZ.
Basically it’s a better C/C++ for me and sits between C and C++ in complexity.
I tried Zig for a while years ago but found the casts and other annotations frustrating. And at the time the language was pretty unstable in how it applied those rules. Plus I’ve never found a use for custom allocators, even on embedded.
"." = the "namespace" (in this case an enum) is implied, i.e. the compiler can derive it from the function signature / type.
"@" = a language built-in.
C++'s `__builtin_` (or arguably `_`/`__`) vs Zig's `@`
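A tiny sketch of both sigils in one place (the `Color`/`paint` names are just for illustration):

```zig
const Color = enum { red, green, blue };

fn paint(c: Color) void {
    _ = c;
}

test "dot and builtin syntax" {
    // "." : the enum namespace is inferred from paint's signature,
    // so `.red` means `Color.red`.
    paint(.red);

    // "@" : language builtins, e.g. explicit integer casting.
    const x: u8 = @intCast(@as(u16, 42));
    _ = x;
}
```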
C syntax may look simpler, but reading Zig is more comfortable because there is less to think about than in C, thanks to explicit allocators.
There is no hidden magic with Zig, only ugly parts. With C/C++ you can hide so much complexity in a dangerous way.
As for the dots: if you use Zig quite a bit you'll see that dot usage is incredibly consistent, and not having the dots will feel wrong, not just in an "I'm used to it / Stockholm syndrome" sense, but you will feel, for example, that C is wrong for not having them.
For example, the use of the dot to signify "anonymous" in a struct literal. Why doesn't C have this? The compiler must make a "contentious" choice about whether something is a block or a literal. By contentious I mean the compiler knows what it's doing, but a quick edit might easily make you do something unexpected.
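A short sketch of that anonymous-literal dot (the `Point` type is just an example of my own):

```zig
const Point = struct { x: i32, y: i32 };

fn origin() Point {
    // ".{ ... }" unambiguously marks a literal whose type is inferred
    // from context; it can never be misread as a block.
    return .{ .x = 0, .y = 0 };
}

test "anonymous struct literal" {
    const p = origin();
    try @import("std").testing.expectEqual(@as(i32, 0), p.x);
}
```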
Virgil leans heavily into the reachability and specialization optimizations that are made possible by the compilation model. For example it will aggressively devirtualize method calls, remove unreachable fields/objects, constant-promote through fields and heap objects, and completely monomorphize polymorphic code.
(Note that this is orthogonal to whether and to what extent use of AI for coding is a good idea. Even if you believe that it's not, the fact is that many devs believe otherwise, and so languages will strive to accommodate them.)
I don't love the noise of Zig, but I love the ability to clearly express my intent and the detail of my code in Zig. As for arithmetic, I agree that it is a bit too verbose at the moment. Hopefully some variant of https://github.com/ziglang/zig/issues/3806 will fix this.
I fully agree with your TL;DR there, but would emphasize that gaining the same optimizations is easier in Zig due to how builtins and unreachable are built into the language, rather than needing gcc and llvm intrinsics like __builtin_unreachable() - https://gcc.gnu.org/onlinedocs/gcc-4.5.0/gcc/Other-Builtins....
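As a sketch of what "built into the language" means here (the `mod4` function is my own contrived example):

```zig
fn mod4(x: u32) u32 {
    const r = x % 4;
    switch (r) {
        0, 1, 2, 3 => return r,
        // `unreachable` is a language keyword, not a vendor intrinsic;
        // in release-fast mode it gives the optimizer the same hint
        // that __builtin_unreachable() does in GCC/Clang, while safe
        // builds turn it into a checked panic.
        else => unreachable,
    }
}

test "mod4" {
    try @import("std").testing.expectEqual(@as(u32, 3), mod4(7));
}
```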
It's my dream that LLVM will improve to the point that we don't need further annotation to enable positive optimization transformations. At that point though, is there really a purpose to using a low level language?
That is quite a long way to go, since the following formal specs/models are missing to make LLVM + user config possible:
- hardware semantics, specifically around timing behavior and (if used) weak memory
- memory synchronization semantics for weak memory systems, with the ideas from “Relaxed Memory Concurrency Re-executed” and its suggested model looking promising
- SIMD with specifically floating point NaN propagation
- pointer semantics, specifically in object code (initialization), serialization and deserialization, construction, optimizations on pointers with arithmetic, tagging
- constant time code semantics, for example how to ensure data stays in L1, L2 cache and operations have constant time
- ABI semantics, since specifications are not formal
LLVM is also still struggling with full restrict support due to architecture decisions and C++ (worked on for more than 5 years now).
> At that point though, is there really a purpose to using a low level language?
Languages simplify/encode formal semantics of the (software) system (and system interaction), so the question is whether the standalone language with tooling is better than the state of the art, and for what use cases. On the tooling part, with incremental compilation, I would definitely say yes, because it provides a lot of vertical integration to simplify development.
The other long-term/research question is what code synthesis and formal-methods interaction for verification, debugging, etc. would look like for (what class of) hardware+software systems in the future.
One thing I was wondering, since most of Zig's builtins seem to map directly to LLVM features, if and how this will affect the future 'LLVM divorce'.
It's an interesting question about how Zig will handle additional builtins and data representations. The current way I understand it is that there's an additional opt-in translation layer that converts unsupported/complicated IR to IR which the backend can handle. This is referred to as the compiler's "Legalize" stage. It should help to reduce this issue, and perhaps even make backends like https://github.com/xoreaxeaxeax/movfuscator possible :)
What's good for the goose should be good for the gander.
unmarshaling struct type Foo: in field Bar int: failed to parse value "abc" as integer
Whereas in Zig, an error can only say something that's known at compile time, like IntParse, and you will have to use another mechanism (e.g. logging) to actually trace the error.
https://ziglang.org/documentation/0.14.1/#errorReturnTrace
https://ziglang.org/documentation/0.14.1/#Error-Return-Trace...
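To illustrate the contrast: a sketch of a parsing function in Zig (the `parsePort` name and the u16 choice are my own), where the error value carries no runtime context:

```zig
const std = @import("std");

fn parsePort(s: []const u8) !u16 {
    // The error value itself is just a compile-time name like
    // `error.InvalidCharacter` or `error.Overflow`; the runtime
    // context ("which field? what input?") has to come from logging
    // or an error return trace.
    return std.fmt.parseInt(u16, s, 10);
}

test "error carries no runtime context" {
    try std.testing.expectError(error.InvalidCharacter, parsePort("abc"));
}
```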
Yeah but some language features are disproportionately more difficult to optimize. It can be done, but with the right language, the right concept is expressed very quickly and elegantly, both by the programmer and the compiler.
So I have two lists, side by side, and the position of items in one list matches positions of items in the other? That just makes my eyes hurt.
I think modern languages took a wrong turn by adding all this "magic" in the parser and all these little sigils dotted all around the code. This is not something I would want to look at for hours at a time.
for (one, two, three) |uno, dos, tres| { ... }
My eyes have to bounce back and forth between the two lists. When the identifiers are longer than this example it increases eye strain. Maybe it's better when you wrote it and understand it, but trying to grok someone else's code, it feels like an obstacle to me.
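For readers who haven't seen it, here is the construct being discussed in runnable form (values are arbitrary); all operands must have the same length, which safe builds check with a runtime panic:

```zig
const std = @import("std");

pub fn main() void {
    const names = [_][]const u8{ "a", "b" };
    const ids = [_]u32{ 1, 2 };

    // Multi-object for loop: each capture names the element of the
    // corresponding operand at the shared index.
    for (names, ids) |name, id| {
        std.debug.print("{s} = {d}\n", .{ name, id });
    }
}
```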