It's interesting to see Zig being chosen here over Rust for a browser engine component. Rust has kind of become the default answer for "safe browser components" (e.g., Servo, Firefox's oxidation), primarily because the borrow checker maps so well to the ownership model of a DOM tree in theory. But in practice, DOM nodes often need shared mutable state (parent pointers, child pointers, event listeners), which forces you into Rc<RefCell<T>> hell in Rust.
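For readers who haven't hit this: a minimal sketch of the pattern (the node layout here is hypothetical, not Servo's or anyone's actual design) showing why shared mutable parent/child links push you into Rc<RefCell<T>>, with Weak back-pointers to avoid leaking reference cycles.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Hypothetical minimal DOM node: shared ownership of children plus a
// back-pointer to the parent forces Rc + RefCell, with Weak breaking
// the parent/child reference cycle.
struct Node {
    tag: String,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let body = Rc::new(Node {
        tag: "body".into(),
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let div = Rc::new(Node {
        tag: "div".into(),
        parent: RefCell::new(Rc::downgrade(&body)),
        children: RefCell::new(Vec::new()),
    });
    // Every mutation goes through a runtime-checked borrow.
    body.children.borrow_mut().push(Rc::clone(&div));
    assert_eq!(div.parent.borrow().upgrade().unwrap().tag, "body");
}
```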
Zig's manual memory management might actually be more ergonomic for a DOM implementation specifically because you can model the graph relationships more directly without fighting the compiler, provided you have a robust arena-allocation strategy. Excited to learn from Lightpanda's implementation when it's out.
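For contrast, a hedged sketch of the arena/index style the comment alludes to, written in Rust for comparability even though the same shape is idiomatic Zig: the whole graph lives in one flat buffer, links are plain integers, and freeing the document is just dropping the buffer.

```rust
// Hedged sketch: a DOM-ish tree stored in a single growable arena,
// with parent/child links as plain indices instead of pointers.
struct Node {
    tag: &'static str,
    parent: Option<usize>,
    children: Vec<usize>,
}

struct Dom {
    nodes: Vec<Node>, // the arena: dropping the Dom frees every node
}

impl Dom {
    fn add(&mut self, tag: &'static str, parent: Option<usize>) -> usize {
        let id = self.nodes.len();
        self.nodes.push(Node { tag, parent, children: Vec::new() });
        if let Some(p) = parent {
            // Back- and forward-links are plain integers; no borrow
            // juggling is needed to wire up the graph.
            self.nodes[p].children.push(id);
        }
        id
    }
}

fn main() {
    let mut dom = Dom { nodes: Vec::new() };
    let body = dom.add("body", None);
    let div = dom.add("div", Some(body));
    assert_eq!(dom.nodes[dom.nodes[div].parent.unwrap()].tag, "body");
}
```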
Our goal is to build a headless browser, rather than a general purpose browser like Servo or Chrome. It's already available if you would like to try it: https://lightpanda.io/docs/open-source/installation
---
Also, if you're interested in contributing C bindings for html5ever upstream, let me know, or maybe open a GitHub issue.
Web APIs and CDP specifications are huge, so this is still a work in progress. Many websites and scripts already work, while others do not; it really depends on the case. For example, on the CDP side, we are currently working on adding an Accessibility tree implementation.
When I read your site copy, it struck me as either naive to that or a somewhat misleading comparison. My feedback would be just to address it directly alongside Chrome.
Build-system complexity disappears once you've set it up, too. Meson and such can be as terse as your curl example.
I mean, it's your project, so whatever. Do what you want. But choosing Zig for the stated reasons is like choosing a car for the shape of the cupholders.
But sometimes not good ones. Lots of domains make tradeoffs about which features of C++ to actually use. It's an old language with a lot of cruft, used across a wide set of problems that don't necessarily share engineering tradeoffs.
- project requirements
- requirements forced upon you due to how the business is structured
- libraries available for a particular language ecosystem
- paradigms / abstractions that a language is optimised for
- team experiences
Your argument is more akin to saying “all general purpose languages are equal” which I’m sure you’d agree is false. And likewise, complexity can and will manifest itself differently depending on language, problems being solved, and developer preferences for different styles of software development.
So yes, C++ complexity exists for a reason (though I’d personally argue that “reason” was due to “design by committee”). But that doesn’t mean that reason is directly applicable to the problems the LightPanda team are concerned about solving.
I don't see why C++ would be materially better than Zig for this particular use case.
It remains to be seen which big name will make Zig unavoidable.
I don't think so. I don't know about Adobe, but it's not a meaningful statement for the rest. Those companies default to writing safe code in languages other than Rust, and the Linux kernel defaults to unsafe code in C. BTW, languages favoured by those projects/companies do not reliably represent industry-wide preferences, let alone defaults. You could certainly say that of the two languages accepted so far in the Linux kernel, the only safe one is Rust, but there's hardly any "default" there.
> It remains to be seen which big name will make Zig unavoidable.
I have no idea whether or not Zig will ever be successful, but at this point it's pretty clear that Rust's success has been modest at best.
Whatever could be done in programming languages with automatic memory management was already being done.
Anyone deploying serverless code onto Amazon instances is running on top of Firecracker, my phone has Rust code running on it, whenever Windows 11 draws something on the screen it goes through the Rust rewrite of the GDI regions logic, and all the Azure networking traffic going through Azure Boost cards does so via Rust firmware.
Adobe is the sponsor of the Hylo programming language, and key figures in the C++ community are doing Rust talks nowadays.
"Adobe’s memory safety roadmap: Securing creativity by design"
https://blog.adobe.com/security/adobes-memory-safety-roadmap...
Any hobby-language author would love to have 1% of that "modest" success. I really don't get the continuous downplaying of such an achievement.
I don't know how true either of these statements is or to what extent the mandate is enforced (at my company we also have language mandates, but what they mean is that to use a different language all you need is an explanation and a manager to sign off), but I'll ask acquaintances in those companies (except Adobe; I don't know anyone there). Although the link you provided doesn't say Rust; it says "Rust or Swift". It also commits only to "exploring ways to reduce the use of new C and C++ code in safety critical parts of our products to a fraction of current levels".
What I do know is that the rate at which Rust is adopted is significantly lower than the rate at which C++, Java, C#, Python, TS, and even Go were adopted, even in those companies.
Now, there's no doubt that Rust has some real adoption, and much more than just hobby languages. Its rate of adoption is significantly higher than that of Haskell, or Clojure, or Elixir were (but lower than that of Ruby or PHP). That is without a doubt a great accomplishment, but not what you'd expect from a language that wishes to become the successor to C++ (and doesn't suffer from lack of hype despite its advanced age). Languages that offer a significant competitive advantage, or even the perception of one, spread at a faster pace, certainly those that eventually end up in the top 5.
I also think there's little doubt that the Rust "base" is more enthusiastic than that of any language I remember except maybe that of Haskell's resurgence some years back (and maybe Ruby), and that enthusiasm may make up for what they lack in numbers, but at some point you need the numbers. A middle-aged language can only claim to be the insurgent for so long.
This is a political achievement, not a technical one. People are bitter about it because it doesn't feel organic and feels pushed onto them.
> Anyone deploying serverless code onto Amazon instances is running on top of Firecracker, my phone has Rust code running on it, whenever Windows 11 draws something on the screen it goes through the Rust rewrite of the GDI regions logic, and all the Azure networking traffic going through Azure Boost cards does so via Rust firmware.
Ignoring it doesn't make those achievements political rather than technical.
Thus I don't consider that a good enough reason for using Zig while throwing away the safety of modern languages.
How so? This feels like an empty statement at best.
The only thing relatively modern would be compile-time execution, if we forget about how long some languages have had reader macros, or similar capabilities like D's compile-time metaprogramming.
Also, it is the wrong direction when the whole industry is moving toward integrity-by-default in cybersecurity legislation.
There are several examples around of doing arenas in said languages.
https://dlang.org/phobos/std_experimental_allocator.html
You can write your own approach with the low-level primitives from Swift, or fall back on the trusty NSAutoreleasePool.
One example for C#, https://github.com/Enichan/Arenas
That's your opinion, but I couldn't disagree more. It places partial evaluation as its biggest focus more so than any other language in history, and is also extraordinarily focused on tooling. There isn't any piece of information nor any technique that was known to the designers of those older languages and wasn't known to Zig's designer. In some situations, he intentionally chose different tradeoffs on which there is no consensus. It's strange to insist that there is some consensus when many disagree.
I have been doing low-level programming (in C, C++, and Ada in the 90s) for almost 30 years, and over that time I have not seen a low-level language that's as revolutionary in its approach to low-level programming as Zig. I don't know if it's good, but I find its design revolutionary. You certainly don't have to agree with my assessment, but you do need to acknowledge that some people very much see it that way, and don't think it's merely a "repackaged" Pascal-family language in any way.
I guess you could say that you personally don't care about Zig's primary design points and when you ignore them you're left with something that you find similar to other languages, but that's like saying that if you don't care about Rust's borrow- and lifetime checking, it's basically just a mix of C++ and ML. It's perfectly valid to not care about what matters most to some language's designer, and it's perfectly valid to claim that what matters to them most is misguided, but it is not valid to ignore a language's design core when describing it just because you don't care about it.
> Also, it is the wrong direction when the whole industry is moving toward integrity-by-default in cybersecurity legislation.
Again, that is an opinion, but not one I agree with. For one, Rust isn't as safe as other safe languages given its relatively common reliance on unsafe. If spatial and temporal memory safety were the dominating concerns, there wouldn't be a need for Rust, either (and it wouldn't have exposed unsafe). Clearly, everyone recognises that there are other concerns that sometimes dominate, and it's pretty clear that some people, who are no less knowledgeable about the software industry and its direction, prefer Zig. There is no consensus here either way, and I'm not sure there can be one. They are different languages that suit different people/projects' preferences.
Now, I agree that there's definitely more motion toward more correctness - which is great! - and I probably wouldn't write a banking or healthcare system in Zig, but I wouldn't write it in Rust, either. People reach for low level languages precisely when there may be a need to compromise on safety in some way, and Rust and Zig make different compromises, both of which - as far as I can tell - can be reasonable.
> There are several examples around of doing arenas in said languages.
From what I can tell, all of them either don't provide freedom from UAF, or they're not nearly as general as a proper arena.
I know of one safe and general arena design, in the RTSJ, which immediately prevents a reference to a non-enclosing arena from being written into an object, but it comes with a runtime cost (which makes sense for hard realtime, where you want to sacrifice performance for worst-case predictability).
Some of them even prevent use after free (the "ABA mitigation" column).
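To make that concrete, here's a minimal sketch (my own illustration, not any particular library's API) of a generation-checked arena: freeing a slot bumps its generation, so a stale handle fails the check at lookup time instead of becoming a use-after-free.

```rust
// Sketch of a generation-checked arena: stale handles are detected at
// runtime, at the cost of a generation check on every access.
#[derive(Clone, Copy)]
struct Handle {
    index: usize,
    generation: u32,
}

struct Slot<T> {
    generation: u32,
    value: Option<T>,
}

struct GenArena<T> {
    slots: Vec<Slot<T>>,
}

impl<T> GenArena<T> {
    fn new() -> Self {
        Self { slots: Vec::new() }
    }

    fn insert(&mut self, value: T) -> Handle {
        // Reuse a freed slot if one exists, bumping its generation so
        // old handles to it are invalidated (the "ABA" mitigation).
        if let Some(i) = self.slots.iter().position(|s| s.value.is_none()) {
            self.slots[i].generation += 1;
            self.slots[i].value = Some(value);
            return Handle { index: i, generation: self.slots[i].generation };
        }
        self.slots.push(Slot { generation: 0, value: Some(value) });
        Handle { index: self.slots.len() - 1, generation: 0 }
    }

    fn remove(&mut self, h: Handle) {
        if let Some(slot) = self.slots.get_mut(h.index) {
            if slot.generation == h.generation {
                slot.value = None;
            }
        }
    }

    fn get(&self, h: Handle) -> Option<&T> {
        // A stale handle fails the generation check and yields None
        // instead of aliasing whatever now occupies the slot.
        self.slots
            .get(h.index)
            .filter(|s| s.generation == h.generation)
            .and_then(|s| s.value.as_ref())
    }
}

fn main() {
    let mut arena = GenArena::new();
    let a = arena.insert("first");
    arena.remove(a);
    let b = arena.insert("second"); // reuses slot 0 at generation 1
    assert!(arena.get(a).is_none()); // stale handle detected
    assert_eq!(arena.get(b), Some(&"second"));
}
```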
It makes everything very explicit, and you can always _see_ where your allocations are happening in a way that you can't (as easily, or as obviously - imo) in Rust.
It seems like something I'd quite like. I'm looking forward to Rust getting an effect system / allocator API to help a little more with that side of things.
It's still nice sometimes to ensure that you have to think about allocation everywhere, and that you can change the allocation strategy to something that works for your use case (hence why I'm looking forward to the allocator API in Rust, to get the best of both worlds).
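For what it's worth, a rough sketch of how explicit this can already look in today's Rust using the third-party bumpalo crate (my choice of stand-in for the unstable allocator API): every allocation names the arena it comes from, much like passing a std.mem.Allocator in Zig.

```rust
use bumpalo::Bump; // third-party bump/arena allocator crate

struct Node<'a> {
    value: u32,
    next: Option<&'a Node<'a>>,
}

fn main() {
    let arena = Bump::new();
    // Each allocation names the arena it comes from, so the
    // allocation strategy is visible at every call site.
    let tail = arena.alloc(Node { value: 2, next: None });
    let head = arena.alloc(Node { value: 1, next: Some(tail) });
    assert_eq!(head.next.unwrap().value, 2);
    // Dropping `arena` frees both nodes at once.
}
```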
It's unfortunate that "writing safe code" is constantly being phrased in this way.
The borrow checker is a deterministic safety net. Claiming Zig is easier ignores that its lack of safety checks is what makes it feel easier; if Zig had Rust’s guarantees, the complexity would be the same. Comparing them like this is apples vs. oranges.
Safety has some value that isn't infinite, and a cost that isn't zero. There are also different kinds of safety with different value and different costs. For example, spatial memory safety appears to have more value than temporal safety (https://cwe.mitre.org/top25/archive/2025/2025_cwe_top25.html), and Zig offers spatial safety. The question is always what you're paying and what you're getting in return. There doesn't appear to be a universal right answer. For some projects it may be worth it to pay for more safety, and for others it may be better to pay for something else.
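A small illustration of the spatial/temporal distinction, with the Zig comparison hedged as my own reading of its safe build modes:

```rust
fn main() {
    let v = vec![1, 2, 3];

    // Spatial safety: indexing is bounds-checked at runtime, so an
    // out-of-range access panics (or returns None via `get`) instead
    // of reading adjacent memory. Zig's safe build modes insert
    // comparable checks.
    assert_eq!(v.get(10), None);

    // Temporal safety: use-after-free is rejected at compile time by
    // the borrow checker; the two lines below would not compile:
    //   let first = &v[0];
    //   drop(v); // error[E0505]: cannot move out of `v` because it is borrowed
}
```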
Data races, type state pattern, lack of nulls, ...
[1]: A benign race is when multiple tasks/threads can concurrently write to the same address, but you know they will all write the same value.
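As a sketch of what that forces in Rust: the plain racy write isn't expressible in safe code, so even a benign race has to go through an atomic.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

// Every worker writes the same value, so the result doesn't depend on
// which write happens first; the race is "benign".
static FOUND: AtomicBool = AtomicBool::new(false);

fn main() {
    thread::scope(|s| {
        for _ in 0..4 {
            s.spawn(|| {
                // Safe Rust won't let us do a plain (non-atomic) racy
                // write here; the atomic makes the benign race explicit.
                FOUND.store(true, Ordering::Relaxed);
            });
        }
    });
    assert!(FOUND.load(Ordering::Relaxed));
}
```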
You keep repeating this. It's not true. If what you said were true, Rust would have adopted HKT, and God knows what other type astronomy Haskell & Scala cooked up.
There is a balancing act, and Rust decided to plant a flag in memory safety without GC. That Zig didn't expand on this but went backwards is more of an indictment of programmers unwilling to adapt and perfect what came before, preferring instead to reinvent it in their own, worse way.
> There are also different kinds of safety with different value and different costs. For example, spatial memory safety appears to have more value than temporal safety (https://cwe.mitre.org/top25/archive/2025/2025_cwe_top25.html)
How did you derive this from the top 25 of CWEs? Let's say you completely remove the spatial memory issues. You still get temporal memory issues at #6.
> How did you derive this from the top 25 of CWEs? Let's say you completely remove the spatial memory issues. You still get temporal memory issues at #6.
Sure, but spatial safety ranks higher. So if Rust's compromise ("we'll exact a price on temporal safety and have both temporal and spatial safety") is reasonable, then so is Zig's, which says the price of temporal safety is too high for what you get in return, and spatial safety alone is a better deal. Neither goes as far as ATS in offering, in principle, the ability to avoid all bugs. Nobody knows whether Rust's compromise is universally better than Zig's or vice versa (or perhaps neither is universally better), but I find it really strange to arbitrarily claim that one compromise is reasonable and the other isn't, when both are obviously compromises that recognise there are different benefits and different costs, and that not every benefit is worth any cost.
It doesn't. Not by any reasonable definition of having a GC.
And "opt-in non-tracing GC that isn't used largely throughout the standard library" is not a reasonable definition.
> Nobody knows whether Rust's compromise is universally better than Zig's
When it comes to having more segfaults, we know. Zig "wins" the most-segfaults-per-issue Razzie Award.
This is what happens when you ignore one type of memory safety. You have to have both. Just ask Go.
Given that refcounting and tracing are the two classic GC algorithms, I don't see what specifying "non-tracing" here does, and reference counting with special-casing of the one-reference case is still reference counting. I don't know if the "reasonable definition" of GC matters at all, but if it does, this does count as one.
I agree that the one-reference case is handled in the language and the shared reference case is handled in the standard library, and I think it can be reasonable to call using just the one-reference case "not a GC", but most Rust programs do use the GC for shared references. It is also true that Rust depends less on GC than Java or Go, but that's not the same as not having one.
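For reference, the collector in question is std::rc::Rc (and its atomic sibling Arc): textbook reference counting, where dropping the last handle reclaims the allocation.

```rust
use std::rc::Rc;

fn main() {
    // Shared ownership via reference counting, one of the two classic
    // GC algorithms alongside tracing.
    let shared = Rc::new(String::from("shared state"));
    let a = Rc::clone(&shared);
    let b = Rc::clone(&shared);
    assert_eq!(Rc::strong_count(&shared), 3);

    drop(a);
    drop(b);
    assert_eq!(Rc::strong_count(&shared), 1);
    // The String is reclaimed when the last Rc goes out of scope.
}
```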
> When it comes to having more segfaults, we know. Zig "wins" the most-segfaults-per-issue Razzie Award.
And Rust wins the Razzie Award for most painful development and lack of similarly powerful arenas. It's like declaring that you win by paying $100 for something while I paid $50 for something else without comparing what we got for the money, or declaring that you win by getting a faster car without looking at how much I paid for mine.
> This is what happens when you ignore one type of memory safety.
When you have less safety for any property, you're guaranteed to have more violations. This is what you buy. Obviously, this doesn't mean that avoiding those extra violations is necessarily worth the cost you pay for that extra safety. When you buy something, looking just at what you pay or just at what you get doesn't make any sense. The question is whether this is the best deal for your case.
Nobody knows if there is a universal best deal here let alone what it is. What is clear is that nothing here is free, and that nothing here has infinite value.
If you define all non-red colors to be green, it is impossible to talk about color theory.
> And Rust wins the Razzie Award for most painful development and lack of similarly powerful arenas.
That's a non-quantifiable skill issue. Segfaults per issue is a quantifiable thing.
> When you have less safety for any property, you're guaranteed to have more violations.
If that's what you truly believed, beyond a debate point, then you'd be advocating for ATS or Ada/SPARK, not Zig.
Except reference counting is one of the two classical GC algorithms (alongside tracing), so I think it's strange to treat it as "not a GC". But it is true that the GC/no-GC distinction is not very meaningful given how different the tradeoffs made by different GC algorithms are. Even within these basic algorithms there are combinations. For example, a mark-and-sweep collector is quite different from a moving collector, and CPython uses refcounting for some things and tracing for others.
> That's a non-quantifiable skill issue. Segfaults per issue is a quantifiable thing.
That it's not as easily quantifiable doesn't make it any less real. If we compare languages only by easily quantifiable measures, there would be few differences between them (and many if not most would argue that we're missing the differences that matter to them most). For example, it would be hard to distinguish between Java and Haskell. It's also not necessarily a "skill issue". I think that even skilled Rust users would admit that writing and maintaining a large program in TypeScript or Java takes less effort than doing the same in Rust.
Also, ATS has many more compile-time safety capabilities than either Rust or Zig (in fact, compared to ATS, Rust and Zig are barely distinguishable in what they can guarantee at runtime), so according to your measure, both Rust and Zig lose when we consider other alternatives.
> Then you'd be advocating for ATS or Ada/SPARK, not Zig.
Quite the opposite. I'm pointing out that, at least as far as this discussion goes, every added value comes with an added cost that needs to be considered. If what you truly believed is that more compile-time safety always wins, then it is you who should be advocating for ATS over Rust. I'm saying that we don't know where the cost-benefit sweet spot is or, indeed, whether there's only one such sweet spot or several. I'm certainly not advocating for Zig as a universal choice. I'm advocating for selecting the right tradeoffs for every project, and I'm rejecting the claim that whatever benefits Rust or Zig have compared to the other are free. Both (indeed, all languages) require you to pay in some way for what they're offering. In other words, I'm arguing that each can be more or less appropriate than the other depending on the situation, and against the position that Rust is always superior, which is based on looking only at its advantages and ignoring its disadvantages (which, I think, are quite significant).
That's not the issue. The issue is calling anything with opt-in reference counting a GC language. You're just fudging definitions to get to the desired talking point. I mean, C is, by that definition, a GC language: it can be equipped with a GC library, after all.
> That it's not as easily quantifiable doesn't make it any less real.
It makes it more subjective and easy to bias. Rust has a clear purpose: to put a stop to memory-safety errors. What does "it's painful to use" even mean? Painful like Lisp compared to Haskell, or like C compared to Lisp?
> For example, it would be hard to distinguish between Java and Haskell.
It would be possible to objectively distinguish between Java and Haskell, as long as they aren't feature-by-feature compatible.
If you can make a program that halts on that feature, you can prove you're in a language with that feature.
> If what you truly believed is that more compile-time safety always wins, then it is you who should be advocating for ATS over Rust.
Yeah, because you're fighting a strawman. Having a safe language is a precondition, but not enough. I want it to be as performant as C as well.
Second, even if you have the goal of moving to ATS, developing an ATS-like language isn't going to help. You need a critical mass of people to move there.
Except I never called it "a GC language" (whatever that means). I said, and I quote, "Rust does have a GC". And it does. Saying that it's "opt in" when most Rust programs use it (albeit to a lesser extent than Java or Go programs, provided we don't consider Rust's special case of a single reference to be GC) is misleading.
> Rust has a clear purpose. To put a stop to memory safety errors.
Yes, but 1. other languages do it, too, so clearly "stopping memory errors" isn't enough, 2. Rust does it in a way that requires much more use of unsafe escape hatches than other languages, so it clearly recognises the need for some compromise, and 3. Rust's safety very much comes at a cost.
So its purpose may be clear, but it is also very clear that it makes tradeoffs and compromises, which implies that other tradeoffs and compromises may be reasonable, too.
But anyway, having a very precise goal makes some things quantifiable, but I don't think anyone thinks that's what makes a language better than another. C and JS also have very clear purposes, but does that make them better than, say, Python?
> Having a safe language is a precondition but not enough. I want it to be as performant as C as well... You need a mass of people to move there.
So clearly you have a few prerequisites, not just memory safety, and you recognise the need for some pragmatic compromises. Can you accept that your prerequisites and compromises might not be universal and there may be others that are equally reasonable, all things considered?
I am a proponent of software correctness and formal methods (you can check out my old blog: https://pron.github.io), and I've learnt a lot over my decades in industry about the complexities of software correctness. When I choose a low-level language to switch to away from C++, my prerequisites are: a simple language with no implicitness (I want to see every operation on the page), as I think that makes code reviews more effective (the effectiveness of code reviews has been shown empirically, although not its relationship to language design), and fast compilation, to allow me to write more tests and run them more often.
I'm not saying that my requirements are universally superior to yours, and my interests also lie in a strong emphasis on correctness (which extends far beyond mere memory safety); it's just that my conclusions, and perhaps personal preferences, lead me to prefer a different path from your preferred one. I don't think anyone has objective data to support the claim that my preferred path to correctness is superior to yours or vice versa.
I can say, however, that in the 1970s, proponents of deductive proofs warned of an impending "software crisis" and believed that proofs are the only way to avoid it (as proofs are "quantifiably" exhaustive). Twenty years later, one of them, Tony Hoare, famously admitted he was wrong, and that less easily quantifiable approaches turned out to be more effective than expected (and more effective than deductive proofs, at least of complicated properties). So the idea that an approach is superior just because it's absolute/"precise" is not generally true.
Of course, we must be careful not to extrapolate and generalise in either direction, but my point is that software correctness is a very complicated subject, and nobody knows what the "best" path is, or even if there is one such best path.
So I certainly expect a Rust program to have fewer memory-safety bugs than a Zig program (though probably more than a Java program), but that's not what we care about. We want the program to have the fewest dangerous bugs overall. After all, I don't care if my user's credit-card data is stolen due to a UAF or due to SQL injection. Do I expect a Rust program to have fewer serious bugs than a Zig program? No, and maybe the opposite (and maybe the same), due to my preferred prerequisites listed above. The problem with saying that we should all prefer the more "absolute" approach, even though it may harm less easily quantifiable aspects, because it's at least absolute in whatever it does guarantee, is that this belief has already been shown not to be generally true.
(As a side note, I'll add that a tracing GC doesn't necessarily have a negative impact on speed, and may even have a positive one. The main tradeoff is RAM footprint. In fact, the cornerstone of tracing algorithms is that they can reduce the cost of memory management to be arbitrarily low given a large-enough heap. In practice, of course, different algorithms make much more complicated pragmatic tradeoffs. Basic refcounting collectors primarily optimise for footprint.)
Zig's checks absolutely don't go to the extent that Rust's do, which is kind of the point here. But if you do need to go beyond safe code in Rust, Zig is safer than unsafe code in Rust.
Saying Zig lacks safety checks is unfortunate, although I wouldn't presume you meant it literally and just wanted to highlight the difference.
This is a common flow for me
lightpanda url | markitdown (microsoft) | sd (day50 streamdown)
I even have it as a shell alias, wv(). It's way better than the crusty old lynx and links on sites that need JS. It's solid. Definitely worth a check.
The standard library is changing. The core language semantics - not so much. You can update from std.ArrayListUnmanaged to std.array_list.Aligned with two greps.
This same pattern used to play out with Ruby, Lisp, and other languages in different eras of this site. It will probably never stop and calling it out seems to just fan the flames more than anything else.
TL;DR: It does the following:
- Fetch HTML over the network
- Parse HTML into a DOM tree
- Fetch and execute JavaScript that manipulates the DOM
But not the following:
- Fetch and parse CSS to apply styling rules
- Calculate layout
- Fetch images and fonts for display
- Paint pixels to render the visual result
- Composite layers for smooth scrolling and animations
So it's effectively a net+DOM+script-only browser with no style/layout/paint.
---
Definitely fun for me to watch as someone who is making a lightweight browser engine with a different set of trade-offs (net+DOM+style/layout/paint-only with no script)
I'm not too happy with the fact that Chrome is one of the most memory-hungry of all the MCP servers we have in use. The only thing that exceeds it in our whole stack is the ClickHouse shard that comes with Langfuse. Especially if you are looking to build a "deep research" feature that may access a few hundred webpages in a short timeframe, having a lightweight alternative like Lightpanda can make quite the difference.
https://news.ycombinator.com/item?id=45640594
I've spent some time feeding LLMs scraped web pages, and I've found that retaining some style information (text size, visibility, decoration, image content) is non-trivial.
> ---
> Definitely fun for me to watch as someone who is making a lightweight browser engine with a different set of trade-offs (net+DOM+style/layout/paint-only with no script)
Both projects (Lightpanda, DioxusLabs/blitz) sound very interesting to me. What do you think about rendering patterns that require both script+layout for rendering, e.g. virtual scrolling of large tables?
What would be a good pattern to make virtual scrolling work with Lightpanda or Blitz?
If your aim is to render a UI (ala Electron/Flutter) then we have a React-style framework (Dioxus) that runs on top of Blitz, and allows you access to the low-level Rust API of the DOM for advanced use cases (although it's still a WIP and this API is a bit rough atm). I'm also hoping to eventually have a built-in `RecyclerView`-like widget for this (that can bypass the style/layout systems for much more efficient virtual scrolling).
Very refreshing. Most engineers would rather saw their leg off.
I can understand the source of concern but I wouldn’t expect innovation to stop. The world isn’t going to pause because of a knowledge cutoff date.
I really like Zig; I wish it had appeared several years earlier. But rewriting everything in Zig might soon no longer make practical sense.
However, there is still a strong argument to be made for the protections/safety that languages can provide.
e.g. would you expect a model (assuming it had the same expertise in each language) to make more mistakes in ASM, C, Zig, or Rust?
I imagine most would agree that ASM/C would be likely to have the most mistakes simply because fewer constraints are enforced as you go closer to the metal.
So, while we might not care about how easy it is for a human to read/write, there will still be a purpose for innovation in programming languages. But those innovations, IMO, will be more focused on how to make languages easier for AI.
"assuming it had the same expertise in each language" is the most important part here, because the expertise of AI with these languages is very different. And, honestly, I bet on C here because its code base is the largest, the language itself is the easiest to reason about and we have a lot of excellent tooling that helps mitigate where it falls short.
> I imagine most would agree that ASM/C would be likely to have the most mistakes simply because fewer constraints are enforced as you go closer to the metal.
We need these constraints because we can't reliably track all the necessary details. But AI might be much more capable (read: scalable) at that, so all the complexity that we need to accumulate in a programming language, it might just handle by virtue of the way it's built.
> "assuming it had the same expertise in each language" is the most important part here, because the expertise of AI with these languages is very different.
You are correct, but I am trying to illustrate that assuming some ideal system with equal expertise, the languages with more safety would win out in productivity/bugs over those with less safety.
That is to say, it could be worth investing further in safer programming languages, because AI would benefit too.
> We need these constraints because we can't reliably track all the necessary details.
AI cannot reliably track the details either (yet, though I am sure it can be done). Even if it could, it would be a complete waste of resources (tokens).
Why have an AI determine the type of a variable when it could be done in a deterministic manner with a compiler or linter?
To me these arguments closely mirror the static-vs-dynamic typing debate for human programmers. Static type systems eliminate certain kinds of errors and can produce higher-quality programs. AI systems will benefit in the same way, if not more, by getting instant feedback on the validity of their programs.
So even if they don't get to train much on some technology, all you need is some guidance docs in AGENTS.md
There's a plus in being fresh too: LLMs aren't going to be heavily trained on outdated tutorials and docs. Like React for example.