> Hey y'all, GPUI development is getting some major brakes put on it. We gotta focus on some business relevant work in 2026, and so I'm going to be pushing off anything that isn't directly related to Zed's use case from now on. However, Nate, former employee #1 at Zed, has started a little side repo that people can keep iterating on if they're interested: https://github.com/gpui-ce/gpui-ce. I'm also a maintainer on that one, and would like to try to help maintain it off of work hours. But I'm not sure how much I'll be able to commit to this.
https://discord.com/channels/869392257814519848/144044062864...
Remember that post announcing the millions of VC capital they raised? This is the result
I like Zed but it's still my secondary editor because it's missing usability features that I value in other editors. I think we all benefit if they focus their attention on the parts of Zed that differentiate it rather than writing new frameworks and libraries.
Did you not raise a bunch of money from Sequoia? Sounds like you're in a perfect place to quit your job and hack on GPUI for us.
In fact, the entire Qt Group is worth just 650M EUR.
And since Sequoia? It is primarily the Zed team working full time on it, which costs money.
I'm curious... does anyone have any PRs or features that they feel need merging in order to use GPUI in their own projects? (other than web support)
https://github.com/gpui-ce/gpui-ce/pulls
and here:
https://github.com/zed-industries/zed/pulls?q=is%3Apr+is%3Ao...
Lots of gpui was built with Zed/a text editor in mind directly, and as folks have mentioned here, it is hard for Zed Industries to justify work on gpui that is purely for the community. Nathan is usually pretty pragmatic about not optimizing early, and gpui is generally serving Zed's needs at the moment (from what I know; I haven't worked on Zed since July).
I do think ZI would generally benefit from gpui being pulled out of Zed if there were a community passionate about taking it over... but that is time and effort in itself.
I've written quite a lot of rust UI code for Zed over the past few years so I'm mostly familiar with the pros and cons of gpui, but I haven't spent much time with Iced, Dioxus, Xilem, etc.
Yet more disruption caused by coding agents, I’m sure. We saw it quite visibly with Tailwind, and now I can see how code editors might be struggling too, especially something like Zed, which was probably still used mostly by early-adopter types, who have early-adopted TUI coding agents instead.
I only use cursor and zed to browse code now.
As far as GPUI goes, it has a great foundation; the community can build the components themselves.
maybe they could pivot into the luxury boutique hand-crafted artisanal code market
Companies using it in production are often forking it as a result, and trying to keep their fork in sync. Ultimately, if the community wants iced to become a major and stable framework, it will have to be forked and a community development model built around it.
And I'm not saying this to disparage the author in any way, their readme even seems to suggest that that's exactly what they'd prefer.
I went from knowing nothing about web stuff to building this tool https://chakravarthysoftware.com/work_distributor in a week
I love iced and wrote a decent amount of code using it, but in my mind the biggest sponsor is system76 - and as awesome as they are they aren’t a major vendor yet :)
I have not used GPUI beyond a simple test case, but had (prior to this news?) considered it for future projects. I am proficient with, and love, EGUI and WGPU (the latter for 3D). I have written a library (`graphics` crate) which integrates the two, and which I use for my own scientific applications that have both 2D and 3D components. Overall, I'm confused by this, as I was looking forward to using GPUI in future applications and comparing it to EGUI. I have asked online in several places for a comparison from someone who's used both, but I believe that to be a small pool.
I was not sure about the integration between GPUI and WGPU, whereas I can confirm EGUI and WGPU integrate very well. But I only care about this because I do 3D work; if I weren't, I would be using eframe instead of WGPU as the backend.
Unrelated, off-topic, but I'm also not sure where to ask this: am I missing something about Zed? I have tried and failed to get into it. I really want to like it because it's so fast [responsive], but it seems to lack basic IDE functionality in Python and Rust, like moving structs/functions, catching errors dynamically, introspection, and refactoring in general. I thought I might be missing some config, but now lean towards it being more of a project-oriented text editor than a true IDE in the fashion of JetBrains. But I have been unable to get confirmation, and people discuss it as if it's an IDE or JB alternative.
But I love the project and have been using it for almost 2 years now.
It has far fewer built-in features for refactoring than other editors you might be coming from. It's handled at the LSP level: get the LSP for your language and hit cmd+ to see what it can do. I'm not working in Python or Rust at the moment (Elixir), but I'm sure they have some good extensions.
> The "immediate mode" GUI was conceived by Casey Muratori in a talk over 20 years ago.
Maybe he made it known to people not old enough to have lived through the old days, but this is how we used to program GUIs on 8- and 16-bit home computers, and it has always been a thing on game consoles.
> To describe it, I coined the term “Single-path Immediate Mode Graphical User Interface,” borrowing the “immediate mode” term from graphics programming to illustrate the difference in API design from traditional GUI toolkits.
— https://caseymuratori.com/blog_0001
Obviously it’s ludicrous to attribute “immediate mode” to him. As you say, it’s literally decades older than that. But it seems like he used immediate mode to build a GUI library and now everybody seems to think he invented immediate mode?
Win32 GUI common controls are a pretty thin layer over GDI and you can always take over WM_PAINT and do whatever you like.
If you make your own control you must handle WM_PAINT, which seems pretty immediate to me.
https://learn.microsoft.com/en-us/windows/win32/learnwin32/y...
The difference between a game engine and, say, GDI is just the window buffer invalidation: WM_PAINT is not called for every frame, only when Windows thinks the window's rectangle has changed and needs to be redrawn, independently of screen refresh rate.
I guess I think of retained vs. immediate at the graphics library / driver level, because that allows the GPU to take over more and store the objects in VRAM and redraw them. At the GUI level, that's just a user-space abstraction over the rendering engine, but the line is blurry.
Handling WM_PAINT is no different from something like OnPaint() on a base class.
This was actually one of the mindset shifts when moving from MS-DOS to Windows graphics programming.
Computing left game development behind. Whilst the rest of the industry built shared abstractions, we worked in isolation with closed tooling. We stayed close to the metal because there was nothing else.
When Casey and Jon advocate for these principles, they're reintroducing ideas the broader industry genuinely forgot, because for two decades those ideas weren't economically necessary elsewhere. We didn't preserve sacred knowledge. We just never had the luxury of forgetting performance mattered, whilst the rest of computing spent 20 years learning it didn't.
I don't understand this part of your comment, it seems like you're replying to some other comment or something not in my comment. How am I overcorrecting? A statement of fact, that game developers didn't invent these things even though that's a common belief, is not an overcorrection. It's just a correction.
My bad. I think we're aligned on the history; I was making a point about why they're prominent advocates today (and why people are attributing invention to them) even though they didn't invent the concepts.
It seems like much of the shade is tossed at web front end as if it were the only other domain of computing besides game dev.
You're right that HFT, large-scale backend, and real-time systems care deeply about performance, often with far more money at stake.
But those domains are rare. The vast majority of software development today can genuinely throw hardware or money at problems (even HFT and large backend systems). Backends are usually designed to scale horizontally, data science rents bigger GPUs, embedded gets more powerful SoCs every year. Most developers never have to think about cache lines because their users have fast machines and tolerant expectations.
Games are one of the few consumer-facing domains that can't do this. We can't mandate hardware (and attempts at doing so cost sales and attract community disgust), we can't hide latency behind async, and our users immediately notice a 5ms hitch. That creates different pressures: we're optimising for the worst case on hardware we don't control, whilst most of the industry optimises for the common case on hardware they choose.
You're absolutely right that we're often ignorant of advances elsewhere. But the economic constraint is real, and it's increasingly unusual.
But game dev, in particular Mike Acton, did an amazing job of making it more broadly known. His CppCon talk from 2014 [0] is IMO one of the most digestible ways to start thinking about performance in high throughput systems.
In terms of heroes, I’d place Mike Acton, Fabian Giesen [1], and Bruce Dawson [2] at the top of the list. All solid performance-oriented people who’ve taken real time to explain how they think and how you can think that way as well.
I miss being able to listen in on gamedev Twitter circa 2013 before all hell broke loose.
[0] https://youtu.be/rX0ItVEVjHc?si=v8QJfAl9dPjeL6BI
Unless the Rust ecosystem made the easily predicted terrible choice of rallying behind immediate mode GUIs for generic UIs...
That's exactly what they did :D
> Graphical user interfaces traditionally use retained mode-style API design,[2][5] but immediate mode GUIs instead use an immediate mode-style API design, in which user code directly specifies the GUI elements to draw in the user input loop. For example, rather than having a CreateButton() function that a user would call once to instantiate a button, an immediate-mode GUI API may have a DoButton() function which should be called whenever the button should be on screen.[6][5] The technique was developed by Casey Muratori in 2002.[6][5] Prominent implementations include Omar Cornut's Dear ImGui[7] in C++, Nic Barker's Clay[8][9] in C and Micha Mettke's Nuklear[10] in C.
https://en.wikipedia.org/wiki/Immediate_mode_(computer_graph...
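For concreteness, this DoButton()-style API is what egui exposes in Rust. Here's a minimal sketch with eframe (assuming a recent eframe/egui version; the counter app itself is just illustrative, not from any project mentioned in this thread):

```rust
use eframe::egui;

// The button is "declared" every frame inside the update closure rather than
// being created once and retained: the immediate-mode style described above.
fn main() -> Result<(), eframe::Error> {
    let mut clicks = 0u32; // application state lives outside the toolkit

    eframe::run_simple_native("immediate-mode demo", Default::default(), move |ctx, _frame| {
        egui::CentralPanel::default().show(ctx, |ui| {
            // Called every frame; returns whether the button was clicked this frame.
            if ui.button("Click me").clicked() {
                clicks += 1;
            }
            ui.label(format!("Clicked {clicks} times"));
        });
    })
}
```

No CreateButton()/destroy lifecycle anywhere; whether the button exists is decided by whether the code path that draws it runs this frame.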
[Edit: I'll add an update to the post to note that Casey Muratori simply “coined the term” but that it predates his video.]
And you will see which information is more accurate.
Yes, he coined the term rather than invent the technique
It was a swinging pendulum. At first everything was immediate mode because video RAM was very scarce. Initially there was only enough VRAM for the frame buffer, and hardly any system RAM to spare. But once both categories of RAM started growing, there was a movement to switch to retained mode UI frameworks. It wasn’t until the early 00’s that GPUs and SIMD extensions tipped the scales in the other direction - it was faster to just re-render as needed rather than track all these cached UI buffers, and allowed for dynamic UI motifs “for free.”
My graying beard is showing, though, as I did some game dev in the late 90’s on 3Dfx hardware, and learned UI programming on Win95 and System 7.6. Get off my lawn.
I also came to a similar endpoint when building out a fairly large GUI application using egui. While egui solves the "draw widgets" part of building the application, I inevitably had to restructure my app entirely with a new architecture to make it maintainable. In many places the "immediate" nature of the GUI mutably editing the state was no longer an advantage (the sketch after the list below shows roughly where I ended up). Not to mention that UI code I wrote 6 months ago became difficult to read, especially if there was advanced layout happening.
Ultimately I've boiled my choices down to:
- egui for practicality but you pay the price in architecture + styling
- iced for a nice architecture but you have to roll all your own widgets
- slint maybe one day once they make text rendering a higher priority but even then the architecture side is not solved for you either
- tauri/dioxus/electron if you're not a purist like me
- Rewind 20 years and use Qt/WPF/etc.
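To make the architecture point concrete, this is roughly the restructuring I mean: the draw pass only collects messages and a separate update step mutates state, an Elm-ish pattern like iced's, but hand-rolled. Toolkit-agnostic sketch; every name here is made up for illustration:

```rust
// Split "draw" from "update": the UI pass reads state and returns messages,
// and all mutation happens in one place where it stays readable and testable.
#[derive(Debug)]
enum Msg {
    Increment,
    SetName(String),
}

#[derive(Default, Debug)]
struct AppState {
    counter: u32,
    name: String,
}

impl AppState {
    fn update(&mut self, msg: Msg) {
        match msg {
            Msg::Increment => self.counter += 1,
            Msg::SetName(n) => self.name = n,
        }
    }
}

// Stand-in for an immediate-mode draw pass: it never mutates AppState in place.
fn draw(state: &AppState) -> Vec<Msg> {
    let mut msgs = Vec::new();
    if state.counter < 3 {
        // imagine `if ui.button("+1").clicked()` here
        msgs.push(Msg::Increment);
    }
    msgs
}

fn main() {
    let mut state = AppState::default();
    for _frame in 0..5 {
        for msg in draw(&state) {
            state.update(msg);
        }
    }
    println!("{state:?}"); // counter stops at 3
}
```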
Down the stack, low-level 3D acceleration is in a rough spot too unfortunately. The canonical Rust Vulkan wrapper (Ash) hasn't cut a release for nearly two years, and even git main is far behind the latest spec updates.
IIRC there is another raw Vulkan library that just generates bindings and has stayed up to date, but that comes with its own issues.
WGPU + Winit + EGUI + EGUI component libs is its own compatibility joy, but anecdotally they have been updating in reasonable sync. Things can get out of hand if you wait too long between updates, though!
https://github.com/vulkano-rs/vulkano/blob/master/Cargo.toml...
Maybe that's so they can interop with other crates which use Ash's types?
The C++ equivalent, Vulkan-Hpp[2], follows extremely closely behind. Plus, ash isn't just an FFI wrapper; it does quite a bit of RAII-esque state and function pointer management that is generally required for Vulkan.
[1]: https://github.com/KhronosGroup/Vulkan-Docs/blob/main/xml/vk...
A mature high-quality GUI with support for all the features of a modern desktop UI, accessibility, support for all the display variations you encounter in the wild, high quality rendering, high performance, low overhead, etc. is a development task on par with creating a mature game engine like Unity.
Nearly all open source GUI projects get 80% of the way there and stall, not realizing that they are only 20% of the way there.
Then you start to think about full Unicode support, right-to-left rendering, and so on. Then you start to think about properly implementing accessibility features. The necessary work increases by an order of magnitude. And it's not fun work. So you stall out with a bare-bones implementation.
Do you have a source for this?
I started writing a program that needed to have a table with 1 million rows. This means it needs to be virtualised. Pretty common in GUI libraries. The only Rust GUI library I found that could do this easily was gpui-component (https://github.com/longbridge/gpui-component). It also renders text crisply (rules out egui), looks nice with the default style (rules out GTK, FLTK, etc.), isn't web-based (rules out Dioxus), was pretty easy to use and the developers were very responsive.
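For anyone wondering what "virtualised" means in practice: you only lay out and draw the handful of rows that intersect the viewport, computed from the scroll offset, so a million-row table costs about the same as a thirty-row one. A toolkit-agnostic sketch of the windowing math (not gpui-component's actual implementation; all names and numbers are illustrative):

```rust
// Given the scroll offset and viewport height, compute which rows are visible
// and only "render" those, regardless of the total row count.
const ROW_HEIGHT: f32 = 24.0;

fn visible_range(scroll_offset: f32, viewport_height: f32, total_rows: usize) -> std::ops::Range<usize> {
    let first = (scroll_offset / ROW_HEIGHT).floor() as usize;
    // +2 gives a little overscan so partially visible rows at both edges are included.
    let count = (viewport_height / ROW_HEIGHT).ceil() as usize + 2;
    first..(first + count).min(total_rows)
}

fn main() {
    let total_rows = 1_000_000;
    let scroll_offset = 12_345.0;
    let range = visible_range(scroll_offset, 800.0, total_rows);
    for row in range.clone() {
        // y position of this row within the viewport; a real widget would draw here.
        let _y = row as f32 * ROW_HEIGHT - scroll_offset;
    }
    println!("laying out rows {range:?} out of {total_rows}");
}
```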
Definitely the best option today (I would say it's probably the first option that I haven't hated in some way). The only other reasonable choices I would say are:
* egui - doesn't render very nicely and some of the APIs are amateurish, but it's quick and it works. Good option for simple tools.
* Iced - looks nice and seemed to work fairly well. No virtualised lists though.
* Slint (though in some ways it is weird and it requires quite a lot of boilerplate setup).
All the others will cause you pain in some way. I think the "ones to watch" are:
* Makepad - from the demos I've seen this looks really cool, especially for arty GUI projects like synthesizers and car UIs. However it has basically no documentation so don't bother yet.
* Xilem - this is an attempt to make a 100% perfect Rust GUI library, which is cool and all, but I imagine it will also never be finished.
Beyond egui/Iced/Slint, I'd say the "ones to watch" are:
* Freya
* Floem
* Vizia
I think all three of those offer virtualized lists.
Dioxus Native, the non-webview version of Dioxus is also nearing readiness.
Apart from the virtualised lists above, another case I hit was layered images (sprites, for example). Not very hard to write my own, sure, but it’d be nice to have that out of the box as in, e.g., egui.
Actually, this story is literally about them changing their renderer on Linux, so they are maintaining it.
> except to the extent contributions align with its business mission
Isn't that every single open source project that is tied to a commercial entity?
Do you know how well gpui-component supports typical use cases like that? Edit boxes, buttons, scroll views, tables, checkbox/radio buttons, context menus, consistent native selection and clipboard support, etc. are table stakes for desktop apps.
All of those are handled. Run the "story" app. It is very impressive IMO.
Components list: https://longbridge.github.io/gpui-component/docs/components/
[0]: https://agentcommunicationprotocol.dev/introduction/welcome [1]: https://zed.dev/docs/ai/external-agents#claude-code
This definitely would be worth some profiling. I don't think it's a given that their custom stacks are going to beat wgpu in a meaningful way.
They probably will for memory usage. Current wgpu seems to have a floor around ~100mb that isn't there with other rendering backends (and it was more like ~60mb with wgpu a few months / versions ago).
Not sure if this is fixable in wgpu, or to do with spec compatibility (my guess would be that it's fixable, just not a top priority for the team atm).
I am still curious how much uptake WebGPU will end up having on Android, or if Java/Kotlin folks will keep targeting OpenGL ES.
Just like you can hook local native VS Code up to a random server via SSH, browser rendering is just a convenience for client distribution.
You would need the full client/server editor architecture that VS Code has.
> There is significant work beyond the renderer that would need to happen to run Zed in a browser - notably background tasks and filesystem/input APIs would need web/wasm-compatible implementations.
Building a chat platform in an IDE with CRDTs...? That screams being more interested in the solutions than the problems, and that they didn't appreciate network effects before attempting this.
Edit: replying to https://tritium.legal/blog/desktop, not the OP
AFAIK people 100% are using other libraries for UI, but often use a macro or something to force Rust to behave in a way that those libraries expect.
I haven't read about this in literally years, but that's my recollection.
I find it odd that the broader hacker community feels the need to re-question and cross-examine every choice to use Rust. Like, no other language has such great just-works ergonomics: a solid language, fantastic tooling, and excellent packages that give it a works-the-first-time cross-platform joy. Why does every thread have to spawn a brand-new, unsupported whinge throwing dirt at what seems like such an obviously enjoyable choice?
Then there is the issue that the Rust community likes to rewrite classic C programs because of "memory safety" and "modern tooling," but really just focuses on the easy 80% of the work. It feels like these rewrites are more done to gain popularity on GitHub than anything, as they most often remain incomplete and never replace the original implementation.
Finally there is the GPL to MIT licensing issue, on which much has been said already.
I don't get why every language's community doesn't just do the same thing: roll an idiomatic UI lib on top of SDL. It was tough, but I was able to do it as a single person (who was also building an entire game engine at the same time) over the course of a couple years.
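If anyone's curious what the starting point of that looks like, here's a minimal sketch using the sdl2 crate: an event loop, a rect drawn as a "button", and a hand-rolled hit test on click. This is illustrative only, not code from the commenter's library; a real UI lib layers layout, focus, text, and accessibility on top of this.

```rust
use sdl2::event::Event;
use sdl2::pixels::Color;
use sdl2::rect::Rect;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let sdl = sdl2::init()?;
    let video = sdl.video()?;
    let window = video.window("sdl ui sketch", 800, 600).position_centered().build()?;
    let mut canvas = window.into_canvas().present_vsync().build()?;
    let mut event_pump = sdl.event_pump()?;

    // One "widget": a rectangle acting as a button.
    let button = Rect::new(20, 20, 160, 48);

    'running: loop {
        for event in event_pump.poll_iter() {
            match event {
                Event::Quit { .. } => break 'running,
                Event::MouseButtonDown { x, y, .. } => {
                    // Hand-rolled hit test: is the click inside the button rect?
                    if button.contains_point((x, y)) {
                        println!("button clicked");
                    }
                }
                _ => {}
            }
        }

        canvas.set_draw_color(Color::RGB(30, 30, 30));
        canvas.clear();
        canvas.set_draw_color(Color::RGB(70, 130, 180));
        canvas.fill_rect(button)?;
        canvas.present();
    }
    Ok(())
}
```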
> I don't get why every language's community doesn't just do the same thing: roll an idiomatic UI lib on top of SDL.
> I haven't worked on screen reader support, yet. Support for alternative text input is built into SDL. UI size scaling is a feature I plan on adding eventually.
Well, that's why :) For most serious applications, accessibility isn't a second thought; it's a requirement, and it's very hard to implement correctly.
Fixed last October.
You can just convert bitmap fonts, supporting them doesn’t make sense in 2026.
But he was doing that on his work time and did so collaborating with other Mozilla engineers, whereas AFAIK blade has been more of a personal side project.
Other than that, the one big downside of WebGPU is the rigid binding model via baked BindGroup objects. This is both inflexible and slow when any sort of 'dynamism' is needed, because you end up creating and destroying BindGroup objects in the hot path.
Vulkan's binding model will really only be fixed properly with the very new VK_EXT_descriptor_heap extension (https://docs.vulkan.org/features/latest/features/proposals/V...).
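The usual workaround on the WebGPU side is to cache bind groups keyed by the resources they bind, so the hot path does a hash lookup instead of a create/destroy. A rough sketch of that pattern, with placeholder types standing in for the real wgpu ones (all names are illustrative, not a wgpu API):

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for GPU resource handles and wgpu::BindGroup,
// just to show the caching pattern.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct TextureId(u64);
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct BufferId(u64);
struct BindGroup;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct BindKey {
    texture: TextureId,
    uniform: BufferId,
}

#[derive(Default)]
struct BindGroupCache {
    groups: HashMap<BindKey, BindGroup>,
}

impl BindGroupCache {
    // Reuse an existing bind group when the same resources are bound again,
    // instead of rebuilding one per draw call in the hot path.
    fn get_or_create(&mut self, key: BindKey, create: impl FnOnce() -> BindGroup) -> &BindGroup {
        self.groups.entry(key).or_insert_with(create)
    }
}

fn main() {
    let mut cache = BindGroupCache::default();
    let key = BindKey { texture: TextureId(1), uniform: BufferId(7) };
    // First frame creates the bind group; later frames with the same key reuse it.
    let _bg = cache.get_or_create(key, || BindGroup);
}
```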
The WebGPU API gets you to rendering your first triangle quicker and without thinking about vendor-specific APIs and histories of their extensions. It's designed to be fully checkable in browsers, so if you mess up you generally get errors caught before they crash your GPU drivers :)
The downside is that it's the lowest common denominator, so it always lags behind what you can do directly in DX or VK. It was late to get subgroups, and now it's late to get bindless resources. When you target desktops, wgpu can cheat and expose more features that haven't landed in browsers yet, but of course that takes you back to the vendor API fragmentation.
The scientific folks don't have all that much reason to upgrade from OpenGL (it still works, after all), and the games folks are often targeting even newer DX/Vulkan/Metal features that aren't supported by WebGPU yet (for example, hardware-accelerated raytracing)
What advantages are people finding with this editor other than high-fidelity scrolling?
The advantage I find personally, at least compared to something like Emacs, is not just that you get high-fidelity scrolling, but that the editor can open 60,000-line code files instantaneously, syntax-highlight all of it using tree-sitter, and stay butter-smooth and responsive the entire time I'm searching through it, making multi-cursor edits, or moving through the file. As well as being able to open, for instance, log files that are multiple megabytes large without having to worry about anything.
Plus, Zed has a lot of refinements and features over other editors, even if you discount the benefits of GPUI. I've spoken at length before about why I think its approach to coding agents is the best at sort of enhancing the human in the loop and keeping you in a flow state and preventing skill degradation[0], but I also think the range and design of the editing actions are better than almost all modern text editors, closer to what something like Emacs provides, and the UI is overall more streamlined and pleasant to use than something like VS Code, even though it's generally the same philosophy. There's also the collaboration features and the edit predictions.
There’s a lot of small things you’ll hit if you use Zed where it’s a subtly nicer design point, but one of the big ones for me is project-wide search. Zed’s multibuffers are SO much better than VS Code’s equivalent.
If I’m debugging something on a coworker’s laptop, VS Code is mostly usable until I hit that.
If you’re a craftsman, it’s worth trying different tools!
Last time I tried it (a few months back) it felt really slow. Turns out it was spawning Node.js servers and using tons of memory.
Honestly, VS Code was much faster for me (and looked much better).
The only reason it would be spawning Node.js processes is if it's running a JavaScript/TypeScript language server for you, but that's not a property of Zed itself; it's something any other editor would do (including VS Code). Also, the resident memory of Zed, even with multiple entire projects with hundreds of tabs open, running several language servers and multiple terminals and AI agents, never exceeds about 900 megabytes for me, which is significantly less than VS Code uses even at startup.
Whatever it was that you ran into, it's likely some kind of fluke or platform-specific bug.
I am always interested in what features new editors have, how people use them, and whether I am missing out.
RAM usage:
- VS Code: 580 MB
- Zed: 410 MB
I don't see a reason yet to switch away from VS Code; it's more feature-complete, and I don't care about scroll speed, it's good enough in VS Code.
I don't understand the Zed hype; not only does the UI have tons of issues, the memory usage is not that different.
Things that keep me: fast. Easy project wide search that is fast. Easy file completion that is fast. Easy ability to add/remove line numbers from a gutter. Vi keys that... kinda mostly work. Sorta. Code collapsing that I didn't have to spend hours fidgeting with that also mostly works with Ruby (except for rescue clauses / end-of-function exception handling which collapses weirdly.)