In this case, I would trust the output even less than the input. The input was memory-unsafe but hand-written. The output is memory-unsafe but also vibe-coded and has had no eyeballs on it. What is the point of abusing agentic AI for this use-case?
Have you ever seen what comes out of c2rust? It's awful. It relies on a library of functions which emulate unsafe C pointer semantics with unsafe Rust.
A few years ago, when I was struggling with bugs in OpenJPEG (a JPEG 2000 decoder), someone tried running it through c2rust. The converted unsafe rust segfaulted at the same place the C code did. It's compatible, but not safe.
Main insight: don't do string manipulation in C or unsafe Rust. It's totally the wrong tool for the job.
which is somewhat close to what their port produced...
like their goal from the get-go was to have something mostly exactly the same as the Zig version, "just in Rust", which implies mostly unsafe Rust and all the soundness/memory issues the Zig code has (plus probably some more, due to an AI-based port instead of a tool like c2rust)
the thing is if you don't keep things mostly 1:1 with all the problems that has there is absolutely no way to review that PR or catch the AI going rogue with hallucinations etc. With a mostly 1:1 port you can at least check if things seem mostly the same.
but it also means this is just step 1 of very many, with the other being incrementally fixing soundness, removing unsafe and (hopefully) making the code more idiomatic...
(to get to the actual question of why: I think the answer is that doing this port using AI is likely way easier/faster than first writing a tool, which needs an in-depth understanding of both languages, especially given that some features in Zig do not map 1:1 to Rust, and fuzzy mapping is what LLMs are good at and hand-written tools tend to be very bad at).
That is indeed the point of c2rust. It gives you a baseline that is semantically identical to the original codebase, and with that passing the full test suite, bug-for-bug, you can then start gradually adopting rusty idioms to improve the memory safety of the codebase.
2022 discussion on HN.[1]
There's a DARPA funded effort called TRACTOR, Translate All C To Rust, which has funded some efforts to develop a usable translator.[2] It's about 10 months after award, with no reported progress. I've been checking the personal sites of the academics involved, and they barely mention the project, although $5 million has been allocated to it.[3] The approach comes from U.C. Berkeley - let the LLM generate slop, check it using formal methods.[4] Not expecting near-term results.
[1] https://news.ycombinator.com/item?id=30169263
[2] https://csl.illinois.edu/news-and-media/translating-legacy-c...
I'm much more bullish on the opposite approach. Perform the naive translation, let the LLM loose on cleaning it up...
This is awful. They have some internal string format borrowed from a Zig library where the address of the item is in the low end of a pointer and the length is at the high end. Why are they doing that in 2026? It lets you save a few bytes at best. It doesn't enforce the Rust rule that strings must be strict UTF-8. It's totally alien to the safe way Rust handles strings.
[1] https://github.com/oven-sh/bun/blob/main/src/bun_core/string...
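For illustration, the general packing trick being described (pointer in the low bits, length in the high bits) looks roughly like this; `PackedStr` and its 48/16-bit split are hypothetical stand-ins, not Bun's actual layout:

```rust
// Hypothetical sketch of the packing trick described above, not Bun's actual
// layout: on x86-64, user-space addresses fit in the low 48 bits of a u64,
// leaving the top 16 bits free to smuggle in a length.
const ADDR_BITS: u32 = 48;
const ADDR_MASK: u64 = (1u64 << ADDR_BITS) - 1;

#[derive(Clone, Copy)]
struct PackedStr(u64);

impl PackedStr {
    /// Pack a pointer and a length (< 65536) into a single u64.
    fn new(ptr: *const u8, len: usize) -> Self {
        assert!(len < (1 << 16), "length must fit in 16 bits");
        PackedStr(((len as u64) << ADDR_BITS) | (ptr as u64 & ADDR_MASK))
    }

    fn len(self) -> usize {
        (self.0 >> ADDR_BITS) as usize
    }

    /// UNSOUND in general: nothing ties the packed pointer to the backing
    /// storage's lifetime, and nothing guarantees the bytes are UTF-8 --
    /// exactly the objections raised in the comment above.
    unsafe fn as_str<'a>(self) -> &'a str {
        let ptr = (self.0 & ADDR_MASK) as *const u8;
        unsafe { std::str::from_utf8_unchecked(std::slice::from_raw_parts(ptr, self.len())) }
    }
}

fn main() {
    let s = String::from("hello");
    let p = PackedStr::new(s.as_ptr(), s.len());
    assert_eq!(p.len(), 5);
    // Only OK here because `s` is still alive and known to be UTF-8.
    assert_eq!(unsafe { p.as_str() }, "hello");
}
```

The space saving is real (one word instead of two), but as the comment says, it trades away both the lifetime tracking and the UTF-8 guarantee that Rust's `&str` normally carries.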
They did ;) a highly dynamic one...
This doesn’t work like you think it does. These things are full of errors and make the code very verbose and hard to reason about. It works with small apps, not entire rewrites.
The issue isn't the existence of undefined behavior that miri would catch. The issue is exposing an API that allows undefined behavior from safe code - which miri only catches if you go write the test that proves it.
This isn't an altogether unreasonable thing to happen during an initial port from an unsafe language. You can go around later and make sure the functions that wrap unsafe code do so correctly, and the Bun team seems to be doing that. Temporarily marking some unsafe functions as safe during a porting stage isn't a real issue. It's a bit strange to merge it into the main repo in this state, but not wholly unreasonable if the team has decided they're definitely doing this. The only real problem would be making an actual release with the code in this state.
It's also a bit unfortunate that they didn't immediately set up their tests to run in miri if only because LLMs respond so well to good tests - I know they didn't do this not because of this github issue (which doesn't demonstrate that) but because there's another test [1] that absolutely does invoke undefined behavior that miri would catch. Though the code it's testing doesn't actually appear to be used anywhere so it's not much of a real issue. That said it's obviously early in the porting process... maybe they'll get around to it (or just get rid of all this unsafe code that they don't actually need).
[1] https://github.com/oven-sh/bun/blob/4d443e54022ceeadc79adf54... - the pointers derived from the first mutable references are invalidated by creating a new mutable reference to the same object. In C terms think of "mutable reference" as "restrict reference which a trivial mutation is made through". It's easy to do this properly, derive all the pointers from the same mutable reference, it just wasn't done properly.
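A minimal sketch of the aliasing rule in question (illustrative, not Bun's code): a raw pointer derived from one mutable reference is invalidated as soon as a second mutable reference to the same object is created, which Miri flags under Stacked Borrows; deriving every pointer from the same reference avoids it:

```rust
fn main() {
    let mut buf = [0u8; 4];

    // WRONG (UB under Stacked Borrows; Miri reports it, rustc does not):
    //     let p1 = &mut buf as *mut [u8; 4]; // raw pointer from first &mut
    //     let p2 = &mut buf as *mut [u8; 4]; // second &mut invalidates p1
    //     unsafe { (*p1)[0] = 1; }           // use of an invalidated pointer

    // RIGHT: derive every raw pointer from the same mutable reference, so no
    // intervening &mut retags the allocation out from under the pointers.
    let base: *mut u8 = buf.as_mut_ptr();
    unsafe {
        *base = 1;
        *base.add(1) = 2;
    }
    assert_eq!(&buf[..2], &[1u8, 2][..]);
}
```

Run under `cargo miri run`, the commented-out version aborts with a borrow-stack error; the corrected version passes.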
PS. Spamming github just makes people less likely to work in the open. Please don't. We can all judge this work just fine on third party sites.
PPS. And we might want to withhold judgement until it's in a published state. Judging intermediate working states doesn't seem terribly fair or interesting to me.
Couldn't a case be made that it's better to get Bun to the language with the stronger type system first and, once there, use that stronger type system as leverage for these kinds of improvements as a follow-on effort? It seems preferable to requiring perfection on the very first step.
This is what they are doing.
They are working through the issues as they come in.
What is a bit disappointing is that the Rust code apparently has APIs that aren't marked unsafe but may cause UB anyway. When doing this kind of translation, I'd always err on the side of caution and start by marking all/most things unsafe. Or prompt the slopbots to do the same I guess.
Then you can go in and verify the safety of individual bits step by step.
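A sketch of what that conservative defaulting might look like (the function names are illustrative, not Bun's API): translate with `unsafe fn` plus a documented contract first, and only later wrap it in a safe API once the invariant is verified locally:

```rust
// Hypothetical porting pattern; the names are illustrative, not Bun's API.
// Step 1: translate conservatively, pushing the proof obligation to callers.

/// # Safety
/// `ptr` must point to `len` initialized bytes that outlive the returned slice.
unsafe fn bytes_from_raw<'a>(ptr: *const u8, len: usize) -> &'a [u8] {
    unsafe { std::slice::from_raw_parts(ptr, len) }
}

// Step 2 (later): once an invariant is known to hold locally, expose a safe
// wrapper that enforces it, instead of silently dropping the `unsafe` marker.
fn first_half(v: &[u8]) -> &[u8] {
    // SAFETY: pointer and length are derived from the same live slice.
    unsafe { bytes_from_raw(v.as_ptr(), v.len() / 2) }
}

fn main() {
    assert_eq!(first_half(b"abcdef"), &b"abc"[..]);
}
```

The point of the ordering is that an over-broad `unsafe fn` is merely annoying for callers, while an over-broad safe wrapper is a soundness hole.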
This is expected, because unsafe Rust can leave your program in an unhealthy state, since the language doesn't hold your hand anymore.
This asymmetry is well understood by marketing and PR professionals, and actively exploited.
It is. We’re what, a week into this exercise? Absolutely everyone criticizing it, with no exceptions, is behaving like a micromanaging middle manager who couldn’t even dream of doing the work themselves.
I half want to start a list of “people to ignore”, but such people tend to expose themselves in every other comment anyway.
> Please don't post comments saying that HN is turning into Reddit. It's a semi-noob illusion, as old as the hills.
Did they even claim it was "memory-safe"? Every discussion of this topic has had dozens of comments noting that their vibed codebase is bursting at the seams with unaudited unsafe blocks, lightly reviewed by people who not only seem to not understand Rust, but who seem incensed at the idea of needing to understand any programming language in the first place.
They did cite Rust's safety as a motivating factor for the port. That doesn't imply trying to achieve that simultaneously with the language change — which is good, because that would be insane. (Or, if you prefer, even more insane.)
You cannot faithfully port a codebase to a new language while also radically re-architecting it. You have to choose.
They want the safety benefits of Rust going forward; i.e., after it's finished, when they then write new code in Rust.
The newer posts go into detail about the rearchitecting that follows.
they didn't,
actually the port is trying to be mostly 1:1 and in turn is mostly unsafe rust, which means no benefits initially
but also, doing the 1:1 port to mostly unsafe Rust is only the first step of a full port; you then incrementally go through it fixing issues and removing "unsafe" usage. (And long term likely also doing some refactoring toward more idiomatic Rust, but that has less priority.)
The problem is there was no blog post describing the whole thing to someone without contextual knowledge. Instead, just linked PRs, which in this case is somewhat close to an "as if nearly all people only read the HN headline" situation :/
Like, a more context-giving version of the first HN post would have something along the lines of `Show HN: Bun is porting to safe Rust (PR link), starting with an AI-based automated port to mostly unsafe Rust, which will likely be merged once it behaves mostly the same as Bun in the test suite. It must be followed up with incremental PRs to remove unsafety, and likely also a lot of unsoundness related to the way it's ported (some explanation of why this port will have unsoundness).`
The "1:1" assumption is a massive unjustified assumption. Rust and Zig have different memory models, so it's possible to do a "1:1" translation of Zig code to Rust and end up with undefined behavior in Rust.
For example, Zig code might make assumptions about lifetimes based on implicit knowledge of which allocator was used for some memory. That could cause problems in Rust if you erase the lifetime https://github.com/oven-sh/bun/blob/main/src/bun_core/string...
It's not; that's clear from this kind of bug popping up. Functionally this bug exists because `PathString` was converted into a "safe" Rust API but still works the same internally as the original Zig code did (using `unsafe`), and that introduces UB that wasn't there in the Zig code.
If it were attempting to be 1:1 with no behavior changes (like c2rust attempts to do), then this would not have been turned into a "safe" Rust API like this.
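To make the failure mode concrete, here is a hypothetical sketch (not Bun's actual `PathString`): a wrapper that erases the lifetime of borrowed bytes is unsound even if every internal step "works", while a version that keeps the lifetime visible to the compiler turns the misuse into a compile error:

```rust
// Illustrative sketch of the failure mode, not Bun's actual PathString.
// The unsound shape: a "safe" API over an erased lifetime.
//
//     struct PathString { ptr: *const u8, len: usize }
//     impl PathString {
//         fn as_str(&self) -> &str {
//             unsafe {
//                 std::str::from_utf8_unchecked(
//                     std::slice::from_raw_parts(self.ptr, self.len))
//             }
//         }
//     }
//
// Safe code can drop the original String and still call as_str():
// use-after-free with no `unsafe` at the call site.

// A sound, still 1:1-ish alternative keeps the borrow visible to the compiler:
struct PathStr<'a>(&'a str);

impl<'a> PathStr<'a> {
    fn new(s: &'a str) -> Self {
        PathStr(s)
    }
    fn as_str(&self) -> &'a str {
        self.0
    }
}

fn main() {
    let owned = String::from("/tmp/bun");
    let p = PathStr::new(&owned);
    assert_eq!(p.as_str(), "/tmp/bun");
    // Dropping `owned` before using `p` is now a compile error, not UB.
}
```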
In your quote, there is no time-dependency between the lie and the truth. Whereas here, it's an attractive lie (easily parsed, great narrative), followed up by truths (that need more than surface-level analysis).
The rewrite was a code translation meant to be a starting point.
> a big, flashy announcement (here: bun was re-written in memory-safe rust in a couple weeks), and the relatively small reach of a correction (often just a footnote on an old article, here a GH issue).
The Bun team never made a big announcement that the code is now memory safe. They've been clear that this is the starting point.
Anyone expecting it to be perfect immediately and to have solved all of the memory problems in the original Zig code is arguing with an announcement they imagined, not what the Bun team has said.
Did anyone try to map this code back to the original codebase to see if this memory problem exists in the original codebase?
FWIW what is being discussed is not memory problems, it's breaking rust invariants (the unsafe code has to follow specific rules, e.g. annotate lifetimes properly).
The bar for matching tsc's behavior is really _really_ high. see:
https://github.com/type-challenges/type-challenges
I'm not against using LLMs to write a lot of code. But verification should be 100x more robust now that we can output code at this rate.
- Test before code. Bun had lots of tests, so that's good, but maybe they could start by asking Mythos to write ~20k additional tests that pass on Zig Bun first.
- Deterministic anti-slop features. LLMs love to solve the problem in the wrong abstraction layer or place. There are many ways to catch this with deterministic tests. I do this in tsz a lot
- A roadmap that's constantly evolved by humans.
- Taking a pause, looking at how the progress is going, and undoing slop.
- Fuzztest(https://github.com/google/fuzztest) style "trying to break things" with the powers of LLM
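One cheap, deterministic version of that "trying to break things" idea is differential testing: feed pseudo-random inputs to both the ported function and a reference implementation and compare outputs. A stdlib-only sketch (the two `sum` functions are stand-ins for a ported routine and its reference):

```rust
// Minimal differential-testing sketch, stdlib only. The two functions are
// stand-ins for a ported routine and its reference implementation.
fn reference_sum(xs: &[u32]) -> u64 {
    xs.iter().map(|&x| x as u64).sum()
}

fn ported_sum(xs: &[u32]) -> u64 {
    let mut acc = 0u64;
    for &x in xs {
        acc += x as u64;
    }
    acc
}

/// Tiny deterministic PRNG so any failure is reproducible from the seed.
fn lcg(state: &mut u64) -> u32 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    (*state >> 33) as u32
}

fn main() {
    let mut seed = 42u64;
    for case in 0..1_000 {
        let len = (lcg(&mut seed) % 64) as usize;
        let input: Vec<u32> = (0..len).map(|_| lcg(&mut seed)).collect();
        assert_eq!(
            ported_sum(&input),
            reference_sum(&input),
            "mismatch in case {case}"
        );
    }
}
```

A real harness would fuzz the Zig binary against the Rust one at the process boundary, but the shape is the same: random inputs, reproducible seed, compare.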
Yes, tools like Miri, which this very post is about.
I've seen large rewrites and migrations take both approaches -- in my experience, the former usually works out better.
Thus far most of the buzz and marketing has been entirely negative from people who are against AI.
My take is that most of the buzz is also tied to recent negative opinions of Anthropic themselves due to some of their recent decisions.
It's not that they're using AI, it's the massive rug pull on bun users.
* Make a huge deal out of it how “Claude Code enabled Bun team to rewrite 1+ mil of Zig lines to Rust” and write a blogpost, VCs are salivating
* Basic checks fail
* Let Mythos rip the codebase to shreds, spend God knows how much more
* Write a separate blogpost
* Charlatans and smooth brains clap and defend against “delusional anti-AI mob”
* VCs orgasm even harder
Clap, clap, clap. That’s how you make money, folks.
And btw, we need to get rid of software engineers now.
"Zig, let me Ai you"
"no"
*Ai's Zig fork, suffers from memory bugs*
"Well I'm moving!"
*Ai's code into Rust, suffers from memory bugs*
Bun's fork of Zig was just an unsound hack that at best would have produced a strictly inferior speedup compared to our current work with incremental compilation, which is already plenty usable:
- June 2025 core team starts using it with the zig compiler itself https://ziglang.org/devlog/2025/#2025-06-14
- April 2026 https://ziglang.org/devlog/2026/#2026-04-08
> Zig's AI stance is ridiculous & politically-motivated
It's literally an issue with our business model to mess with our contributor pipeline, can't get more concrete than this.
Well, presumably they want to contribute to the compiler. I know that you did not like those contributions, and that view seems entirely valid, but obviously "no AI" rules out their development model (by design, and you likely think that's good, and maybe it is!).
Not intending to defend the bun move, but obviously a project using Zig and also using AI might feel motivated to avoid Zig since they're ruled out as contributors.
> An example of this is the changes to type resolution which happened in the 0.16.0 release cycle—these didn’t affect users too much, but had big implications for the compiler implementation. Before those changes, the compiler’s behavior was often highly dependent on the order in which types and declarations were semantically analyzed by the compiler. Some orders might result in successful compilation, while others give compile errors. Single-threaded semantic analysis prevented these bugs from causing user-facing non-determinism. The rewritten type resolution semantics were designed to avoid these issues, but Bun’s Zig fork does not incorporate the changes (and has not otherwise solved the design problems), which means their parallelized semantic analysis implementation will exhibit non-deterministic behavior. That’s pretty much a non-starter for most serious developers: you don’t want your compilation to randomly fail with a nonsense error 30% of the time.
There is a reason why: Zig is upholding quality, and they hate it.
Not sure why you're inventing a stance for me to be arguing against, when the Zig compiler stance is publicly articulated as exactly what I'm describing.
The zig team is not that big. They don't have 200 core contributors to filter through the noise and mine PRs for "gems".
I think an outright rejection of AI contributions makes sense, regardless, and has nothing to do with politics. A Zig developer was forced into writing a long-form post to justify rejecting Bun's awful contribution (lest their PR be sullied, and then it was anyways), and the act of writing that post probably took 10 or 20x more human time and effort than Bun's contribution. Now multiply that by 100 for every random fucking moron with an LLM submitting a contribution. That is not sustainable. Open source maintainers of popular projects would have to make rejecting AI PRs their full time job and stop developing the project itself altogether, if they took them seriously and reviewed at length to conclusively identify whether a PR is good or bad. Given that 99.99% of AI PRs are bad, it's simply not worth it. You cannot possibly expect humans to spend more time reviewing code than drive-by contributors spent generating it, especially when many of them are unpaid volunteers. It's an absolutely ridiculous expectation.
I think they shat on the community that trusted them by trying to advertise their owner company
There’s no way they had time to review the code. This just seems so wildly irresponsible for such an important and high profile project.
Isn't the whole point of AI companies using Rust that it's explicit, safe, and AIs are fairly good at writing it?
Related: If AI writes your code, why use Python? (which notes why Rust has taken off for LLMs) https://news.ycombinator.com/item?id=48100433
But I think their true strategy is to have AI produce "fixes" like these which will end up infecting the entire codebase: https://github.com/oven-sh/bun/pull/30728
https://www.reddit.com/r/rust/comments/1hxjdvp/eli5_what_is_...
It's been like a day since the merge, presumably such followups are coming.
Will definitely use that going forward
Didn't find anything on my existing vibecoded rust projects but can't hurt
1 week turnaround I guess is what they meant.
If you find a bug, just go straight to blog posts and CVEs to denounce this idiocy. It ranks higher on Google.
The "rewrite it in rust" commit is +1M lines of code. Humans haven't looked at that in depth. In about a week, they saw the tests passed and pushed it to main. Now people have started to look through it and are pointing out glaring issues. And the solution is just going to be "feed it to another AI and ask it to fix it".
The entire codebase is slop now. Nobody knows what it does. It manages to pass some tests, but it's largely a black box simply because no human has read it yet. The code isn't guaranteed to be anything close to 1:1 with the old codebase. It's probably vaguely shaped like the old codebase, but new bugs could be there, old bugs could be there; nobody knows anything yet.
It's going to be interesting to see how recoverable this is. They are almost certainly going to just hand every file to an AI, say "look for soundness issues and fix them", and then what? If AI is making huge, sweeping changes to the code so frequently that humans can't keep up, is that really maintainable? The only solution appears to be "even more AI", while anybody who looks closely gets scared away by the too-large-to-comprehend-and-entirely-slop codebase.
This kind of thing has been happening with many smaller projects already, but now it's a larger project and happening in a much more public way, with the intent to replace human-written, mostly-understood code with slop. I suspect the same thing, with the same problems, is happening inside all the largest companies, just not quite as obviously.
I am not against AI code, it can be perfectly fine.
The principal issue in my mind is the rate of change.
Once you rewrite a code base like this (in a week, no less), the only way to work on it in the future is with AI tools, because no single person has any knowledge of any specific piece of the code base any more.
AI generated code that is run through a classic PR process would potentially be fine, but then you sorta lose the entire point of using AI.
That's the idea, to transform businesses to be wholly dependent on "AI" service to develop software. What better way than to re/write entire codebases until no human being understands it.
The Zig project know this, and its so-called "anti-AI" policy is actually pro-community and cultivating human understanding. It's not about the tool or technology, per se, it's about people, knowledge, and sustainability.
In contrast, the Bun project is demonstrating that it doesn't care about any of that, YOLO-ing its way to losing the trust of its users, contributors, and maintainers. Oh well, AI will maintain the project now, since no one else can.
I'd be concerned that by jumping onboard with this sort of development process I'd lose touch with how to engineer software in a detail-oriented or remotely rigorous way.
It also makes me question what sort of value the entire Bun project ever had if a drop-in replacement can just be thrown in here like it's nothing. Why do we need all these JS runtimes again?
The AI bubble is so large that we've also forgotten how useless and dumb a lot of software engineering labor was even before LLMs came along. We were already in a bubble.
All that is to say, I think it's useful to reframe some conversations about AI as, "if AI can accomplish this task, was it ever actually valuable?" I think for some specific things, the answer will be yes, but the tech industry has been huffing its own farts for so long I really don't think anyone has sight anymore of what's economically valuable in a ground truth sense. Much like LLMs themselves, this confusion pollutes the entire well of discourse about their economic utility.
What would have been significantly better is just rewriting Claude Code in a language that's actually well suited to what it's doing in the first place (which could well be Rust; Codex is written in it as prior art). It's funny how the vibe-coding promoters are keen on things like this, rewriting other codebases as fast as possible with little quality checking, but they are still defensive of their own code.
Jarred is an exceptional 1% engineer, and it's likely he can succeed at this port, to the detriment of naysayers who don't believe there's any chance it's possible.
- Its a throw thing at the wall and see what sticks situation
- LLMs will improve*
- Using LLMs in an agentic way will improve (git worktrees, sliced PRs, spec driven steps)
So what happened here is a mess, but you gotta break a few eggs to make a souffle.
It's a learning step and I am glad it happened, there will be so many things to debrief from this.
I don't use Bun or Rust but fair play to them having a punt.
<Shameless plug> I have been working with Claude code to spec out and bring back to life a Spring Boot starter library for Apache Solr search
https://github.com/tomaytotomato/spring-data-solr-lazarus
There were a few points I had to steer it but the result has been a good implementation.
My grandpa told my dad never to show a client a work in progress - You told them when you'd get the work done, and they can see the finished result when it's ready.
It's just a story so don't wrap yourself around the axle with counter-examples. I think it's fair to say that an open-source project going through a language translation is going to have transitional periods as they shake things out, and criticizing every snapshot as some proof that they're incompetent is useless.
Step 2: Purchase an entire company for a product that, if you squint, might help paper over the entirely predictable problems that arise from using the wrong tools to implement the wrong architecture, because surely the solution isn’t reevaluating your original engineering choices.
Step 3: Perform a buggy, vibe-code rewrite of the tool you just bought. A tool you only need because — for whatever internal political reasons — sunk cost means you can only keep digging.
Step 4: ???
The goal of a library is to provide the encapsulation such that the unsafety doesn't spread.
If undefined behavior occurs, the fault lies with whoever wrote `unsafe { ... }` in the body of a function. If I write `unsafe { ... }` in order to call an unsafe library function, and I don't meet the library function's prerequisites, then it's my fault. If the library internally writes `unsafe { ... }` while providing a safe wrapper, and I never actually wrote `unsafe { ... }`, then the fault is the library's. If neither I nor the library wrote `unsafe { ... }`, then it is the fault of the compiler.
Saying "in safe Rust" means that `unsafe` occurs in neither the user code nor the library. In this context, since we've heard how many uses of `unsafe { ... }` exist in the Bun rewrite, I'd read "in safe Rust" to mean "without calling any functions marked as unsafe".
Maybe it's better to think about this in reverse: C and C++ have 'defined behavior', but unsafe Rust intentionally does not; it's just whatever the compiler and platform let you get away with. Ultimately it's still just a computer which stores values in memory and jumps to subroutines.
Undefined behavior is everything else. C and C++ are relatively unique in that their standards explicitly say "combining these constructs in this way is undefined", and we call those cases explicit UB. There's also a larger universe of implicit UB that standards omit. Most (all?) languages have implicit UB, even if they lack the explicit stuff. What happens when you get ENOMEM is a common one.
Rust does something similar to C/C++ and lists a bunch of UB that's only possible with incorrect code in unsafe blocks. Correct code placed in an unsafe block remains defined, as does code without unsafe (up to compiler/language bugs).
In this context "UB" means something different than how you're using it. The UB being mentioned here is the "nasal demons" form, i.e., programs which contain undefined behavior have no defined meaning according to the language semantics.
What you're talking about is probably better described in this context as "unspecified behavior", which is behavior that the language standard does not mandate but which does not render programs meaningless. For example, IIRC in C++ the order in which g(), h(), and i() are evaluated in f(g(), h(), i()) is unspecified - an implementation can pick any order, and the order doesn't have to be consistent, but no matter the order the program is valid (approximately speaking).
So this "unspecified behavior" might turn into the more nasal demon type when g(), h() and i() share mutable state and assume some particular sequential order of execution. No?
Unsafe Rust allows you to tell the compiler “hold my beer”. It’s a concession to the reality that the normal restrictions of Rust disallow some semantically valid programs that you might otherwise want to write. The safeguards work great in most cases, but in some they’re overly restrictive.
In practice, the overwhelming majority of code is able to be written in safe Rust and the compiler can have your back. The majority of the rest is for performance reasons, interacting with external functions like C libraries over FFI, or expressing semantics that safe Rust struggles with (e.g., circular references).
fn safe_function(...) -> (...) {
    // do unsafe things here
}

then `safe_function` can be called from safe code, and still trigger UB. This wouldn't be a soundness issue in the Rust compiler, but instead a bug in `safe_function`.
There are many reasons you might want to do that. In particular, it's very common in Rust to have a library define some data structure that uses unsafe under the hood, but checks whatever invariants it needs to, and provides solely safe methods to external callers. Rust's `String` type is like this: it's (roughly) a `Vec<u8>`, i.e. heap-allocated bytes. It has the additional invariant that these bytes are valid UTF-8, though. See for example `push_str`, which (roughly) concatenates two strings.
https://doc.rust-lang.org/src/alloc/string.rs.html#1107
It does roughly the following:
1. reserves enough space for the concatenated string within the source string
2. does some pointer arithmetic and a call to Rust's equivalent of `memcpy` (unsafe)
3. re-casts this pointer to a string object without checking that it's valid UTF-8 (unsafe)
While these individual calls are unsafe, `push_str` ensures that in this particular situation they are sound, so the stdlib authors do not mark `push_str` as unsafe. It has no invariants that must be maintained by external callers.
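The same encapsulation pattern can be sketched with a made-up type (`Utf8Buf` is illustrative, not the stdlib's actual implementation): unsafe internals, a safe surface, and the invariant upheld at the boundary:

```rust
// Sketch of the encapsulation pattern described above. `Utf8Buf` is a
// made-up type, not from the stdlib or Bun.
struct Utf8Buf {
    bytes: Vec<u8>, // invariant: always valid UTF-8
}

impl Utf8Buf {
    fn new() -> Self {
        Utf8Buf { bytes: Vec::new() }
    }

    /// Safe to expose without `unsafe`: the input is `&str`, so the bytes we
    /// copy are valid UTF-8 and the invariant is preserved by construction.
    fn push_str(&mut self, s: &str) {
        self.bytes.reserve(s.len());
        let old_len = self.bytes.len();
        unsafe {
            // SAFETY: we just reserved `s.len()` bytes past `old_len`, the
            // source and destination do not overlap, and we set the length to
            // exactly the number of initialized bytes.
            std::ptr::copy_nonoverlapping(
                s.as_ptr(),
                self.bytes.as_mut_ptr().add(old_len),
                s.len(),
            );
            self.bytes.set_len(old_len + s.len());
        }
    }

    fn as_str(&self) -> &str {
        // SAFETY: invariant -- `bytes` is always valid UTF-8.
        unsafe { std::str::from_utf8_unchecked(&self.bytes) }
    }
}

fn main() {
    let mut b = Utf8Buf::new();
    b.push_str("héllo ");
    b.push_str("wörld");
    assert_eq!(b.as_str(), "héllo wörld");
}
```

Because every `unsafe` block's preconditions are discharged locally, no caller can break the invariant from safe code, which is exactly what makes the safe signature honest.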
I take issue with the phrasing of OP's title: "allows for UB in safe rust". AFAIK there are compiler bugs that allow UB in safe Rust, but this is not what is happening here. We have UB in an unsafe block (which is to be expected) which enables an issue outside in safe code. What is your opinion? Is calling this "UB in safe Rust" justified?
This is a bug in the library, namely in Bun's PathString implementation. The bug is a soundness issue, precisely because usage of Bun's PathString implementation allows for UB in safe rust. Now this buggy library isn't that big of a concern for the community, because Bun is the only consumer. It's not also an indication of a compiler bug, because Bun's library is implemented using unsafe rust. But the fundamental issue is that usage of Bun's PathString implementation allows for UB in safe rust, and is therefore (clearly) unsound.
Here's some links on this topic which have some examples:
https://doc.rust-lang.org/nomicon/working-with-unsafe.html https://www.ralfj.de/blog/2016/01/09/the-scope-of-unsafe.htm...
And it's not like Bun when written in Zig has been a beacon of stability either. It has been segfaults all over the place.
I don't see what the big deal is here.
architector4@AGOGUS:/tmp$ git clone --depth=1 'https://github.com/oven-sh/bun'
Cloning into 'bun'...
…
architector4@AGOGUS:/tmp$ cd bun/
architector4@AGOGUS:/tmp/bun$ find -type f -name '*.rs' -exec grep unsafe {} \; | grep -v '//' | wc -l
13255
....Thirteen thousand two hundred and fifty five lines without comments with the word "unsafe" in them in Rust code files across this rewrite.
This is so gross.
I'm a founder of an early-stage startup. I built a precision-editing tool system (called HIC Mouse). It provides coordinate-based addressing, staged batching with atomic rollback, embedded agent guidance, and more. It works well, it's available on VS Code Marketplace, and I've worked for a year and am still grinding every day, working so hard, just to get people to think about trying it, and to get attention paid to it. I did rigorous, careful benchmark research to make sure I wasn't just fooling myself. I incorporated, built a sales pipeline, changed my life by taking a chance and launching a business, and I pound the pavement and toil in obscurity every day and night, trying so hard to get interest in my product. I check every diff painstakingly before committing. I may make tools for AI agents but I am unbelievably careful about reviewing and thoroughly testing their code, and usually rather ruthlessly editing quite a bit further beyond any initial version drafted, long before deciding it is good enough to ship. I take enormous pains to get things right and worry constantly about whether I'm doing enough to make HIC Mouse secure and performant for my users. All I want is to make my users happier and to give them a genuine way to get "surgical, precise edits" that "don't touch the other lines", like we all ask of our AI agents over and over all day if we're using AI.
Or maybe not. Here we have Bun. Who cares about 90K GitHub Stars and massive community engagement -- just go crap all over them, all at once, with this AI tripe that you obviously neither tested in any meaningful manner, nor documented, nor read, I am assuming, before merging the whole bloated mess to production. What a disgraceful way to treat your users! I would be so grateful if I had a tiny fraction of the interest in my project that the Bun team has. I could never imagine shipping this garbage in a million years.
I'm sorry to vent but this just isn't defensible. It's the very worst of AI. I'm not going to wish ill on Bun, but it just makes me sad that I spend so much effort, work so hard to do things right, and painstakingly review everything because it's not just me any more and I do have folks who depend on my code being reliable and secure. And meanwhile, Bun just gives a huge middle finger to 90k+ starred supporters not to mention the millions of users who didn't click on the star but rely on the library, by acting this disrespectfully and disgracefully towards their own users. How they didn't take one look at this and promptly revert and apologize is simply beyond me. Again, sorry to vent, but this made me irrationally mad.
In fact using the word "rewrite" itself is pretty inaccurate.
As has been mentioned, the goal was a port so they "could" eventually rewrite most of it to be idiomatic Rust. The main benefit right now is the compiler, and being able to use its tooling to surface issues that were already hidden when the code was in Zig.
If you go into this codebase expecting to see idiomatic rust and get angry when it's not there, you are going in with the entirely incorrect attitude.
It's understandable how people see it as AI slop or whatever given the division among developers at the moment. But please see it for what it is instead of just jumping to conclusions.
They may have said that, but quite clearly the value they actually get out of it is getting the headline "AI reimplements complex, broadly used software in 2 weeks, but makes it way better because it's rust now" in front of a million people's eyes, only 1% of whom will ever find out it was mostly fluff
This is entirely disingenuous. Jarred has already made it clear what value they get out of moving off of Zig. Yes they used AI heavily to attempt this goal but I don't see what the big issue is. They haven't even released it yet and Anthropic themselves have said 0 about this.
The "headlines" thus far are really just people completely uninvolved with Bun and with all to gain by perpetrating "AI BAD".
My honest take: the big issue isn't "what if it goes wrong", it's the fear that a migration of this size works out of the box while being done almost entirely by AI.
I wonder what the real legitimate use cases for "unsafe" are in the first place; it's there for a reason, right?
In my application I'm able to guarantee that there is no modification to the backing file by making them read-only and ensuring nobody messes with them, but that guarantee exists outside of rust. So -- unsafe with a big SAFETY comment explaining the requirements if you use it.
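The pattern described (an invariant guaranteed outside of Rust, documented in a SAFETY contract) might be sketched like this; `mapped_bytes` is illustrative, and a leaked allocation stands in for the real memory-mapped file:

```rust
/// # Safety
/// The caller must guarantee, outside of Rust's knowledge, that `ptr..ptr+len`
/// stays mapped, readable, and unmodified for the lifetime of the returned
/// slice -- in the scenario above, because the backing file is read-only and
/// the mapping lives as long as the process.
unsafe fn mapped_bytes<'a>(ptr: *const u8, len: usize) -> &'a [u8] {
    unsafe { std::slice::from_raw_parts(ptr, len) }
}

fn main() {
    // Stand-in for a real mmap: a leaked allocation has a stable address for
    // the rest of the program, which satisfies the contract for 'static.
    let data: &'static [u8] = Box::leak(vec![1u8, 2, 3].into_boxed_slice());
    // SAFETY: see the contract above; the leaked allocation is never freed.
    let view: &'static [u8] = unsafe { mapped_bytes(data.as_ptr(), data.len()) };
    assert_eq!(view, &[1u8, 2, 3][..]);
}
```

The actual mapping call would come from something like the `memmap2` crate or raw libc; the point here is only the shape: the `unsafe fn` carries the contract, and each call site states why the contract holds.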
Much rust code will never use unsafe. Systems code is likely to use a bit but also to know what it's doing.
Things like this port of bun are unusual and presumably transitory on the way to an implementation with minimal use of unsafe.