Cargo-mutants :zombie: Inject bugs and see if your tests catch them
105 points | 9 days ago | 9 comments | github.com
rao-v
6 days ago
[-]
This seems like a near-perfect use of coding LLMs and a useful way to implement reinforcement learning.

“Add a major bug to this file that is not covered by existing tests” vs “Find the bug in this file” vs “write a sensible test in this file that protects against this type of bug”

reply
infogulch
6 days ago
[-]
Somehow that reminds me of how diffusion models are trained.
reply
acheong08
6 days ago
[-]
I'm pretty sure that's the premise of GANs (generative adversarial networks) rather than diffusion. Diffusion is more about learning to remove noise than about pitting models against each other.
reply
IshKebab
6 days ago
[-]
This is called mutation testing and is very common in formal silicon verification.

https://en.wikipedia.org/wiki/Mutation_testing

A downside of naively changing the source code is that you have to recompile for every mutant, which can be very slow (especially for Rust!). Obviously the right thing to do is to compile all the mutants in once and decide at runtime whether or not to insert a bug at each mutation point.

I had a brief skim through the help for cargo-mutants and it looks like it takes the naive approach, which is rather unfortunate.
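For a concrete picture, here is a minimal sketch of that runtime approach (the MUTANT_ID variable and the hand-written match are purely illustrative, not cargo-mutants' actual mechanism): every mutant is compiled in once, and each test run activates at most one of them.

    use std::env;

    // Hypothetical runtime switch: which mutant (if any) is active is read
    // from an environment variable, so the suite is compiled once instead
    // of once per mutant.
    pub fn active_mutant() -> Option<u32> {
        env::var("MUTANT_ID").ok()?.parse().ok()
    }

    // One function with two mutation points baked in.
    pub fn add(a: i32, b: i32) -> i32 {
        match active_mutant() {
            Some(1) => a - b, // mutant 1: `+` replaced with `-`
            Some(2) => 0,     // mutant 2: body replaced with a constant
            _ => a + b,       // unmutated behaviour
        }
    }

    #[cfg(test)]
    mod tests {
        use super::add;

        #[test]
        fn adds() {
            // A mutation run would execute the suite once per MUTANT_ID and
            // count a mutant as caught only if some test fails.
            assert_eq!(add(2, 2), 4);
        }
    }

With that layout, plain cargo test passes, while running it with MUTANT_ID=1 or MUTANT_ID=2 should fail, which is exactly what a mutation run wants to see.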

reply
gtramont
6 days ago
[-]
Here's a list of mutation testing tools for various languages: https://github.com/theofidry/awesome-mutation-testing

If you're looking for an option for Go, https://github.com/gtramontina/ooze can be of help. It was heavily inspired by https://github.com/zimmski/go-mutesting

reply
stblack
6 days ago
[-]
At RustConf 2024 in Montreal, Martin Pool, the creator of cargo-mutants, gave an excellent presentation. One of the best sessions of the conference.

https://www.youtube.com/watch?v=PjDHe-PkOy8

reply
kilroy123
6 days ago
[-]
Really cool! I wish there were such a thing for JavaScript.

I say this as a so-so software engineer. I badly wish there were more emphasis on improving software quality. There is so much the industry could do to radically improve quality, including adopting tools like this.

I know the incentives just aren't there, but still.

reply
MaoSYJ
6 days ago
[-]
Good news: it is a thing in JS!

https://stryker-mutator.io/

There was some hype about it some years ago.

reply
NotAnOtter
6 days ago
[-]
There is such a thing. Basically every popular language has a mutation-testing framework. It's pretty common in large-scale projects.
reply
J_Shelby_J
6 days ago
[-]
Neat.

Is this meant to be used in addition to the tools mentioned in this talk? https://youtube.com/watch?v=qfknfCsICUM

reply
mcdeltat
6 days ago
[-]
This is a cool project. Related fun anecdote: I once found an application at work where just about the ENTIRE test suite was a no-op, because the author (and subsequent copy-pasters) had misunderstood a GTest feature. Yep, dozens of unit tests that did not actually test anything. Fortunately the application mostly worked and wasn't critical. Experiences like these make me want to write "negative tests" that check that the tests themselves can actually fail.
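The comment doesn't say which GTest feature was misunderstood, so as a rough Rust-flavoured illustration of both halves of the anecdote, here is what a vacuous test and a "negative test" can look like:

    #[cfg(test)]
    mod tests {
        // Vacuous test: the assertion sits in a closure that is never
        // called, so the test passes no matter what the code does.
        #[test]
        fn looks_like_a_test_but_checks_nothing() {
            let check = || assert_eq!(2 + 2, 5);
            let _ = check; // never invoked; the test stays green anyway
        }

        // "Negative test": deliberately assert something false and require
        // the panic, proving the assertion path actually runs.
        #[test]
        #[should_panic]
        fn assertions_actually_fire() {
            assert_eq!(2 + 2, 5);
        }
    }

A mutation-testing run exposes the first kind indirectly: when every mutant of the code under test survives, the suite isn't really checking anything.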
reply
01HNNWZ0MV43FF
6 days ago
[-]
That's why I practice "red, green, refactor": I have to see a new test fail once before I believe that its passing means anything.

This generalizes into the view that learning and programming are both about skimming the edge of an envelope, alternating between things that work and things that almost work.
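As a tiny Rust-flavoured sketch of that discipline (the function below is made up for illustration): write the test first against a todo!() stub, run it and watch it go red, then fill in the body and watch it go green.

    // Step 1 (red): leave the body as `todo!()`, run the test, see it fail.
    // Step 2 (green): replace the stub with a real implementation.
    pub fn word_count(s: &str) -> usize {
        s.split_whitespace().count()
    }

    #[cfg(test)]
    mod tests {
        use super::word_count;

        #[test]
        fn counts_words() {
            assert_eq!(word_count("inject bugs and catch them"), 5);
        }
    }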

reply
matthewfcarlson
6 days ago
[-]
I wonder if there's something similar to run on a C codebase.
reply
zhaoweny
6 days ago
[-]
There's Mull [1], which is built on the LLVM toolchain and can handle any language that compiles down to LLVM bitcode. SQLite [2] also does mutation testing, by compiling and then mutating the generated assembly code.

[1]: https://mull.readthedocs.io/en/0.26.0/
[2]: https://sqlite.org/testing.html#mutation_testing

reply
BadBadJellyBean
6 days ago
[-]
reply
ramon156
6 days ago
[-]
Need to replace the :zombie: with a
reply
01HNNWZ0MV43FF
6 days ago
[-]
Hacker News blocks emojis

:V

reply