Jujutsu (jj), a Git compatible VCS (89 points, 112 comments) https://news.ycombinator.com/item?id=41895056
Steve's Jujutsu Tutorial (158 points, 117 comments) https://news.ycombinator.com/item?id=41881204
Jujutsu: A Next Generation Replacement for Git (94 points, 80 comments) https://news.ycombinator.com/item?id=40908985
A better merge workflow with Jujutsu (135 points, 90 comments) https://news.ycombinator.com/item?id=40842762
Jujutsu: A Git-compatible DVCS that is both simple and powerful (673 points, 262 comments) https://news.ycombinator.com/item?id=36952796
Jujutsu VCS: Introduction and patterns (234 points, 122 comments, 2025-02-04) https://news.ycombinator.com/item?id=42934427
I assume that’s the inspiration for this link.
It used to be such a mess with all the VCS systems and having to know all of them to commit on various dependencies from different projects.
I know that jj can be backed by git. But already it doesn’t need to be. Between this and other projects, I don’t have to squint too hard to see a future that looks like the past. I hope I’m wrong.
This bears repeating.
To further frame the importance of this, we've started seeing git being used as the deployment tool of some notable mainstream software projects such as nvm. This is only possible if a VCS becomes ubiquitous and a de-facto standard.
Essentially, it agrees that Git has become a de-facto standard as a file format and as a protocol, but that doesn't mean it can't have a very different approach to that file format and protocol.
It's similar to how HTTP can support a RESTful approach, or GRPC, or GraphQL or whatever else, and all these different systems can build on the same underlying technology.
No one uses jj. You need to go way out of your way to install something like jj. It makes no sense to ask people to install jj just so they can install something else.
Not so with Git. Git is so popular that it comes pre-bundled with other critical software, and some OS distributions even install it by default.
Git is a problem-solver. JJ is a solution desperately looking for a problem, for which the right solution is already Git.
Git is also a problem-causer. The existence of sites like ohshitgit.com is testament to that. Git has wasted thousands of developer years with a terrible UI, overly-complex concepts, and lack of basic undo.
Some people have internalized, or worked around, git's flaws for so long, they can no longer even see them as such.
It's a testament to worldwide adoption first and foremost. If `jj` had the same adoption, there would probably be similar sites about it. No tool is perfect, and all have their quirks.
Git is wildly popular, and a large part of that is its solid and fast technical underpinnings. Unlike its open source predecessors (svn/cvs), it excelled at distributed development.
At the same time, git also has a terrible CLI, combined with unnecessary complexity like staging, stashes, rebase states, etc. This has been true from the beginning. Mercurial was built at the same time (also by a Linux kernel dev!), and was generally considered to have a much better CLI and mental model. I think hg lost out despite that because it was slower than git, and didn't have Linus behind it.
You're right that all tools have their quirks — jj is surely no exception. But it seems like bad engineering and a lack of curiosity to look at the quirks in a tool and not ask if we can do better. How much of a tool's difficulty represents the underlying complexity of the problem it's solving, and how much is just incidental complexity from the tool itself?
In this case, I think the core of Git (the underlying model of commits and refs and objects) does a really good job of being exactly what's necessary and no more, but the UI of Git (i.e. all the commands and surrounding paraphernalia) has a lot of incidental complexity. That is, there are simpler and cleaner ways to represent a lot of the stuff that's happening in Git. This, I think, is why a lot of people struggle with Git — not just because version control is hard (although it is), but because Git as a tool is additionally hard on top of that. And I think it's reasonable to look for better ways of doing version control that don't have as much incidental complexity.
Probably. And neither Python Packaging nor regular expressions will be replaced by anything in the next few decades, for the same reasons why `jj` is unlikely to replace git, why Rust won't replace C, and why, 20 years from now, people will still have jobs fixing Java and PHP code.
> But it seems like bad engineering and a lack of curiosity to look at the quirks in a tool and not ask if we can do better.
No one is demanding that we stop asking. But as engineers we cannot just consider "better", we also have to consider "practical". And in practical, pragmatic considerations, "technically better" simply isn't enough to displace an entrenched tool. That is a fact, whether people like that or not.
I like the ideas of jj. I cheer the development. Will I replace my git with jj in the foreseeable future? No. Why? Pragmatic reasons.
To replace an entrenched tool that does its job and has wide adoption, a new thing needs to offer absolutely astonishing advantages that become "must-have" just by the merit of their existence. And even then, the ROI of these advantages has to offset all the technical and logistic pain of switching from one to the other.
> but the UI of Git (i.e. all the commands and surrounding paraphernalia) has a lot of incidental complexity.
There is no shortage of wrappers around git abstracting the nitty gritty to nearly any level of complexity desired. I use vim-fugitive for example. GUI abstractions as well as CLI tools and TUI wrappers exist.
10% better isn't. 10x better is. Jujutsu is 10x better.
10x is an arithmetic operation. Therefore, to make such a claim, there has to be a quantity that operation can be applied to.
So, by what quantifiable metric is jj "10x better" than git?
For me, the important metric here is the intersection of two things: number of concepts, and power. I moved from svn to git relatively early in Git’s life; I started using GitHub in 2008. I’ve loved git for a long time.
jj has the ability to do everything git can, but has fewer primitives that are factored better. This means it is easier to use, while still retaining the ability to do anything.
I don’t know if that’s 10x, but it’s good enough for me that I’m never using git again.
And I'll happily repeat: By what metric is it "better"? So far, I have heard your opinion, not metrics.
Plenty of people use jj. It just doesn't seem like that to you because it's client-side.
It might behoove you to be a bit more curious about the deep intellectual lineage behind Jujutsu.
Jesus, what a hater. Maybe one day you'll become self-aware.
If you're learning French you don't argue with the language and try to get French people to talk how you want them to talk, you suck it up and do what they do. As with all things it's a trade off: you get to converse with everyone in France, but, yeah, you will have to deal with saying "le parking" etc.
jj is better.
If it's backed by git and doesn't break anyone else working on the repo, what's the problem?
For so many reasons, SVN to Git was a no-brainer but still constituted significant migration costs for existing teams and codebases: training, retooling, high-fidelity data transfer (you retained the commit history, right?).
For a tool with such wide adoption, motivating an ecosystem to move where there are many actors with different incentive structures, some of them will look and say ‘30% better isn’t even close to sufficient to spend time and $$$ on this. I would need ~10x better and/or a gain of altogether new capabilities impossible in previous tool to motivate that.’
It’s great that jj can be used independently without needing the whole ecosystem to move today, but once/if that part changes I’d expect it to be a long slog towards re-standardization if ever.
But jj likely already fully anticipated this given its compatibility with the Git wire protocol in the first place and I doubt it would ever give that up given the current landscape.
Care to point one concrete example?
What should that flow look like?
1. You send out a change for review.
2. You get review comments.
3. You fix the comments.
4. Repeat from #2 as many times as necessary.
5. You merge the change.
In git, how do you address review comments? You can either amend, in which case, at least on Github, you lose history while in review; or you add another commit, which essentially forces you to squash when you merge, which results in large commits.
How do you do stacked changes? How do you continue working on your feature while waiting for a change to be reviewed, and rebase the comments quickly?
With jj, it's `jj edit` and `jj next`.
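Roughly, that loop might look like this (the change ID here is made up, not from a real session):

jj edit wqnwkozp        # jump back to the change that got review comments
# ...fix the comments; descendant changes in the stack rebase automatically...
jj git push             # update the remote branch for that change in place
jj next --edit          # hop forward to the next change in the stack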
Please no. Just do a normal merge instead of destroying history. Having those as separate commits is extremely useful for those of us who actually have had to dig into the history to track down why something is the way it is.
What? Are you actually asking this question? I mean, what answer do you need other than a basic intro to PR tutorial from the likes of GitHub or GitLab?
But to answer your question, you continue working on your feature exactly like you've been working so far. You do not need to amend or squash. You can instead do the right thing and address PR feedback by pushing individual commits. How the PR branch is merged is a separate concern, and you are free to squash or rebase or whatever you'd like, because none of this is a Git issue.
Why do you think this is an issue with Git?
I can either stash my changes, which requires keeping a mental model of the stash queue (what if the review comments took a while and I started working on something else?), or make a "wip" commit, which you then have to remember, and you need to know how to reset and force-push if you accidentally push it.
With jj, you run `jj edit`.
You do:
git commit -m "wip"
Then, if you wish to undo the commit:
git reset HEAD~1
If instead you want to update your commit with additional changes:
git commit --amend
Seriously, why are you guys advocating for replacing a tool when the problems you're actually experiencing are due to your inability to learn the basics of it? I mean, is jj edit any better than git commit --amend? Does git commit --amend require more cognitive load to understand?
What you really want to do is
git commit -anm "wip"
The -a is important because maybe you haven't staged the changes yet. The -n is important because otherwise your pre-commit hooks will run, and potentially modify your commit.
Also, what happens if some of the files were partially staged?
No. I only need to run git commit. The rest is just silly attempts to nitpick your way to faking complexity where there is none. You don't need to pass --all because you should know and tell what changes go into the commit. You don't need or care to skip pre-commit hooks, because either you don't have any, or you have them and you want to run them. That's it.
Also, even your attempt to make a mountain out of a molehill ends up with an easier, smaller command line entry.
Your examples are laughable because most serious commands are way more complicated than that and again jj fixes that gracefully.
Again, please point out your best example that you believe proves your point.
All I see so far is shit-talking Git with abstract, unsubstantiated complaints, only to then hand-wave an alternative as the savior.
Point exactly where Git fails and what value is brought to the picture by any other alternative. Clearly Git's backend is not an issue, given the way jj reuses it. So what exactly are these problems that are worth solving?
https://ohshitgit.com/ has plenty of examples that you're free to disregard as shit-talking. Meanwhile, the rest of us can admit that git has some rough edges, like how git checkout does a couple of different things depending on the flags you pass it. That's not user friendly.
That's not a credible explanation. Amending a commit is somehow unintuitive and inconvenient but editing a commit is suddenly so much clearer and intuitive?
Primarily, jj is different. And a lot of the "problems" it solves barely ever come up in git. And when they do come up, it's usually as "difficult" as "google the solution and enter the commands". And worst case scenario, someone has to manually copy-paste some files around.
If this happened every day, I'd agree that we need a different system. But it doesn't.
In short, the situations where jj's advantages can really shine, and the advantages it brings in those situations, are simply not common enough or important enough to justify the headache of switching the world over to another VCS.
In an ideal world, "the better" solution wins.
In a pragmatic world of limited resources, "better" on its own is not enough. The differential between the new and the old solution has to be large enough to offset the cost of switching, and the more entrenched the existing solution is, the higher that cost will be.
> barely ever come up in git
Just because you don't run into that problem very often doesn't mean that other people, with different roles and workflows don't run into that problem more frequently.
For now. At some point, it may want to run features incompatible with git, and what happens then?
> And people that don't understand git but do understand JJ can just run JJ.
And how many such people would you say are there? I am willing to bet that the vast majority of jj users know how to use git very well, and given the adoption of git, that is unlikely to change.
Secondly, git isn't just run by people; it's run by scripts, pipelines, and automated systems. git commands live in configurations and elsewhere. It's a common language for how VCS works, not just a command line tool.
> Just because you don't run into that problem very often doesn't mean that other people, with different roles and workflows don't run into that problem more frequently.
It also doesn't mean that they do.
For now. At some point, it may want to run features incompatible with git, and what happens then?
Then we raise issue. But I don't have a problem with what someone else wants to run on their computer.
Neither do I. This discussion isn't about what someone else runs or doesn't run on their computers. By all means, run `jj`.
Or use `fossil`[1], which I maintain is technically superior to both `git` and `jj` (if you disagree, show me another VCS that also gives me a ticketing system, wiki, documentation system, forum and web UI, all from a single executable that lets me set everything up with a few command line invocations ;-))
XKCD 927.
jj’s killer feature is git compatibility, obviously. For now it’s table stakes so we can pretend only git exists.
What you need for your VCS to be popular is not a new VCS that does things a little differently and better than Git, but a good GitHub alternative. In the Git world we have Gitea, which is very easy to set up and whose experience is much better than GitHub's, and GitLab for bigger orgs. Plus good IDE support.
Sapling and JJ need something like that, built in or self-hostable.
JJ doesn't currently have a "native" repository format (or rather, it's in development and not ready for general usage), so right now a JJ forge wouldn't look too different from your usual Git forges.
Everybody who's using it right now uses it on Git repos on Git forges. I've been using it for months on my own and at work and I didn't make any changes to the infrastructure outside of my own command line usage.
It would be interesting to see what a JJ-native forge would even look like in the future, but I don't think anybody knows exactly what that would be right now! JJ is in development (but stable enough to work with) and people have only just started using it, so it's still in the phase of discovering what workflows are possible.
> Plus a good IDE support.
Now that's something I would love to see. Git tooling works mostly fine but due to how JJ's git backend works, it always thinks it's in a "detached HEAD" state.
As I understand it, there's also a group of Googlers using it internally at Google with an internal-to-Google, non-git backend.
I only use git directly when I have to now.
(As others have pointed out, jj uses git as a storage/network layer, so you don't need a new forge to start using it.)
This is using JJ's "Git" backend, which internally stores data as Git refs, and communicates with other servers the same way that Git does. There is a work-in-progress native backend, but this is still being developed and you need to explicitly opt into it if you want to try it out.
Something I always regretted doing with my many-topic-branches-out-for-review approach was making changes that depend on two topic branches. With jj, it's easy, you just "jj new qrs xyz" and now you have a new change on top of those two topic branches. If you add something in this commit that you want in one of your topic branches, "jj squash --onto qrs some/change/to/qrs.go" and now that change is in there.
It works well with Github and a PR per change. You create a PR with "jj bookmark create whatever" and then "jj git push --allow-new" to make "whatever" a remote branch. As you update that commit, "jj git push" adjusts the remote state as necessary. "jj git fetch" automatically adds bookmarks for upstream branches (so you can "jj new coworker/their-change" to test and modify their thing locally).
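Spelled out as commands, that PR loop is roughly (bookmark name hypothetical):

jj bookmark create whatever    # name the change you want to send out
jj git push --allow-new        # create the remote branch and open a PR from it
# ...address review comments on that same change...
jj git push                    # update the remote branch in place
jj git fetch                   # pull coworkers' branches in as bookmarks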
Finally, it works well when you have persistent changes that you don't want to commit but you do want to keep around. For some reason, we check in dev configs, and each engineer needs to modify them to run stuff locally. I have a commit called 'private: local setup' and that forms the base for each new (local) commit I work on. To do a PR, I squash my changes into a commit based on main instead of that commit. jj will avoid pushing anything with the prefix 'private: ' so I don't have to worry about leaking my keys, even though they are checked in locally. I wouldn't recommend managing secrets like this, but ... I didn't make this decision and jj handles it fine.
Overall, a huge improvement to my workflow. There have been times when I have so much stuff in flight that I just give up for the day until a few PRs get resolved. That never happens anymore. I can kind of work on whatever I want and let the world move at its own speed. Updating my work to account for PR comments or PRs merged before mine now takes seconds and never goes wrong. Truly an amazing tool and I wish I discovered this years ago.
Say I'm messing around with the commit that introduced a bug, somewhere deep in the history. With git, it's basically impossible to mess up the repo state. Even if I commit, or commit --amend, my downstream refs still point to the old history. This kind of sucks for making stacked PRs (hello git rebase -i --autosquash --update-refs) but gives me a lot of confidence to mess around in a repo.
With jj, it seems like all I would have to do is forget to "jj new" before some mass find+replace, and now my repo is unfixable. How does jj deal with this scenario?
Internally, JJ is still backed by an append-only tree of commits. You don't normally get to see these commits directly, but they're there. A change (i.e. the thing you see in `jj log`) is always backed by one or more commits. You can see the latest commit in the log directly (the ID on the left is the change ID, the ID on the right is the commit ID), but you can also look back in history for a single change using `jj evolog`, and you can see all commits using `jj op log`.
This ensures that even if you were to exclusively use `jj edit`, never made a new commit, and kept all your work and history in a single change, you could still track the history of your project (or the "evolution", hence the name evolog). It would be kind of impractical, but it would work.
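As a rough sketch of what that inspection looks like (the IDs are placeholders):

jj evolog -r xyzabc     # every snapshot a single change has gone through
jj op log               # every operation ever run against the repo
jj op restore <op-id>   # roll the whole repo back to the state after that operation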
The only caveat here is that, by default, JJ only creates new snapshots/commits from the working directory whenever you run the CLI. So if you made a large change, didn't run any JJ command at all, then made a second large change, JJ would by default see that as a single change to the working directory. To catch these issues, you can use a file watcher to automatically run JJ whenever any file changes, which typically means that JJ makes much more frequent snapshots and therefore you're less likely to lose work (at the cost of having a file watcher running, and also potentially bloating your repository with lots of tiny snapshots).
Note also that the above is all local. When using the git backend, Jujutsu will only sync one commit for each change when pushing to a remote repository, so the people you're collaborating with will not see all these minor edits and changes, they'll only see the finished, edited history that you've built. But locally, all of those intermediate snapshots exist, which is why Jujutsu should never lose your data.
For example? Or should I create my own in C using inotify/kqueue? Is there a library for jj?
The default behaviour (i.e. `core.fsmonitor = "watchman"`) is to only use the file watcher as an optimisation — rather than scanning the entire folder every time JJ wants to make a snapshot, the watcher keeps a list of which files have changed, and then when creating a snapshot, JJ only needs to check those files.
However, you can also add `core.watchman.register_snapshot_trigger = true` to the configuration, and this will make it so that every time the watcher sees that a file has changed, it automatically makes the new snapshot.
That said, neither of these are active by default, and neither are necessary by default. But if you're the sort of person who uses VS Code's "Timeline" view to see exactly how each file you've worked with has changed over time, then you might also appreciate the automatic snapshotting feature.
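For reference, the two settings mentioned above live in jj's TOML config and look roughly like this:

[core]
fsmonitor = "watchman"              # use watchman only to speed up snapshotting

[core.watchman]
register_snapshot_trigger = true    # additionally snapshot whenever a file changes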
Off of the top of my head, `jj new` and occasionally `jj rebase` can create merge commits; I don't recall any others.
You really can undo every action in Jujutsu (and if you can't, that's a bug), but the `undo` mechanism can be a bit surprising - it doesn't behave like a stack, where undoing multiple times will take you further back in history. Instead, undo always undoes the most recent action, and if the most recent action is an undo, then it undoes the undo. This often catches people off-guard - in future versions, JJ will show a warning when this happens, and in even further future versions, there's a plan to make undo behave more like expected.
But if you use `jj op log` and `jj op restore`, you can always get back to any previous state, including undoing merges and other complicated changes.
For the record, I've seen 'cannot undo a merge' after jj undo 3 or 4 times, I can't remember now. I was trying to squash a change into a single commit for a GitHub PR for the first time and couldn't figure out how to map jj commits into something acceptable, then decided to undo the whole thing and actually managed to overwrite some of my changes in a way I couldn't recover them; fortunately it was only a few lines of boilerplate.
I must be a wizard because I’ve lost count of the number of times I’ve messed up my repo’s state.
I jest. Kinda. I know that git’s state might technically not be messed up and someone skilled in the ways of git could get me out of my predicament. But the fact that I’m able to easily dig myself into a hole that I don’t know how to get out of is one of git’s biggest issues, in my opinion.
That hole is very easily dug with git:
- use ctrl-x / ctrl-v to move files around and commit: boom, you've lost history for those files (file tracking only works in theory, not in real life); let's say you don't notice (very easy not to notice)
- commit some more, merge
- discover your mistake tons of commits back
- good luck fixing that without digging a bigger hole.
And that's one of 100's of examples in which git just is really really not fun or user friendly.
This is one of the things git excels at: You didn't lose your history because that's not how git handles it. Git might be the only version control that can actually handle your case (renaming files without using a special command) - it looks for file similarity between a deleted file and an added file in the same commit, with various flags to make this look in broader places.
"git mv file1 file2" is almost identical to "mv file1 file2" + "git add file1 file2" (it also handles unstaged changes instead staging them)
I'm probably doing something dumb/wrong, some setting somewhere, wrong OS, whatever it is: it proves my point that git is harder than it should be.
- first, that commit that’s been merged to main is marked as immutable and, unless you add a flag to say “I know this is immutable and I want to mutate it anyway”, you can’t mutate it
- second, as part of your regular workflow, you haven’t actually checked out that historical commit. You created a new, empty commit when you “checked it out” using “jj new old_commit”
- third, you can use jj undo. Or, you can use "jj evolog" to see how a change has evolved over time (read: undo your mass find+replace by reverting to a previous state)
There are several things Git can't undo, such as deleting a ref (in particular, for commits not observed by `HEAD`), or determining a global ordering between events from different reflogs: https://github.com/arxanas/git-branchless/wiki/Architecture#...
In contrast, jj snapshots the entire repo after each operation (including the assignment of refs to commits), so the above issues are naturally handled as part of the design. You can check the historical repo states with the operation log: https://jj-vcs.github.io/jj/latest/operation-log/ (That being said, there may be bugs in jj itself.)
I'm always keen to explore new things but I don't have many complaints about git. I'm wondering what this solves that made it attractive for you.
These are less of an issue once you've molded yourself to fit into git's strange ways, but jj feels like a much nicer tool -- especially for beginners, but feels like it frees up cognitive space even for more experienced folks. You can focus less on the tool and focus more on what you actually want to do.
[1]: I've tried using multiple working trees, but that workflow never really "stuck" with me.
jj solved the biggest problem for me, which is how much time you spend rebasing when you have 1 PR = 1 stack of commits on top of main. It's easy enough to work on multiple branches this way, but it's a lot of repeated pain when `main` diverges and your changes on top are still out for review. (I honestly just started squashing all of my commits before review, so I would only have to resolve conflicts once.) jj fixes all of this. I especially enjoy working on a 3rd pending change that refers to the previous 2 pending changes; `jj new june/feature-1 june/feature-2` and then you add feature 3 there. You can even `jj squash --into june/feature-1` if something makes more sense being in a prior commit. It's all very wonderful if you are working with other people and you can't immediately mutate `main` upon finishing some work.
A lot of other tools I've found were lacking for one reason or another, and might've not been git compatible. With jj you can hop between git and jj commands as you please; it's essentially full compatibility with git.
I'd like to introduce you to a couple of my former colleagues...
I exclusively move in a jj repo with `jj new` and `jj squash` or `jj squash --to <rev>` as appropriate. I've been using it 8+ hours daily for months and have never, ever even thought of having this issue.
And now people are saying it for jj.
Endless cycle it seems, with every new tool.
Thus, if you're worried about mutating existing commits, don't use it.
What exactly is so hard to understand here? You're not making the gotcha point you seem to think you are - it's not like it's some common command that is hyper-overloaded and has to be used specially.
Just another example of the usual HN skepticism that isn't even skepticism, it's just smug ignorance. It's so exhausting. But sure, the countless people that keep claiming it's the single biggest tool improvement in some time are just idiots? suckers? hype-beasts? making it up? or what?
Like, the irony of you assuming that it must be as convoluted and hard to use as git is just... awesome. I love the Git defenders that literally can't fathom that there is actually a better mental model or simpler tool, and can't even be arsed to try it and see.
For example, I've configured it for me to make any commit by anyone else mutable regardless of branch:
[revset-aliases]
# To always consider changes by others immutable:
"immutable_heads()" = "builtin_immutable_heads() | (trunk().. & ~mine())"
You can do most of the same things, but it's easier to understand it intuitively so you don't need to google or give up on the more advanced usages. And the things you do everyday you can do with fewer steps.
Couple that with colocated repos, meaning I can use JJ while everyone keeps on using Git, there wasn't really anything holding me back.
Care to point out any concrete example?
----
Anything to do with IDs is simpler, because a jj change ID is fixed at creation, even as the underlying current commit ID changes. This makes IDs useful even in a rebase-heavy environment.
----
Likewise, jj replaces the need for staging and stashes as completely separate concepts with their attendant extra commands. Anything you used them for is replaced with uniform commit/change commands: split/rebase/squash/etc.
Staging is just the current change; when you like the code you have, you make a new change, and the old one is just the parent commit. If you want to freeze part of it, you `jj split -i` the commit into a parent/child, and move the parts you're still working on to the child.
Stashes are WIP commits you leave lying around. If you want to move on to work on something later, you just `jj new some-other-commit`, and you're now on a blank commit whose parent is `some-other-commit`. When you want to resume work on that WIP, you `jj edit some-wip-commit` and/or `jj rebase` it first.
It's truly much simpler to have a bunch of pervasive change/commit operations, instead of superfluous ad hoc concepts with their own mini-commands and behaviors.
Are you sure about that? Git reset doesn't need a chapter to explain.
Also, you should really pay attention to what you're writing. Your only tangible case in favor of JJ is user experience, and you struggle to even explain what jj supposedly does, let alone how it's a preferable alternative. Even more baffling, this completely ignores the fact that the bulk of Git users use it through a GUI frontend, which completely negates any hypothetical selling point.
So, why bother with a solution for a problem you can't even clearly formulate?
But it gets one — as they pointed out, Chapter 2.4 of the Git book (https://git-scm.com/book/id/v2/Git-Basics-Undoing-Things) is about undoing things. And to be clear, that's not just `git reset` — depending on what exactly you need to undo, you should use different tools, including amending existing commits and using `git checkout`. All of which will have different effects, and some of which can be dangerous if you misunderstand them (such as `git checkout` which can destroy data if you've not properly committed it yet).
Meanwhile, whatever the last thing you did with `jj` was, `jj undo` will undo it. On top of that `jj op log` will provide you a list of all the things you've done, and `jj op restore` can reset the entire repository to a previous state.
This isn't about UX: Git doesn't have the capability to do what JJ is doing here, because it doesn't track the evolution of the repository as a whole in the way that JJ does.
I think what comes up most often for me is when I want to fix something on a different branch than the one I'm at now, or when I want to fix something that feels like it should be separate commit.
In git you can `git stash`, commit the unrelated change and then run `git stash pop` to get your in-progress changes back.
In jj your work is already added to the current commit whenever you run a jj command, so you can simply switch to the other branch and do the work, and when you switch back your work is as you left it. It saves you the `git stash` commands. I also like that anonymous branches are completely normal in jj, so I don't need to create branches just to find my way back to a change.
Similarly, jj removes most usages of `git rebase --interactive` by allowing you to checkout a previous commit, and as you're making changes to it all descendant commits are being rebased automatically. This makes it easy to insert changes between two commits, simply create a new commit where you'd like it to go and then descendant commits will automatically be rebased on this new commit as you make changes.
To me this is more intuitive, and it removes `stash` and `rebase --interactive` as concepts.
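A concrete side-by-side of that detour (branch and change names are hypothetical):

# git
git stash
git switch other-branch
# ...make the fix, commit, push...
git switch -
git stash pop

# jj
jj new other-branch     # the WIP is already snapshotted in its own change
# ...make the fix...
jj edit <wip-change>    # pick the WIP back up exactly where it was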
Not necessarily. You can just commit your changes, switch to whatever branch you want to work on, and then when you switch back to the old branch you can resume working as always.
This is not a Git thing. Even Mercurial advocated for this approach, and sold it as the only and true way of stashing changes. Mercurial only received support for a stash-like feature much later and in the form of an extension.
Also, even though git stash can be used in workflows that involve switching branches, it is suitable primarily for very short-lived tasks that prevent you from committing changes to a branch. Things like pulling changes from a remote branch or switching branches when you have random local changes in place (i.e., changes to config files for troubleshooting purposes).
Saves me from doing that extra commit, then.
> [...] it is suitable primarily for very short-lived tasks that prevent you from committing changes to a branch.
That's also a thing I like about jj. Nothing prevents me from committing changes. Even merge conflicts are committed, so I know I can always come back to it later.
I find I think less about when to commit my work in jj. In Git I find that I like to commit to avoid losing work, in addition to committing when I feel "done" with part of a change.
It's a small thing, really, but it feels quite big (for me).
The problem is that that workflow is still ultimately harmful nonsense[0] that is based on a misunderstanding of version control.
[0]: https://fossil-scm.org/home/doc/trunk/www/rebaseharm.md
- in section 2.2, they think that in a feature branch only the latest commit counts. You should not just be interested in how the tip of your feature branch compares to the latest main, but also in the logical sequence of steps by which your feature branch builds up a feature. In a rebase model, every step of your feature can be compared to main.
- in section 2.2, they conveniently omit what happens if you keep merging main into your feature branch. Your feature branch will show all the useless commits telling you that "at this time, my feature diffed such and such from how main looked at that time". "Oh, and here again a useless diff." "Oh and here again." "By the way, have a nice ball of diffs again." I don't fucking care. Why don't you just commit some info about the local weather at that time too, or that your back was hurting on Wednesday the 23rd?
- "Rebase encourages siloed development". That claim is just ridiculous. If you don't want people to take ownership of a feature, by all means, forego branches. If someone wants to work on my feature, then we just agree that the public feat branch is to be considered public history. When the work finishes, someone can take ownership, (possibly clean up the mess with rebasing) and get it into mainline by rebasing on top of main.
Absolutely. And we should work to preserve that history.
> In a rebase model, every step of your feature can be compared to main.
No it can't (unless you just rebased again before running the diff). What it can be compared to is the closest common ancestor.
git diff main...<commit>
Which is exactly the same operation that you'd run in a merge workflow.
> I don't fucking care. Why don't you just commit some info about the local weather at that time too, or that your back was hurting on Wednesday the 23rd?
But those events do go wrong, and having the context to understand what happened matters. Have you never had a mismerge or a merge conflict?
Interestingly, despite being designed for a very rebase-heavy workflow, this is one of the things that Jujutsu does really well. You work with mutable changes rather than immutable commits, and each change references one or more commits that represent (roughly) the history of that change. So locally, you can see how a particular change has evolved, and you can easily undo some action or reference an older state while working on that change itself. Then when you push to a remote, the other people will only see the end-result of the change.
Which is typically what I want: I want to retain a good sense of the history of the changes I'm working on at the moment, so I can go back and forth, try out different things, and not worry about losing any data or handling conflicts badly or whatever. But, when I've finished with my change, and it's been pushed, reviewed, etc, that history is now superfluous. I don't need to know that, while I was developing a feature, I accidentally committed a typo and then needed to make another commit to remove that typo. (And if there is information I do think is important, then I can create additional changes, or add the information to the commit message or the code itself as a comment.)
Yes, jujutsu is designed to hide (some of) the problems with rebase from yourself, while still inflicting them on everyone else. History for me, but not for thee!
While also complicating the interaction model because you now have two sorts of history to contend with.
Which actually sucks, because jj could have been a much better alternative to Quilt for maintaining patch series over third-party software (à la Linux distributions) if it supported collaboration over the metahistory/oplog.
> But, when I've finished with my change, and it's been pushed, reviewed, etc, that history is now superfluous. I don't need to know that, while I was developing a feature, I accidentally committed a typo and then needed to make another commit to remove that typo.
This assumes that you'll actually find all the issues during the review. Do you also erase your history after every release?
Let's say I make a change in three steps: I rename all uses of X in the codebase, I remove the Y component, and I fix bug Z. To me, this should be three commits: X, Y, and Z, one for each of the changes.
But to get to the final point, I probably haven't done that in only three steps. What I've probably done is I've tried out change W, but that didn't help at all so I abandoned it, then I started doing Y, realised that I needed to do X first for Y to make sense, so I switched and did X, then I finished Y, then I could do Z. Then I realised that while doing X I'd forgotten some stuff so I finished that off, then I realised that the tests weren't passing because part of Y wasn't finished. Then I pushed everything and the reviewer pointed out that while fixing Z, I'd left some code commented out and I needed to remove it properly.
Long-term, which of these sets of changes would be most useful? Surely the first one: it shows my aim while making changes, it is clearly bisectable, and it is divided into clean units of work without overlap or having those units intermingled. As a future developer, I can run `git blame` or equivalent and clearly see why each line of code was changed, and for what reason.
Whereas with the latter case, bisecting the codebase will stumble on all the intermediate situations where a test failed or the code didn't compile or whatever, because bad states during the development process are now part of the permanent history. And it'll produce spurious results when I try to annotate a file: the last time this file was changed was because I temporarily commented out this line and then immediately commented it back in again; how do I see only the changes that made a meaningful impact on this line?
In my experience, recording these sorts of "fixup" changes permanently in the history has never been useful. Having clear, precise commits that each do one thing: definitely, yes, this is a fantastic tool. But being able to go back in time with such granularity as to see when you broke a test and then fixed it in review? Or when you started out doing one change and then did something else instead in a different order? I have never found that level of detail useful, and it often makes things harder when I'm trying to focus on the significant commits and changes and just want to filter all of those out.
The problem with that article is that it's utter nonsense and completely misses the whole point of rebase.
No one cares if under the hood a rebase is technically a merge that forgets where it came from. I mean, isn't that the whole point of a rebase?
The whole point of a rebase is to move your local branch onto the tip of another branch while preserving the local history. The goal is to simplify the repository history by eliminating irrelevant noise. That's not a problem, it's a solution. Is this too hard to understand? How can they possibly miss the extra forks and joins in their commit graph? Do they not see that? Do they spend any time looking at commit graphs?
It's ok if Fossil's devs want to force their opinionated take on VCS onto Fossil's users, but this does not mean they are right or have a point. In this case they are not only missing the whole point but also taking a completely wrong approach to mitigate it. The fact that they felt compelled to put up a page trying to present their case (and failing) is already telling.
If there is criticism to throw at Git, rebases clearly ain't it.
Hurting people is the point of a gun, that doesn't mean guns are suddenly excused from doing so.
> The whole point of a rebase is to move your local branch onto the tip of another branch while preserving the local history.
But you aren't; history is as much about where you came from as where you are now.
> The goal is to simplify the repository history by eliminating irrelevant noise.
Because merges aren't noise, they are fallible events just like manual changes. They also help to group changes together and provide helpful context.
> That's not a problem, it's a solution. How can they possibly miss the extra forks and joins in their commit graph? Do they not see that? Do they spend any time looking at commit graphs?
If you don't want to see the merges, hide them. `git log --first-parent` shows you the commit history as if all merges were squashes instead.
Meanwhile, the rest of us can suddenly appreciate having the extra context available when it's actually needed.
> It's ok if Fossil's devs want to force their opinionated take on VCS onto Fossil's users, but this does not mean they are right or have a point.
Deleting data is opinionated, preserving it leaves it up to the reader to judge.
But my point isn't that you should use Fossil, but that Git is a far better VCS if you use it as if it was Fossil.
> If there is criticism to throw at Git, rebases clearly ain't it.
In my, uh, decade or so of using Git, every single avoidable problem that I have seen people have with Git turns out to originally come from rebasing.
That means that both you and those people do not fully grasp the concept. The problem I have is that those misunderstandings are so widespread that I have to deal with them. Especially those that do not understand Git at all keep merging balls of merges of merges. Then they lose track, and decide to just merge main again because something broke and they hope that Git will solve it with just one more merge.
I have no problem with incompetence, I myself am incompetent in a lot of areas, and I know that. When I have to work with people, I have a problem with those that are not able or willing to acknowledge they lack some overview.
I would go a step further and say the root cause is not users who do not fully grasp a concept of Git, but users who fail to grasp one of the most fundamental aspects of version control systems: branching and merging.
Git just happens to take a bad rap because it's ubiquitous and for some it's the only VCS they ever used.
Of course a bad workman blames his tools. If the tool they are using is Git then of course that's the one being thrown under the bus.
What is perplexing is how this problem is then approached by people who are expected to know better by framing it as something only a bad tool would do, and thus they roll out yet another tool that does the same thing and poses the same problems.
In the case of jj, they even claim to fix git by running git as the backend. Think about that for a second.
That mismerge would have been just as bad if they had rebased instead, except you wouldn't have had the context to go back and fix it. How would that be better?
But you shouldn't merge into your feature branch at all. We are interested in how your commits change what comes before them in mainline. We are not interested in how main differed from your feature branch at various points in time.
Git rebase is wonderful for people that create a mess when working. Your feature branch can contain lots of WIPs for all the times you hit five o'clock. With interactive rebase you can clean up your mess: squash, split, delete and reorder commits. You can do this all graphically even, like drag-and-drop reordering of commits. Also read up on git commit --fixup and friends for quick fixes. There is also git rebase --abort if you think you are doing something wrong.
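For anyone unfamiliar, the fixup flow being referenced is roughly (the hash is hypothetical):

git commit --fixup=abc123         # mark a quick fix as belonging to commit abc123
git rebase -i --autosquash main   # fold the fixups into their targets during cleanup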
No. That's clearly something you are failing to understand. Commit history is not chronological. Commit history is ordinal. I make my commits, and I say where they should go and in which order they should go. Try to understand that.
> Because merges aren't noise, they are fallible events just like manual changes.
You're not even paying attention to what you are saying. If you interpret merges as "fallible events", then what's the point of pushing a "fallible event" to a shared repository when they are irrelevant to describing your changeset? The only purpose they have is to make sure your feature branch can be merged back into mainline with the latest changes in mainline. Why introduce a "fallible event" for that?
Isn't it far simpler if you just reapply your commits on top of the branch, and leave those "fallible events" out of the commit history?
Moreover, look at the silliness of it all. What prevents anyone from manually replaying a commit history on top of another branch? Should that be prevented too? Should support for squashed merges be removed too because they lose a reference to the origin commit?
What's exactly the point you're trying to argue?
> Deleting data is opinionated, preserving it leaves it up to the reader to judge.
Try to understand this: you read what I write. No more, no less.
> In my, uh, decade or so of using Git, every single avoidable problem that I have seen people have with Git turns out to originally come from rebasing.
So what? Are you also going to argue to ban furniture because unsupervised toddlers can bang their heads on it?
Do you know who doesn't waste time posting remarks on rebase? People who use it all the time and don't have a problem with it.
Otherwise I also find it not really an improvement, and a downgrade in terms of other tooling (editor integration, etc.)
I.e. the typical Gerrit workflow
This made me chuckle. In all seriousness, does jj afford anything new by connecting to filesystems more directly?
The main issue with git I've heard of is for projects, like video games for instance, that bundle text based code with a huge amount of non-text assets (audio, images etc).
I don't understand what causes git issues here: is it storage? Or is it more about attempting to map differences between non-text assets? Curious about whether jj's filesystem usage helps here.
Jj interacts with files in a way that ensures this can’t happen.
Jj is the same as git with regard to large non-text assets. But that's just because it's still young; it will gain good support eventually.
(And yeah it’s about storage and also about diffs. How would you fix a merge conflict in a PNG?)
Large file handling needs to be sane in any new VCS, IMHO, as this is a main failing of git (..without the extra legwork of git-lfs).
Edit: https://github.com/jj-vcs/jj/issues/80 could maybe bring jj up to parity with git here.
But also, if it is such a showstopper, you can disable auto-tracking of _new_ files with the snapshot.auto-track = "none()" config (or even something funky like a src/* pattern), and recently jj status finally started showing untracked files too; it's very usable.
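That setting goes in jj's TOML config, roughly like so (the glob is only an illustration):

[snapshot]
auto-track = "none()"   # never auto-track new files; or a fileset such as "glob:'src/**'"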
What I'm really getting at is: I want a system that records all changes I make and then lets me very selectively choose which of those are pushed to others. Git is definitely the least bad solution now, but it's not as seamless as it could be.
Give it a try, but also know that git branches and _topological_ branches are different things
I still stand by my testimonial! My greatest achievement has been convincing Steve Klabnik to try out Jujutsu.
I also saw a comment from JJ's lead author martinvonz, where he pointed out that adding new functionality is much simpler in jj than it is in older systems like Git and Sapling/Mercurial. Having each spent many years working on source control, both Martin and I came to a general belief that a lot of implementation complexity comes from the modal states created by merge conflicts. Because Jujutsu's core UX is more straightforward, this is less of an issue, and Jujutsu's devs can prototype changes quicker.
I came across jj from a recent Bluesky post by Steve Klabnik who was talking about having to learn something with git. That seemed very odd to me. I then gathered that he (and many others in the comments) had been using jj exclusively for some time.
I haven't had time to give it a try, but I definitely will. Your achievement has ripple effects.
Can anyone explain why people want something like JJ? Is it simplicity? I actually quite like the staging and index in git. Although it took me a while to grok.
Looking at the other thread though I don't think it's more simple. It's just different. https://news.ycombinator.com/item?id=43020180
Significantly nicer UI, very simple but powerfully expressive DSLs for formatting logs and selecting revisions to log, respectively.
For me it clicked when I really grasped how the latter DSL (called the revset language) wasn't just for selectively logging, it was for selecting commits/changes to act upon. The revset grammar is about specifying subgraphs of the graph of changes. Using it, and taking advantage of jj's by-design automatic rebasing of downstream commits, you'll eventually think nothing of rebasing N branches in a single command.
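A couple of small revset examples to give the flavor (illustrative, not from the thread):

jj log -r 'mine() & ~empty()'   # all of my changes that actually touch files
jj log -r 'trunk()..@'          # everything between trunk and the working copy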
Even conflicts are less trouble when they crop up during auto-rebasing. jj leaves conflicts on the change where they originated, and marks that commit (and all downstream ones) as conflicted. Fixing conflicts is no longer a "drop everything and fix" matter; you can decide when to fix them without delaying work on other parts of the tree.
Haven't even gotten into the optional auto committing on every file change (it's better than it sounds). You might miss the staging area for a bit, and then you realize that there isn't really a difference between the staging area and a "current commit" that stays up to date on its own.
I guess what I'm saying is I would recommend it.
"Oh I forgot to make a change in this commit and I'm 4 commits in..." -> "I can go change that commit and everything else magically rebases on it".
It treats changes as this conceptual thing, and the whole immutable commit thing as a backing store but not how _you_ are looking at a problem. Or at least not how I look at it.
I added a test, put in a new feature, updated the readme. Those changes are not about all the metadata around the change, they're the change. Of course you still have the immutable commit graph when you want to _really specifically_ talk about a certain commit, but for most intents and purposes it's the changes that matter.
jj gives you similar workflows to git in the easy case. In the hard cases, jj makes it easier to do the work than git. So it's nicer!
I hit this today in a fun way. I was working along as one does, and needed to write a parser for a simple s-expression based language. So I did that fairly quickly, added a few tests which appeared to pass, and kept on working.
A few commits later I reached a point where I needed to manually test my code. And promptly hit a stack overflow in the parser. After a few minutes of debugging I understood the issue (I'd used the wrong function and failed to require brackets around a nested s-expression) - except I could have sworn my tests should catch it. So I ran my tests manually, and lo and behold they did catch it.
Turns out that there was a bug in the tool I was using to run my tests [1] that was masking the test failure. I had completely broken my tests for the last few commits (all of them since I wrote the parser).
Anyways, I went and fixed the tool, but by now on my main repo I had fixes for the last few commits all mixed together with a fairly large WIP commit. Thankfully, instead of making this the frustrating game of rebasing with git, jj makes it just take a few jj split (select the changes you want to split off, sort of like git add -p) and jj squash (merge changes into an existing commit) commands.
Could I have fixed up the previous commits with git? Of course. But it would have made a frustrating hour even more frustrating.
There's other functionality in there too, including just straight editing old commits without having to deal with the child commits yourself.
It's `git rebase -i`, and edit a file to tell it what to do, and work in a limited rebase-state of git. Versus `jj squash --into <commit>`, and if it doesn't merge nicely or it causes conflicts, you stay in the normal state with the normal tools. It's a difference in quality of life, of reducing friction, not the power of the tools.
Even `git add -p`'s interface is also just less nice than `jj split`'s interface. Where `git add -p` feeds you changes one at a time and makes you select them, `jj split` gives you an interactive selection GUI that lets you scroll through changes, collapse and uncollapse files, and select and unselect whole files, single diff sections, or even individual lines. It's nicer.
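Concretely, the fix-up loop described above is just something like (change names hypothetical):

jj split -i                                # carve the unrelated fixes out of the big WIP change
jj squash --from <fix> --into <earlier>    # fold each fix into the change it belongs to
# descendants rebase automatically; any conflicts are simply recorded on the affected changes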
> including just straight editing old commits without having to deal with the child commits yourself.
This is sort of an example. Git needs a special mode, invoked via this file-based interface in `rebase -i` (though I imagine there's a more direct command that no one knows as well). While in that mode you can't do things like just switch to another branch and work on something else. It's "strange".
In jj this is just `jj edit <commit>`. It's within the set of operations that are "normal" in jj's model. You can do everything you can normally do.
If something like a merge conflict happens, git says "and now you need to deal with this right away". jj says "these commits have merge conflicts in them, fix them if/when you like".
Every small piece of my interaction feels slightly better than the same small piece of the interaction with git, and it's remarkable because it really does feel like it's every piece of the interaction with jj that is better.
I don't feel like I'm being very eloquent - but hopefully the difference I'm trying to describe is coming across at least to some degree.
Same with the working copy.
There are a couple of different ways that people use JJ, but I normally use the "squash" approach. When I want to make a new branch/PR/etc, I run `jj new master` (to create a new change off the master branch), and then I run `jj new` again. This gives me two empty changes - the older change is where I'm going to store my work when I'm finished, and the newer change is where I'm going to actually be working. This functions like staging. As I edit files, the staging change fills up. When I think I'm finished, I can run `jj diff` to see exactly what's in my staging change. If I want to keep all of it, I can run `jj squash` to squash the entire change into the parent change, or I can run `jj squash -i` to interactively squash the bits that I want to keep, and keep the rest in staging.
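Spelled out as commands, that setup is just:

jj new master   # an empty change that will become the finished, pushable commit
jj new          # a second empty change on top; this acts as the staging area
# ...edit files; the working-copy change picks them up automatically...
jj diff         # review what is currently "staged"
jj squash       # move all of it into the parent, or `jj squash -i` for only part of it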
You might argue that we've just reinvented the wheel - we used to have a pseudo-commit as our staging/index, and now we have a JJ change, but we're still doing the same thing with it. This is true, but the value of having our staging area be a change, i.e. a first class entity in our VCS, is that we don't need to special case it any more. Every command that works for our staging area works for any other change in our history. This is simpler because there's less stuff going on (fewer commands, fewer concepts), but we still have the same capabilities as before. On top of that, because our staging area is just another change, it is automatically stored in the local repository history. This means that if I've got stuff staged and I want to create a new branch somewhere else, I can do `jj new master`, and the stuff I've got staged will stay where it is. This also removes the need for an explicit stash: if all the work I'm doing is already part of a change, I don't need to create temporary "stash" commits to store them, they're already stored. Again, fewer commands, fewer concepts.
Just by itself, I find this feature really helpful because it simplifies how I need to think about changes a lot. It's so much easier to navigate around a repository like this. But it's just one of the features that has been simplified from existing VCSs. For example, you've also got automatic rebases - you can make a change to an older change (either directly or using something like `jj squash`), and all of its descendant changes will automatically get rebased. Sometimes this causes conflicts, but it never fails. Instead, conflict tracking is part of the history itself. This allows you to fix conflicts in your own time - you're not put into the "rebase world" like in Git, where Git needs to statefully remember that it's currently doing a rebase, and you need to use different commands to resolve a rebase conflict than you do to resolve a merge conflict, say. No, conflicts are first-class, which again massively simplifies how you interact with the repository - fewer commands, fewer concepts.
This is what I think people mean by simple - it's not simple in the sense of "make everything easy by wrapping it in an abstraction", instead it's simple in the sense of "find the ideal underlying model to expose to the user". Version control is always going to be fairly complex, but JJ feels like something that is very close to the minimum complexity required for a VCS: no simpler (because then it wouldn't work very well), but also not very much more complex.
* "Change" in this context refers to JJ's equivalent of commits, i.e. the individual rows you see when your run `jj log`.
As a minor update - sapling now also supports the .git on-disk formats so that you can use git and sl interchangeably in the same repo
jj always automatically turns your current state (even if it's empty) into a "change", which is like a commit (it has a hash). in practice, this was actually incredibly annoying. when working in git, you often think about the current commit you're on, and amending/making a new commit. with jj, you're actually making "changes" (which are like commits) constantly. it's also infuriating that there are two sets of hashes, jj changes and git commits, and they are not the same/interchangeable.
on the flip side, sapling has been a breeze. it does force you to one commit per PR (which might be annoying to some, but if you're going to squash the commits into main when you merge, why not do it locally too). its interop with github is really nice: you can do `sl goto pr1234` and it will just bring you to the code for PR #1234 (and fetch it in the process if you don't have it locally).
I don't know what this means, and it seems like a fairly large misunderstanding.
- I am absolutely loving the improved UI/UX for common operations - being able to do the same actions with fewer commands and fewer concepts to understand, and much more helpful error messages when I try to do something invalid
- The existence of some unique features (`split`, `absorb`, and `restack` being particular favourites -- IIRC people have created third-party scripts to replicate these commands for git, but last I checked they weren't as good, and they aren't installed by default)
- Having the commit log integrated with github (being able to see which of my branches match to which PRs, and whether the PR is unreviewed / accepted / rejected / merged)
As a 1+ year jj user now, I can't think of a single time I've needed to use a git hash. They're there under the hood, but you're always just using revision IDs.
I've been considering trying out jj and thought it could work since it uses git as the store, but if I need to keep track of different hashes depending on whether I access version control from my IDE (primarily IntelliJ for now) or from my command line, that seems like an actual problem to me.
(I'm also used to working in the command line and I prefer it to point-and-click for many things, but version control is one of the things that I much prefer to approach from the same place I write my code.)
There's also https://www.visualjj.com/ if you want to make commits in your editor, though personally (both with git and jj) I've always preferred doing that from the terminal.
Is that only because GoT predates JJ by two years?
I cannot speak for GoT, but I do see several obvious issues. One, jj is Apache 2.0, which is explicitly considered undesirable by the OpenBSD developers as it imposes more restrictions than their license of choice [1]. Two, it is written in Rust, and not only do I doubt that it can be ported to the platforms OpenBSD wants to support [2], it may very well be impossible to build on said platforms, which to the best of my knowledge is a strict requirement for inclusion.
[1]: https://www.openbsd.org/policy.html
[2]: https://www.openbsd.org/plat.html
It should be noted that while GoT is developed by established OpenBSD developers, there is to the best of my knowledge no concrete plan to replace CVS at this point. While it may happen (and I personally hope it will as GoT is a very nice tool in my book), it is still up in the air and left to the rest of the OpenBSD developers when the time comes.
Is that a correct mental model for working with JJ?
The most notable mental shift is that you create a Commit (git add && git commit) after you are done, whereas you create a Change (jj new) before starting work on it, and you don't need to explicitly save it afterwards; you just bookmark and push it, or start working on a new change.
Consequently, there is no distinction between working directory, index and the commit history. You only have one concept - Change - and you edit them however you wish, whether it's a newly created change, or a very old one, or a merge, or whatever else.
From my experience, you should be able to fit any git workflow in this model (a rough command sketch follows the list):
- Anything that requires creating commits is jj new, merge is also jj new.
- If you wish to only save part of the changes (like git add -p) you call jj split.
- squashing commits (git rebase -i with squash) is jj squash
- amend is also squash
- rebase is automatic (jj edit or jj new --{after,before})
- git stash is obsolete (everything is saved anyway, just run jj new)
- etc.
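A minimal sketch of that mapping (bookmark and change names are placeholders):

    jj new master            # new commit / new branch: start a change off master
    jj new changeA changeB   # merge: a change with two parents
    jj split                 # partial commit (git add -p): carve the current change in two
    jj squash                # squash/amend: fold the working change into its parent
    jj new master            # "stash": just start a new change; the old one is already saved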
I want to hear from people who use anything other than the very basic git checkout, fetch, push, merge, rebase, stash, status, commit, reset.
What are you doing?
I think I have used cherry-pick once? Generally I open two copies of the same repo, copy-paste the code I need, and commit it as a new commit - cherry-picking invariably ends up as a whole mess which takes longer than the copy-paste effort for the "feature" I wanted to retrieve.
This is a genuine question not rage bait. I have been in the game for 15 years. Surely there are mega complex things using git internals / obscure commands to run analytics in CI systems or hosted git solutions. But I want to know what you are doing in your workflow that requires “more” of git.
I `git cherry-pick` all the time when I graft a PR. Sometimes I drift and start doing something unrelated to a task, and I often want to push it as a separate thing. Cherry picking it into another branch is most often how I achieve this. This does mean that I have to be structured in how I construct commits while I work though. Drafting, crafting, fixing and maintaining releases is however probably the most common use case for cherry-picking. Some commits need to be cherry-picked for other release lanes.
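For the release-lane case, the flow is usually just (branch name and sha are made up):

    git switch release-1.2          # the maintenance branch that needs the fix
    git cherry-pick <commit-sha>    # copy that one commit over, keeping its message and authorship
    git push origin release-1.2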
`git worktree` is a feature I've often heard being raved about. I've not yet been able to use it for myself, in our code bases, but I imagine it being very nice if you have it work for you.
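In case it helps, the basic shape of it is something like this (paths and branch names are made up):

    git worktree add ../repo-hotfix hotfix-branch   # second checkout in a sibling directory
    cd ../repo-hotfix                               # work there without disturbing your main checkout
    git worktree remove ../repo-hotfix              # clean up when you're done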
Aside: suggestion to add `git switch` and `git restore` to your repertoire. I'm guessing you use `git checkout` for these commands, but switching branches and resetting the staging area are two _very_ different tasks and making these two different commands is a wise decision. Lessens the likelihood for errors, at least for me.
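Concretely, the split looks like this:

    git switch other-branch              # change branches, nothing else
    git switch -c new-branch             # create a branch and switch to it
    git restore path/to/file             # throw away working-tree edits to a file
    git restore --staged path/to/file    # unstage a file without touching its contents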
I have only worked on huge applications that take ages to spin up so bisect was never really an option.
> start doing something unrelated to the task
I would change branch for that, if I ever had the liberty to change priorities like that.
I have not tried git switch, thank you.
I usually keep an improvements commit at the start of my branch which I `--amend` to while picking away at the task. If the commit gets too big I submit it separately, to be reviewed in isolation.
That's the best time to use bisect: set up a script to automate the build and test for the specific thing you're looking for, then use "git bisect run ./foo.sh" and go to lunch.
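Something like the following, where foo.sh is whatever script builds and checks the thing you're hunting (exit 0 for good, non-zero for bad):

    git bisect start
    git bisect bad                  # the current commit is broken
    git bisect good <known-good>    # last commit/tag you know was fine
    git bisect run ./foo.sh         # git walks the history and runs the script for you
    git bisect reset                # jump back to where you started when it's done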
If you work in a commit to trunk model, every commit needs to build and pass tests and be a self-contained unit of work, ideally small, so it's quick to review.
Since you don't want to be held up by the speed of review, you make more commits, each one building on the last, to form a chain. The chain of commits is reviewed, each commit separately, and they're merged in order as you go, in one big bunch at the end, or some mix. That doesn't really matter.
The problem is addressing review comments on early commits in a chain. With git, you need to stash your work or make a WIP commit, jump to the right commit, make edits, then rebase the rest of your chain on that commit. This requires fiddly copying of commit hashes or a proliferation of branch names. If you have a tree of commits instead of a linear chain, it's even more work.
Ideally, when looking at all the comments on your chain of commits, you just make local edits and run a magic command which amends every commit each edit "belongs" to, automatically - this is what `hg absorb` does in mercurial.
(All of the above shouldn't be mistaken for saying jj makes this easier to do. I don't know if it does. I just know that git is painful for this workflow and I'd like something better.)
git commit --fixup <sha>
and git rebase -i --autosquash <previous-sha>
do what you need? Random blog link explaining it: [1]
[1] https://fle.github.io/git-tip-keep-your-branch-clean-with-fi...
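The whole loop for addressing a review comment on an earlier commit looks roughly like this (shas are placeholders):

    git add -p                                   # stage just the hunks that address the comment
    git commit --fixup <sha-of-earlier-commit>   # records a "fixup! ..." commit
    git rebase -i --autosquash <sha-of-earlier-commit>~1
    # the fixup commit is moved next to its target and squashed into it automatically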
git rebase -i?
Forgot to add something to a previous commit? Run "jj squash -i" to move the lines you select to whatever commit you want. Or you can run "jj edit" to check out that commit and edit it directly.
Want to split a commit into two separate commits? Run "jj split".
Need to reorder commits? Run "jj rebase", and if you have a conflict you can "jj edit" the commits that are marked as conflicted to fix it later, unlike Git where you have to run through a lengthy process of fixing conflicts on commits you don't remember and then review the changes later to see whether they still make sense.
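One way to do the reorder with plain destinations, for a chain A <- B <- C where you want C before B (change names are placeholders):

    jj rebase -r C -d A    # pull C out and put it directly on top of A
    jj rebase -r B -d C    # then move B on top of C; descendants get rebased for you
    # any resulting conflicts are recorded on the affected changes, to fix whenever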
If you want to have a messy working copy of your repo that's very easy to do. The workflow would mostly involve:
- Develop the feature
- "jj split" to pick out the stuff you need into a separate commit, which will appear between master and the working copy commit
- "jj describe" to add a commit message
- "jj bookmark set feature-branch" on the commit containing the stuff you want to push
- "jj git push" to push it
- "jj edit" to return to the commit containing the working copy.
You'd end up with a tree that looks kind of like this:
@ ptswumyk 2025-02-12 13:16:36 de46f8c1
│ messy working copy
○ slwozrlr 2025-02-12 13:16:22 feature-branch@origin d3d246a1
│ feature implementation
◆ tssssuzr 2025-02-12 12:34:28 master* 8a9bab0f
│ generate flake registry from inputs
~
So it's not that I really need more features from git, just a better UX, which is what JJ provides.
git archive's more obscure, as is ls-files and cat-file. git worktree's useful; might want to take a look at that one. bisect is another one that comes in handy.
There's a lot more to be gotten from git, but if you can't get cherry-pick to work for you, I'd start there. (which, if you open two copies of the same repo and copy and paste code, you're doing it wrong)
Be able to split a commit into multiple smaller commits. Be able to split a single commit into separate hunks, be able to rebase -i and squash them into something that looks good.
You should be able to move between branches effortlessly and cherry-pick commits onto different branches while working.
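For reference, splitting an existing commit with stock git is doable but takes a few steps (sha is a placeholder):

    git rebase -i <sha>~1     # mark the commit you want to split as "edit"
    git reset HEAD~1          # when the rebase stops there, un-commit it but keep the changes
    git add -p                # stage the first slice
    git commit -m "first part"
    git add -p                # stage the next slice
    git commit -m "second part"
    git rebase --continue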
Using git well means making it work for you instead of fighting it. Locally, I do "git commit -a -m wip" incessantly and messily, but it all gets squashed and picked back apart before it goes up for review.
It has worked for 10+ years - is that really doing it wrong, or just doing it in a way different from yours?
> into something that looks good
Is this a requirement? Or just out of being particular?
>squashed […] before review
Is deleting (obfuscating) the history of your approach (chain of thought) important to you? The reviewer will review the last commit anyway (always), so is this necessary other than subjective neatness? This can be squashed at the point it is merged.
It appears you are very efficient at git. I am having difficulty understanding the real utility of much you described. I wonder if it is a justification for finding the fun in using more uncommon features of git, or for “neatness”? Which is fine.
But when the tool you're using has a feature specifically designed for copying its kind of things from one place to another, and then you choose not to use that feature, and choose to copy things from one place to another using a different but more familiar system, then yes I'm going to say you're doing it wrong. Now, that's just my opinion as a random Internet someone who doesn't know you, your workflow, your use-case, so it's entirely possible I'm wrong about you being wrong. But without clarifying details, yes, that's wrong. Now, wrong things still work, and fundamentally things need to work, so just because it's wrong doesn't mean it doesn't work.
> Is deleting (obfuscating) the history of your approach (chain of thought) important to you?
It's essential. I do dumb things but I don't like looking dumb, so no one needs to know that I spent however many hours going off in the wrong direction before figuring out what I'm really trying to do. The reviewer only sees what I show them, and they'll never see the abandoned commits and branches on my laptop. They don't need to, it's a waste of their time. Treat your reviewer's time as the most valuable thing in the world and don't waste it by having them trawl through your mess.
> I am having difficulty understanding the real utility of much you described.
The real utility, the thing that got me to invest the time into learning git so it works for me, is that when things stop working, I can rewind to when things were working, do a git diff, and it highlights the change which would have taken me hours to find without git.
There's more utility elsewhere, but that's the singular thing that convinced me to invest the time to get good at git.
What's the oldest code you've written? Go back to then and try and tweak a feature. Being neat about git is because that'll happen to you at some point, and future you, two or twenty years from now, will either thank you, or curse you for your commit message.
At my job, we use it to backport upstream commits from vendors or other internal branches. If we just copied the contents (everything at once?!) and made new commits, it would become impossible to track and we'd end up with chaos.
Keeping the original rationale for changes is invaluable when trying to understand complex code. There are few things more frustrating than a commit that says only "fix bug".
As for other git commands, git bisect is wonderful.
You obviously don't need many git features if you are working alone or with only 1-2 team members but the bigger the organization the more complex the git workflow becomes.
Then instead of making a second copy and then pasting the code over and committing it, it's just git cherry-pick and you're done.
Even people who are better at git still often don't know that much about it. I was the git person at my last job and I'm not that good.
Besides push, merge, rebase, stash, status, commit, I've used submodules and git-svn (both for converting a repo to git and for having a local copy for my own use). Rebasing when you run into conflicts is annoying; I much prefer to have IntelliJ guide me through the rebase than using the command line. Submodules are annoying, but I assume part of that is inevitable when you have two repos linked like that; some of it is probably git's fault.