- Superset makes it easy to spin up git worktrees and automatically set up your environment
- Agents and terminal tabs are isolated to worktrees, preventing conflicts
- Built-in hooks [0] to notify you when your coding agents are done or need attention
- A diff viewer to review the changes and make PRs quickly
We're three engineers who've built and maintained large codebases, and we kept wanting to work on as many features in parallel as possible. Git worktrees [1] have been a useful solution for this, but they're annoying to spin up and manage. We started Superset as a tool built around the best practices we've discovered running parallel agents.
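For anyone who hasn't used worktrees before, here's a rough TypeScript sketch of the git plumbing a tool like this automates (the paths and branch names are just examples, not Superset's actual layout):

```typescript
// Rough sketch of the git commands a worktree manager automates.
// Paths and branch names are illustrative only.
import { execFileSync } from "node:child_process";
import * as path from "node:path";

function createWorktree(repoRoot: string, branch: string): string {
  // Each worktree gets its own directory next to the main checkout.
  const worktreePath = path.join(repoRoot, "..", `worktree-${branch}`);

  // `git worktree add -b <branch> <path>` checks out a new branch in an
  // isolated working directory that shares the same .git object store.
  execFileSync("git", ["worktree", "add", "-b", branch, worktreePath], {
    cwd: repoRoot,
    stdio: "inherit",
  });
  return worktreePath;
}

function removeWorktree(repoRoot: string, worktreePath: string): void {
  // Tear the worktree down once the agent is done with it.
  execFileSync("git", ["worktree", "remove", worktreePath], {
    cwd: repoRoot,
    stdio: "inherit",
  });
}

// Usage: spin up an isolated checkout for an agent to work in.
const wt = createWorktree("/path/to/repo", "feature-x");
console.log(`agent can now run in ${wt}`);
```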
Here is a demo video:
https://www.youtube.com/watch?v=pHJhKFX2S-4
We all use Superset to build Superset, and it more than doubles our productivity (you'll be able to tell from the auto-updates). Many of our friends use it over their IDE of choice or as a replacement for their terminal, and it seems to stick because they can keep using whatever CLI agent or tool they want while Superset just augments their existing toolset.
Superset is written predominantly in TypeScript and built on Electron, xterm.js, and node-pty. We chose xterm.js + node-pty because it's a proven way to run real PTYs in a desktop app (it's what VS Code and Hyper use), and Electron lets us ship fast. Next, we're exploring features like running worktrees in cloud VMs to offload local resources, context sharing between agents, and a top-level orchestration agent for managing many worktrees or projects at once.
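For the curious, the core of that xterm.js + node-pty pairing is small. A minimal sketch (illustrative only, not Superset's actual code) looks roughly like this:

```typescript
// Minimal sketch of wiring a real PTY to an xterm.js terminal in Electron
// (node-pty runs wherever Node APIs are available). Not Superset's code.
import { Terminal } from "@xterm/xterm";
import * as os from "node:os";
import * as pty from "node-pty";

// Front end: xterm.js renders the terminal UI into a DOM element.
const term = new Terminal({ cols: 80, rows: 30 });
term.open(document.getElementById("terminal")!);

// Back end: node-pty spawns the user's shell on a real pseudo-terminal.
const shell =
  os.platform() === "win32" ? "powershell.exe" : process.env.SHELL ?? "bash";
const ptyProcess = pty.spawn(shell, [], {
  name: "xterm-256color",
  cols: 80,
  rows: 30,
  cwd: process.env.HOME, // for a tool like this, the worktree directory
  env: process.env as { [key: string]: string },
});

// Pipe bytes both ways: shell output -> screen, keystrokes -> shell.
ptyProcess.onData((data) => term.write(data));
term.onData((data) => ptyProcess.write(data));
```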
We’ve learned a lot building this: making a good terminal is more complex than you’d think, and terminal and git defaults aren’t universal (svn vs git, weird shell setups, complex monorepos, etc.).
Building a product for yourself is way faster and quite fun. It's early days, but we'd love for you to try Superset across all your CLI tools and environments; we welcome your feedback! :)
Most of these agent solutions focus on git branches and worktrees, but none of them seem to mention databases. How do you handle them? For example, in my projects this would mean I need ten different copies of my database. What about other microservices that are used, like Redis, Celery, etc.? Are you duplicating (10-plicating) all of them?
If this works flawlessly it would be very powerful, but I think it still needs to solve more issues than just filesystem conflicts.
For example:
- if you're using Neon/Supabase, your setup script can create a DB branch per workspace
- if you're using Docker, the script can launch isolated containers for Redis/Postgres/Celery/etc.
Currently we only orchestrate when these setup/teardown scripts run, and the user defines what they do for each project, because every stack is different. This is a point of friction we're also addressing by adding features that help users automatically generate setup/teardown scripts that work for their projects.
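To make the Docker case concrete, here's a sketch of what a per-workspace setup script could look like; the workspace-name argument, the port scheme, and the .env.local / DATABASE_URL / REDIS_URL names are assumptions about a typical app, not anything Superset prescribes:

```typescript
// Sketch of a per-workspace setup script: takes a workspace name as an
// argument and starts namespaced services, so each worktree gets its own
// Postgres/Redis instead of sharing one. Names and ports are illustrative.
import { execFileSync } from "node:child_process";
import * as fs from "node:fs";

const workspace = process.argv[2] ?? "default";

// Derive a stable port offset from the workspace name so containers don't collide.
const offset = [...workspace].reduce((a, c) => a + c.charCodeAt(0), 0) % 100;
const pgPort = 15432 + offset;
const redisPort = 16379 + offset;

// One container per service, named after the workspace so teardown is easy
// (teardown: `docker rm -f <workspace>-postgres <workspace>-redis`).
execFileSync("docker", ["run", "-d", "--name", `${workspace}-postgres`,
  "-e", "POSTGRES_PASSWORD=dev", "-p", `${pgPort}:5432`, "postgres:16"]);
execFileSync("docker", ["run", "-d", "--name", `${workspace}-redis`,
  "-p", `${redisPort}:6379`, "redis:7"]);

// Write the connection info where the app in this worktree will pick it up.
fs.writeFileSync(".env.local",
  `DATABASE_URL=postgres://postgres:dev@localhost:${pgPort}/postgres\n` +
  `REDIS_URL=redis://localhost:${redisPort}\n`);
```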
We're also building cloud workspaces, which will hopefully solve this issue for you without users being limited by their local hardware.
For databases, if you can't see a connection string in env vars, use an in-memory SQLite database (:memory:) and make a test DB like you do for unit testing.
For Redis, provide a mock implementation that gets/sets keys in a hash table or dictionary.
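For most app code, the mock doesn't need to be more than something like this (a sketch covering only get/set/del; swap in whatever subset of commands your app actually calls):

```typescript
// Minimal in-memory stand-in for the handful of Redis commands most app
// code actually uses. A sketch, not a drop-in replacement for a real client.
class FakeRedis {
  private store = new Map<string, string>();
  private expiries = new Map<string, number>();

  async set(key: string, value: string, ttlSeconds?: number): Promise<"OK"> {
    this.store.set(key, value);
    if (ttlSeconds !== undefined) {
      this.expiries.set(key, Date.now() + ttlSeconds * 1000);
    }
    return "OK";
  }

  async get(key: string): Promise<string | null> {
    const expiry = this.expiries.get(key);
    if (expiry !== undefined && Date.now() > expiry) {
      this.store.delete(key);
      this.expiries.delete(key);
      return null;
    }
    return this.store.get(key) ?? null;
  }

  async del(key: string): Promise<number> {
    return this.store.delete(key) ? 1 : 0;
  }
}

// Usage: inject the fake wherever the app expects a Redis-like client.
async function demo() {
  const cache = new FakeRedis();
  await cache.set("session:42", "alice", 60);
  console.log(await cache.get("session:42")); // "alice"
}
demo();
```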
Stop bringing your whole house to the camp site.
What does that mean in this context?
What higher fidelity do you get with a real Postgres over an in-memory SQLite, or even PGlite or whatever?
The point isn't that you shouldn't have a database; the point is: what are your concerns? My teams and I care about our code, the performance of that code, and the correctness of that code, and we don't test against a live database, so that the separation of concerns between our app and its storage stays clear. We expect a database to be there. We expect it to have such-and-such schema. We don't expect it to live at a certain address or in a certain configuration, as that is the database's concern.
We tell our app at startup where that address is, or we don't. The app should only care whether we did or not; if not, it will need to make its own to work.
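In code, that startup decision is about this big (a sketch; knex is just a stand-in for whatever driver or query builder you use, and DATABASE_URL for however you pass the address):

```typescript
// Sketch of the "tell the app where the database is, or let it make one"
// idea, using knex as an illustrative query builder.
import knex, { Knex } from "knex";

function createDb(): Knex {
  const url = process.env.DATABASE_URL;
  if (url) {
    // The environment told us where the real database lives.
    return knex({ client: "pg", connection: url });
  }
  // No address given: the app makes its own throwaway in-memory database,
  // with the same schema it expects in production.
  return knex({
    client: "sqlite3",
    connection: { filename: ":memory:" },
    useNullAsDefault: true,
  });
}

// Either way, the rest of the app only sees "a database with the expected schema".
const db = createDb();
```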
This is the same logic as unit testing. If you're unit testing against a real database, that isn't unit testing; that's an integration test.
If you do care about the speed of your database and how your app scales, you aren’t going to be doing that on your local machine.
For bug fixes and quick changes I can definitely get to 5-7 in parallel, but for real work I can only do 2-3 agents in parallel.
Human review still remains the /eventual/ bottleneck, but I find that even when I'm in the "review phase" of a PR, I have enough downtime between agent turns to give another agent the context it needs.
We're looking into ways to reduce the amount of human interaction next. I think there are a lot of cool ideas in that space, but the goal is that over time the tools improve to require less and less human intervention.
And yeah, the next frontier is definitely offloading to agents in sandboxes; Kiet has that as one of his top priorities.
https://github.com/wandb/catnip
How is this different?
But it's on the roadmap, and glad to know there's interest there :)
- Conductor
- Chorus
- Vibetunnel
- VibeKanban
- Mux
- Happy
- AutoClaude
- ClaudeSquad
All of these allow you to work on multiple terminals at once. Some support worktrees and others don't. Some work on your phone and others are desktop only.
Superset seems like a great addition!
For more complex setups, if your app has hardcoded ports or multiple services that need coordination, you can use setup/teardown scripts to manage this, either by dynamically assigning ports or by killing the previous server before starting a new one (you can also kill the previous server manually).
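Dynamic port assignment in a setup script can be as simple as asking the OS for a free port and handing it to the dev server. Here's a sketch (DEV_PORT and the `npm run dev` command are just examples; use whatever your server reads):

```typescript
// Sketch of dynamic port assignment for a workspace's dev server:
// ask the OS for a free ephemeral port, then pass it to the server via env.
import * as net from "node:net";
import { spawn } from "node:child_process";

function getFreePort(): Promise<number> {
  return new Promise((resolve, reject) => {
    const srv = net.createServer();
    srv.unref();
    srv.on("error", reject);
    // Listening on port 0 makes the OS pick an unused port for us.
    srv.listen(0, () => {
      const { port } = srv.address() as net.AddressInfo;
      srv.close(() => resolve(port));
    });
  });
}

async function main() {
  const port = await getFreePort();
  // DEV_PORT is an example variable; use whatever your dev server reads.
  spawn("npm", ["run", "dev"], {
    stdio: "inherit",
    env: { ...process.env, DEV_PORT: String(port) },
  });
  console.log(`dev server for this worktree on http://localhost:${port}`);
}
main();
```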
In practice, most users aren't running all 10 agents' dev servers at once (yet); you're usually actively previewing 1-2 at a time while the others are working (writing code, running tests, reviewing, etc.). But please give it a try and let me know if you encounter anything you want us to improve :)