I identified [0] the source for much of it (sub-shells and pipes) and began a PR [1], but became bogged down with BATS testing, and then found mise / rtx, so kind of lost interest. Sorry. You can always implement these if you’d like.
[0]: https://github.com/asdf-vm/asdf/issues/290#issuecomment-1383...
asdf hasn't just started getting popular. It's been popular for a long time already. IIRC I started using it ~8 years ago (~2016). asdf has been around since 2014. I believe Mise (rtx) has been around for a couple of years already too.
Also, contrary to the other comments in this chain, I don't find it particularly slow.
People will generally change taste and likes/dislikes every few years.
Whereas asdf creates shims that go on the PATH. That way, any process that launches other processes using normal environment rules has asdf applied.
Mise looks well built and is very fast. But it's jaw-dropping to me that its coverage is so drastically lower than asdf's.
I switched from asdf to mise and everything works fine _if_ you set up shims.
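For anyone curious, the shims approach is just a matter of putting mise's shim directory on your PATH instead of relying on `mise activate`. A minimal sketch for a bash/zsh rc file, assuming the default data directory (it moves if you set MISE_DATA_DIR or XDG_DATA_HOME):

```shell
# Put mise's shims on the PATH; the directory below is mise's
# default shim location on Linux/macOS.
export PATH="$HOME/.local/share/mise/shims:$PATH"
```

The upside of shims is that tools are visible even to processes that never ran the shell hook (IDEs, cron jobs, scripts).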
The main differences are a better UX with simpler commands, and that it doesn't use shims, which means much better performance.
Solutions considered include adopting the vfox plugin system or transpiling all asdf plugins to ShellJs.
Now I know that vfox exists.
It'll be a long road ahead and I could certainly use some help if anyone out there is interested in moving it forward. That said, vfox is a really great project and they are targeting windows specifically. Windows will probably always be second in the mise ecosystem (because I don't use it) but my hope is I can get at least a baseline of support which would help teams that have occasional windows contributors.
It sure is great! However, like you, I tend to prefer minimalistic and predictable tools.
That's why I decided to add the small comment in the discussion section of the post: to be fair, but also make clear that bloating the runtime manager that was supposed to help manage the bloated runtimes and package managers isn't a great idea.
Having said that, if the scope of mise stabilizes and it doesn't turn into a kitchen-sink kind of project, it sure seems sweet!
However, it turns out that a tool that needs to be extremely CWD-aware also makes a great .env tool and task runner. I was also a little skeptical, but it's actually super useful. Especially because it's easy to convince team members to install it for the tools; they get the rest for free with easy syntax.
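To illustrate the "tools + env + tasks" combination, here's a sketch of a project-local `mise.toml`, written via a heredoc; the tool version, env var, and task name are all made up for the example:

```shell
# Hypothetical mise.toml combining pinned tools, env vars, and a task.
cat > mise.toml <<'EOF'
[tools]
node = "20"

[env]
DATABASE_URL = "postgres://localhost/dev"

[tasks.test]
run = "npm test"
EOF
```

With a file like this in the repo, teammates who installed mise for the tools also get the env vars on cd and `mise run test` for free.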
In the future though I see tasks as being the headline for mise over tools. That's a ways out, certainly more than a year, but the thing about tasks is they don't suffer from the drawbacks that both PATH and shims have for putting your tools in the right place. In my personal use of mise I don't actually like using `mise activate` whatsoever. The problem is just that I can't yet do everything with tasks easily enough. Tasks need to get to a point where they're so easy you won't want to bother with having tools in your shell.
Though who knows. I may be off my rocker on that one. I certainly get things wrong as often as, if not more often than, I get them right.
That said, I think if you thought about _why_ you like minimalistic and predictable tools you may find that mise solves the underlying reasons for that. My whole thing is about augmenting your environment and not replacing it. This is generally where I contrast mise with tools like nix and docker but I thought it was worth calling out.
I think people like mise because they can use it for just setting some env vars, installing a few npm packages globally, having an easy way to synchronize tool versions between local dev and CI/CD. You can use it for any one of those things and it slots right in wherever you are—whether that's inside VSCode, ssh'ed into a remote machine, in a github action, or inside a docker container in a k8s fleet.
Yeah mise is capable of a lot of different things, but the important thing is that it doesn't force you to change anything _else_ about your setup.
Why is it based on the Conda ecosystem? Do you happen to know?
I assume it's for portability, but that sounds heavy.
My org does a lot of work combining machine learning with oceanographic and climate modeling, both domains with deep dependency chains that don't always mesh well, especially as our researchers mix in R and other languages at the same time. The Conda ecosystem helps us a ton with that, but there are issues that `conda` and `mamba` don't help us out with.
Pixi takes a swing at some of what the Conda ecosystem hasn't been great at (at least without a lot of manual custom ceremony) but which Cargo, Poetry, Pipenv, PDM, and other dependency- and workflow-management tools have demonstrated can be done: lock files, cross-platform dependency management, task running, and defining multiple related environments.
What's really cool is that when you have a mix of projects, Pixi can work almost entirely PyPI-native out of a `pyproject.toml` (other than installing Python from conda-forge), so you can mix and match environments but stay with the same tool. https://prefix.dev/blog/using_python_projects_with_pixi docs: https://pixi.sh/latest/advanced/pyproject_toml/
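As a rough sketch of what that looks like (the table names come from the Pixi docs linked above; the project name and dependency choices are hypothetical):

```shell
# Minimal pyproject.toml fragment for Pixi, written via a heredoc.
cat > pyproject.toml <<'EOF'
[project]
name = "demo"
version = "0.1.0"
dependencies = ["requests"]        # regular PyPI dependencies

[tool.pixi.project]
channels = ["conda-forge"]
platforms = ["linux-64", "osx-arm64"]

[tool.pixi.dependencies]
python = "3.11.*"                  # Python itself comes from conda-forge
EOF
```

Everything under `[project]` stays standard PyPI metadata; only the `[tool.pixi.*]` tables are Pixi-specific.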
asdf handles tools, not really packages. So asdf would install Python and not Python packages.
The other two reasons are more subjective. The first is that I've had fewer issues installing packages with PDM. It's been a while and I don't remember which specific packages Poetry struggled to install, but my takeaway was just that I don't run into those problems with PDM the way I did with Poetry.
The other larger/more consequential reason is the maintainers of Poetry. There have been a number of replies in GitHub issues laying out plans for things they will/won't do with Poetry, and they didn't inspire confidence in me. I want to do things the "Python way," not the "Poetry way," and PDM adheres more closely to the direction the Python project is moving. I don't want surprises later down the road when a tool (Poetry) is doing a bunch of custom stuff that eventually may not be compatible with the ecosystem at large.
All that being said, I had a lot of good things to say about Poetry while I was using it, and I do understand why people make it their package/project manager. Just wasn't the right fit for me.
asdf.vm together with pipenv is my go-to for Python environment management.
[0] https://pip-tools.readthedocs.io/en/stable/
As for Poetry, it is constantly improving and has gotten very popular. It should not be dismissed, especially for larger projects since its dependency management is so much better than pipenv. This is a good primer: https://diegoquintanav.github.io/poetry-primer.html
I think it’s fair to see appeal in poetry, but ultimately the maintainers have created a culture that feels a bit too non-collaborative to outside ideas or use cases beyond what they more narrowly envisage. That said, my perspective may just be tainted by multiple poor experiences interacting with the maintainers.
then you'd also need rbenv, nvm, etc.
and pyenv can implode in marvelous ways.
pyenv isn’t perfect, and isn’t what I’d use for prod images, but for dev setup it’s relatively bulletproof and less issue-prone than any alternative I’ve seen.
I just tried googling it, having not really used CL in a while, and apparently it was SEO'd to the top of Google results too?
(it's a bit like Python stealing the name of a CL compiler ...)
I don’t like Nix but I haven’t found anything else that scales along those critical requirements. I don’t think it’s a good idea to simply replace rbenv/nvm/etc with asdf-ruby-plugin and so on - unless your software isn’t intended to leave your development machine?
(Docker for me fails in the opposite direction - fairly miserable to develop with but trivial to deploy.)
Of course it only works if your codebase and tools are all JS-based!
Having worked recently on a project that was mostly TypeScript with some Python, the TS bits were mostly straightforward but the Python was a hassle in both dev and production (I used venv). I can see that asdf might have been handy for development but if it didn't have a good deployment workflow that wouldn't have helped.
asdf generally doesn't reinvent version management, but wraps and re-uses ruby-build, node-build, etc.
It fails if your single project is a legacy monster needing four versions of node, two pythons and a handful of javas - but that's not a common use case.
More commonly you have multiple projects, each with a single version of node, python and java. For deployment you only need one of each - it's in development you need five of each when switching between projects.
Devenv seems nice (in fact it’s how I started down this path) but I haven’t found anything it does for me that I can’t get out of flakes - so far.
Our pilot is quite a bit larger. Sticking to plainer flakes has made it easier for folks to self-service for now but we do intend to re-evaluate devenv.
Same username on twitter if you’re interested in chatting.
- steep steep learning curve, so your team is split between those who can understand it and those who have to blindly follow checklists and ask for help when something breaks
- it doesn’t play well on macOS
Agree about the learning curve; but I am going to experience onboarding my coworkers onto using Nix only for developer environments over the next months; I feel the curve is not quite that steep for that limited use case.
Re: onboarding - I’m doing the same thing at a somewhat larger scale. Same username on twitter if you want to start a support group. :)
I’ve also had issues with GUIs and Xcode as noted in other comments but I don’t mind that - those are much more of a solved problem than, say, keeping seven different JDKs around.
Also it plays really nicely on macOS unless you’re trying to share nix config across macOS and Linux which…just fork and move on, it’s not worth it. :)
For a single user with one development machine, simply having say a time-machine backup could be sufficient. I haven't had challenges for personal projects where details mattered. e.g. a Maven pom.xml, or Go modules/packages was sufficient for my needs.
Historically I'd only cared about automating the spec of production environments. Why would I want/need this?
I now recollect once being contacted out of the blue as a person who might be able to diagnose/solve an issue at a company I'd never worked with. They had two dev machines and only one of them could produce a working program. Their team couldn't figure it out. I gave them a rate and arrived on-site. It was a Visual Basic 6 program, so I just took two half days going through every EXE & DLL related to Windows and VB, eventually finding the difference. Tedious but not rocket science. Is it to avoid these cases?
Edit: We have project onboarding instructions where I work. I suppose it could be useful for making those. I don't make them but could appreciate if they used a standard rather than bespoke scheme.
Always. Golang is overly opinionated about where modules and binaries are stored. I don't like that, and I've blown my local development environment to pieces because of it (looking at you, gRPC, yikes).
But also, imagine that you, like me, need to test Python, Java+Kotlin+Gradle, and NodeJS+Angular stuff. Do you really want to install _all that_ natively? Just for a couple of merge reviews? And even if not, do you _really_ want to install all that natively? The answer is always, IMHO, a resounding and clear no.
> It was a Visual Basic 6 program, so I just took two half days going through every EXE & DLL related to Windows and VB, eventually finding the difference. Tedious but not rocket science. Is it to avoid these cases?
For example, but it can be much worse: as mentioned in the OP, it's to prevent the very real possibility of crippling your OS's language runtimes, and also to stay productive.
At work, because everyone uses a Mac, we ended up using Kandji to achieve the same thing: everyone has the same tools and environments. But that only makes sense if you already have to use it due to security audits and the like.
If I had a small company myself, I would probably set everything up with Guix, as I really like the way it works, more than Nix (though only because I prefer Lisp config files and because Guix doesn't suffer from controversies like the flake soap opera).
https://learn.microsoft.com/en-us/aspnet/core/grpc/basics?vi...
It is quite helpful. Also incredibly practical when checking whether Library X will work on an older version.
I do open source and consulting for clients. I deal with a lot of projects, my own and others'.
Thanks all for the replies. And sorry if I'm asking basic questions and should just read the asdf readme. On my custom layout I have to type A-S-R-H to get asdf.
Still no. asdf manages the versions of the runtimes themselves. E.g. I have a project that uses ruby 2.7.2 and Terraform 1.1.7. If I'm using asdf, I declare this in the .tool-versions file of the project, and then when I navigate to that directory, every invocation of `ruby` or `terraform` will run those exact versions.
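For reference, the `.tool-versions` file for that example is just one tool per line (written here via a heredoc so you can see the exact contents):

```shell
# The .tool-versions asdf reads for the example above.
cat > .tool-versions <<'EOF'
ruby 2.7.2
terraform 1.1.7
EOF
```

asdf can also write this file for you with `asdf local ruby 2.7.2` etc., but the end result is the same plain-text file checked into the repo.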
Separately, packages (Ruby Gems, Python packages, etc) will also be isolated per-version of each runtime, but that's a side effect rather than the goal with asdf.
Any tool installed via asdf is available in any directory, as long as you are accessing that directory via a shell spawned with a .profile, .bashrc, or similar startup file which contains your asdf configuration.
> asdf Node installs aren't like Python virtual environments
Correct, neither should they be.
> they're centrally installed and one Node version (and its packages) is shared across all diretories
Sure they are, and that's by design. You're conflating a runtime manager with a package manager. Venvs are _not_ runtime managers; the moment you need another Python version, you're done for. asdf is _not_ a package manager; the moment you want package isolation while working on an asdf install is the moment you install pipenv, poetry, or pdm, or use Python venvs for that.
> that want it + global tools.
which is achieved as I've shown below. Still, there was no reason in your use case to modify or play with globally installed tools besides asdf, through which you can then define global runtime versions; those global versions will hold your global tools, usable anywhere.
By giving each box its own home folder, VSCode in each has only the extensions for that language. E.g. I don't have any Python extensions in my Node.js box.
Been working like this for a couple of weeks now and it's pretty good.
If I end up breaking a box I can simply delete it and start over.
One project of mine, though, requires a shareable / pseudo-reproducible dev environment. Devbox didn't cut it, and mise especially couldn't have, since the project requires some system deps. I went with a Nix flake, which worked fine, but also started building a special distrobox image for the project, this time using Fedora as a base, as I perceived it as more stable. Using it through distrobox still had some issues, but I managed to make a shim/helper that runs `pnpm` from inside the container, and that works pretty much perfectly. Might be a bit worse on the performance side, though. We'll see.
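The shim mentioned above can be as simple as a one-line wrapper script; a sketch, where `dev` is a placeholder for the actual distrobox container name:

```shell
# Forward pnpm invocations into the distrobox container "dev"
# (container name is hypothetical; substitute your own).
cat > pnpm <<'EOF'
#!/bin/sh
exec distrobox enter dev -- pnpm "$@"
EOF
chmod +x pnpm   # then put this somewhere early on your PATH
```

Arguments pass straight through, so `pnpm install`, `pnpm run build`, etc. behave as if pnpm were installed on the host.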
The problem with this is that docker doesn't like it when not all mounts are found, so within a team it requires something more sophisticated.
That's basically what asdf does, just automated.
Edit: Decided to peruse the code for the python asdf plugin myself and it seems to just use pyenv under the hood anyway, so I guess it's not really a question of what asdf does anyway.
Generally asdf shines when you need more than one system installed - say a html or SQL language server that depends on node in addition to python for your main app.
There are only three for me and I don't have a reason for more in the near future, so I just remember.
> Generally asdf shines when you need more than one system installed
Makes a little more sense to me.
(Note: I like the tool)
> can activate project tooling upon cd’ing into a project folder
this probably can be replicated with zsh hooks: https://zsh.sourceforge.io/Doc/Release/Functions.html#Hook-F...
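A bare-bones version using the `chpwd` hook from those docs, for a `.zshrc`; the per-project `.env` sourcing is purely illustrative:

```shell
# .zshrc sketch: run a function every time the working directory changes.
autoload -U add-zsh-hook

_load_project_env() {
  # Illustrative only: source a per-project env file if one exists here.
  [ -f .env ] && source .env
}

add-zsh-hook chpwd _load_project_env
```

Tools like direnv, asdf, and mise are essentially this pattern plus caching, version resolution, and safety checks.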
That example in the article of managing multiple python 2.7 versions sounds like a horror story.
> Is it just me that never even wants to get to the problems that asdf attempts to solve?
You aren't alone; the scenario isn't ideal. However, brew's Python installation on macOS, like Debian's and Ubuntu's, is _extremely_ brittle. You are one cask, formula, or apt package away from needing a weekend-long spelunking session or a full-blown system reinstall if you have deadlines.
PyEnv is a pain to set up and maintain; it's what I used in the years before and after Python 2 was deprecated, as projects started slowly migrating to newer Python versions.
> That example in the article of managing multiple python 2.7 versions sounds like a horror story.
It is a horror story, but is very common.
Have you tried to install and maintain Java, Kotlin, and Gradle for a given project when your machine is not primarily a Java/Kotlin/Gradle box? That is a real nightmare; not so much with asdf.
It will manage the JDK for you. Usage is basically this:
# Install a JDK, that version is now default
sdk install java <version>
# Another one, it asks if you want to change the default
sdk install java <another-version>
# List available and installed versions
sdk list java
# Change which one you're using in this shell
sdk use java <version>
That's all. You can also manage Gradle/Maven installations with SDKMAN, but that's usually not necessary, because most JVM projects include a "wrapper" script which downloads the needed Maven/Gradle version for you.
This works regardless of whether your project also needs Kotlin/Groovy etc. as those are just managed by Gradle/Maven (the only exception I can think of is if you use Kotlin Multiplatform as that will depend on the platform dependencies as well).
So once you know SDKMAN, you can manage any JVM-based project with just this:
sdk use java <jdk-version-used-by-project>
./gradlew build # or ./mvnw package
If you need to do anything else, you should complain to the project authors as this is really all you should need!

Sure. But you might need node for some front end build tool, or a language server for sql. Then you can use two version managers, or just asdf.
I've been using python installed using homebrew and haven't found any issues. In homebrew you can install a specific python version like python@3.11 and using venvs avoids most of the issue (I think you can't install packages outside of a venv in python 3.12 or higher).
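That workflow in full, assuming any `python3` on PATH (swap in `python3.11` from the versioned formula if you need that exact version):

```shell
# Create an isolated environment next to the project; packages installed
# into it never touch the Homebrew-managed site-packages.
python3 -m venv .venv
.venv/bin/python -c 'import sys; print(sys.prefix)'   # prints a path inside .venv
```

From there, `.venv/bin/pip install …` (or `source .venv/bin/activate` first) keeps every project's dependencies separate while sharing the one Homebrew interpreter.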
I run everything for a project in a container. Each project then matches perfectly the container actually used in production, so if it works there, it also works on my machine. I just volume mount the project folder into the container so I can edit files from my IDE, and PyCharm has OK support for remote interpreters.
A couple of moons ago I used Poetry, but gave up on it because it was so heavy and unfortunately would bug out often.
I did try using something similar to asdf (can't remember the name, think it changed), but it still didn't really solve the problem of OS dependencies and things needing to be compiled, and the problems arising from me not running the same OS as the application would run on. A dockerfile solves that, my system is a carbon copy of the prod environment.
Yeah, that definitely can't be beat; if it can, it's probably only on comfort.
> Not tried pipenv
I've been meaning to put a tutorial out there with my workflow since forever, if I had it I'd point you to it.
I recommend you give it a try if you get the chance, you might like it.
Nice in theory but not worth it.
For one, you have to limit yourself to vscode and/or other IDEs with this capability - which ought to be a dealbreaker right there.
But then you still have issues around syncing permissions and paths inside+outside the environment. And that all your other windows have a different view into your project.
That alone is another dealbreaker (which you can bandaid, but...).
And then if you need access to USB devices, well for one I hope you are running linux but even then that is a frigging nightmare. And there is some headache balancing everything above with admin rights etc.
Yes in theory it is perfect. In practice we are not nearly there and you quickly realize the effort to do this well is orders of magnitude more work than just running native.
I still always have a container for continuous integration, set up so that you can run it easily on your workstation and turn it into a devcontainer and/or build it manually for small fixes in an old project you haven't touched in a long, long time. Which is great!
But for your main development? I really tried but it is a nightmare in disguise.
You don't have to. There's a devcontainer CLI.
https://code.visualstudio.com/docs/devcontainers/devcontaine...