Fyn: An uv fork with new features, bug fixes, stripped telemetry
61 points
2 hours ago
| 11 comments
| github.com
| HN
jcattle
1 hour ago
[-]
Nah sorry, so far 4 of the 9 commit messages in that fork are "cleanup".

And the first two commits are "new fork" and "fork", where "new fork" is a nice (+28204 -39206) commit and "fork" is a cheeky (+23971 -23921) commit.

I think I'm good. And I would question the judgement of anyone jumping on this fork.

reply
albinn
52 minutes ago
[-]
I agree. I like some of the directions the fork would go and dislike others. The apparent fork, publish on HN, then change sequence (with the changes showing not a lot of understanding) makes me thoroughly question the legitimacy and long-term stability of it.
reply
emil-lp
1 hour ago
[-]
Commit messages say a lot about people.
reply
jcattle
52 minutes ago
[-]
They really do. Let me cite another one from the repo:

"fix: updated readme. sorry was so tired i accidentally mass replaced uv with fyn for all"

reply
bjornarv
1 hour ago
[-]
creator is definitely just jumping on this for some clout
reply
lr1970
1 hour ago
[-]
From fyn's roadmap:

> 2. Centralized venv storage — keep .venvs out of your project dirs

I do not like this. Virtual environments have always been associated with projects and colocated with them. Moving .venv to centralized storage recreates the conda philosophy, which is very different from the pip/uv approach.

In any case, I am using pixi now and like it a lot.

reply
simonw
1 hour ago
[-]
Here's where that feature was (and is still being) discussed in the uv repo: https://github.com/astral-sh/uv/issues/1495

It's been open for two years but it looks like there's a PR in active development for it right now: https://github.com/astral-sh/uv/pull/18214

reply
dec0dedab0de
1 hour ago
[-]
that's my biggest problem with uv, i liked the way pipenv did it much better. I want to be able to use find and recursive grep without worrying that libraries are in my project directory.

uv is just so fast that i deal with it.

reply
valicord
51 minutes ago
[-]
rg/fd respect gitignore automatically which solves this problem
reply
santiagobasulto
51 minutes ago
[-]
Sometimes I want the venvs to be in a centralized location, and just do:

UV_PROJECT_ENVIRONMENT=$HOME/.virtualenvs/{env-name} uv {command}
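A small wrapper makes that pattern automatic (hypothetical helper name; this just keys the central venv on the project's directory name, which is one reasonable convention):

```shell
# Hypothetical helper: compute a central venv path for the current
# project, for use with uv's UV_PROJECT_ENVIRONMENT variable.
central_venv() {
  printf '%s/.virtualenvs/%s\n' "$HOME" "$(basename "$PWD")"
}

# Usage sketch: UV_PROJECT_ENVIRONMENT="$(central_venv)" uv sync
central_venv
```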

reply
imcritic
1 hour ago
[-]
How is pixi better than uv?
reply
lr1970
1 hour ago
[-]
> How is pixi better than uv?

pixi is a general multi-language, multi-platform package manager. I am using it now on my new macbook neo as a homebrew _replacement_. Yes, it goes beyond python and allows you to install git, jj, fzf, cmake, compilers, pandoc, and many more.

For python, pixi uses conda-forge and PyPI as package repos and relies on uv's rattler dependency resolver. pixi is as fast as uv (it uses the fast code path from uv) but goes beyond python wheels. For details see [0] or google it :-)

[0] https://pixi.prefix.dev/latest/

reply
thinkadd
49 minutes ago
[-]
How is it different than mise?
reply
nilslindemann
58 minutes ago
[-]
They are all anachronisms, as they have no GUIs, just commands to be typed into a REPL.
reply
Levitating
1 hour ago
[-]
It has been working fine for build systems like cargo.
reply
short_sells_poo
1 hour ago
[-]
I like it a lot :D.

Virtual environments have always been associated with projects in your use case, I guess.

In my use case, they almost never are. Most people in my industry have 1-2 venvs that they use across all their projects, and uv forcing one into each project directory made it quite inconvenient and caused unnecessary duplication of the same sets of libraries.

I dislike conda not because of the centralized venvs, but because it's bloated, poorly engineered, slow and inconvenient to use.

At the end of the day, this gives us choice. People can use uv or they can use fyn and have both use cases covered.

reply
lr1970
1 hour ago
[-]
> and uv forcing it into a single project directory made it quite inconvenient and unnecessary duplication of the same sets of libraries.

Actually, uv intelligently uses hardlinks or reflinks to avoid file duplication. On the surface, venvs in different projects look like duplicates, but in reality they reference the same files in uv's cache.
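Easy to check for yourself: files that uv hardlinked share a device and inode number. A minimal stdlib sketch (illustrative demo, not uv's code):

```python
import os
import tempfile

def is_hardlinked(a: str, b: str) -> bool:
    # Two paths point at the same underlying data when their
    # device and inode numbers match.
    sa, sb = os.stat(a), os.stat(b)
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)

# Demo: create a file and a hardlink to it, the way uv links
# cached wheels into a project's .venv.
with tempfile.TemporaryDirectory() as d:
    cached = os.path.join(d, "cached.whl")
    linked = os.path.join(d, "linked.whl")
    open(cached, "w").close()
    os.link(cached, linked)
    print(is_hardlinked(cached, linked))  # True
```

Run it against a real cache entry and a venv file to see whether your filesystem got a hardlink or a full copy.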

BTW, pixi does the same. And `pixi global` allows you to create global environments in central location if you prefer this workflow.

EDIT: I forgot to mention an elephant in the room. With agentic AI coding you do want all your dependencies under your project root. AI agents run in sandboxes and I do not want to give them extra permissions to poke around my entire storage. I start an agent in the project root and all my code and .venv are there. This provides a sense of locality to the agent. They only need to poke around under the project root and nowhere else.

reply
mr_mitm
52 minutes ago
[-]
This is actually the feature that initially drew me towards uv. I never have to worry about where venvs live while suffering literally zero downsides. It's blazing fast, uses minimal storage, and version conflicts are virtually impossible.
reply
derodero24
1 hour ago
[-]
As someone shipping native Node addons, registry telemetry (OS, arch, platform) is one of the few ways I know which build targets to actually prioritize. Without it I'd be guessing whether anyone's even using linux-arm64-musl. I get the instinct to strip it, but for package maintainers it's genuinely useful data.
reply
Bender
1 hour ago
[-]
Given the telemetry, how did uv ever get approved/adopted by the open source community to begin with, or did it creep in? Why isn't it currently burning in a fire?
reply
simonw
1 hour ago
[-]
The telemetry they removed here isn't unique to uv, and it's not being sent back to Astral. Here's the equivalent code in pip itself: https://github.com/pypa/pip/blob/59555f49a0916c6459755d7686a...

It's providing platform information to PyPI to help track which operating systems and platforms are being used by different packages.
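Roughly what that looks like on the wire: the tool name and version, then a JSON blob of platform facts packed into the User-Agent header. A stdlib-only sketch; the field names here are illustrative, not pip's exact schema (see pip's session code linked above for the real one):

```python
import json
import platform

def linehaul_user_agent(tool: str, version: str) -> str:
    # Sketch of a linehaul-style User-Agent: "<tool>/<version> <json>".
    # Field names are illustrative approximations of what pip/uv send.
    data = {
        "installer": {"name": tool, "version": version},
        "python": platform.python_version(),
        "implementation": platform.python_implementation(),
        "system": {"name": platform.system(), "release": platform.release()},
        "cpu": platform.machine(),
        "ci": None,  # pip infers this from env vars like CI / BUILD_ID
    }
    return f"{tool}/{version} {json.dumps(data, sort_keys=True)}"

print(linehaul_user_agent("pip", "24.0").split(" ", 1)[0])  # pip/24.0
```

Nothing in there identifies a person; it's the same class of data a browser User-Agent already carries.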

The result is useful graphs like these: https://pypistats.org/packages/sqlite-utils and https://pepy.tech/projects/sqlite-utils?timeRange=threeMonth...

The field that guesses if something is running in a CI environment is particularly useful, because it helps package authors tell if their package is genuinely popular or if it's just being installed in CI thousands of times a day by one heavy user who doesn't cache their requirements.

Honestly, stripping this data and then implying that it was collected by Astral/OpenAI in a creepy way is a bad look for this new fork. They should at least clarify in their documentation what the "telemetry" does so as not to make people think Astral were acting in a negative way.

Personally I think stripping the telemetry damages the Python community's ability to understand the demographics of package consumption while not having any meaningful impact on end-user privacy at all.

Here's the original issue against uv, where the feature was requested by a PyPI volunteer: https://github.com/astral-sh/uv/issues/1958

Update: I filed an issue against fyn suggesting they improve their documentation of this: https://github.com/duriantaco/fyn/issues/1

reply
stackedinserter
50 minutes ago
[-]
Now people on HN defend telemetry. How did we come to this point?

Don't be surprised when you're asked to drink a control bottle in order to continue living.

reply
simonw
48 minutes ago
[-]
Explain to me the harm that is caused to users of pip when this particular set of platform information is sent to PyPI.

(In case you were going to say that it associates hardware platform details with IP addresses - which would have been my answer - know that PyPI doesn't record IPs: https://www.theregister.com/2023/05/27/pypi_ip_data_governme... )

Then give me your version of why it's not reasonable for the Python packaging community (who are the recipients of this data, it doesn't go to Astral) to want to collect aggregate numbers against those platform details.

reply
albinn
1 hour ago
[-]
I don't think it is too bad; the telemetry it sends is quite rudimentary. However, it would have been a good move for astral-sh to be open and explicit about it, and to allow turning it off.
reply
arjie
1 hour ago
[-]
> These things include your OS, py version, CPU architecture, Linux distro, whether you're in CI. All baked into the User-Agent header via something called "linehaul". We ripped that out. Now it just sends fyn/0.10.13. That's it.

I imagine it's just that the User-Agent is something that we've grown accustomed to passing information in. I am fairly biased since I'd always opt-in even to popcon. I think it's useful to have such usage information.

reply
PurpleRamen
38 minutes ago
[-]
This is so useful, I'm shocked they even make a big thing out of it. And now I'm questioning whether this is even their real intention, or just a diversion?
reply
blitzar
1 hour ago
[-]
It was really really good.
reply
Ygg2
1 hour ago
[-]
Telemetry isn't bad in OSS per se. Without it, it's hard to say how an app is used and how to develop it in the future.
reply
add-sub-mul-div
1 hour ago
[-]
Because not everyone has a knee-jerk emotional reaction to a word when that word can mean something benign aside from its typical FUD connotation.
reply
Bender
1 hour ago
[-]
I will always have a "knee-jerk" response to opt-out or mandatory telemetry, or any other outbound connections I did not ask for being initiated automatically. In a corporate world I would have to block this, and depending on what the telemetry is connecting to, that could impact other outbound connections, leading to contention within the org.

One of the optimal ways to do this would be to opt in by setting an environment variable to enable any combination of extra debugging, telemetry, stats, etc. Perhaps even different end-points using environment variables.

reply
maverwa
59 minutes ago
[-]
If I understand the description of this „telemetry" in fyn's „MANIFESTO.md" correctly, it does not make outbound connections you did not ask for. It sets the User-Agent HTTP header to something that identifies your OS, CPU, python version and whether you are running in CI when communicating with the package registry. It does not send any of that to astral, nor is any of that highly personal.

Sure, it should not be there by default, especially OS & CPU imho. But it's not really what I'd call „invasive telemetry".

reply
dirkc
1 hour ago
[-]
I suspect that my normal workflows might just have evolved to route around the pain that package management can be in python (or any other ecosystem really).

In what situations is uv most useful? Is it once you install machine learning packages and it pulls in more native stuff - i.e. is it more popular in some circles? Is there a killer feature that I'm missing?

reply
simonw
1 hour ago
[-]
If you have hundreds of different Python projects on your machine (as I do) the speed and developer experience improvements of uv make a big difference.

I love being able to cd into any folder and run "uv run pytest" without even having to think about virtual environments or package versions.

reply
dirkc
23 minutes ago
[-]
Do you run those projects on the host system as your normal user without any isolation?
reply
simonw
11 minutes ago
[-]
Yes, which makes me very vulnerable to supply chain attacks.
reply
dirkc
7 minutes ago
[-]
Yikes! I had a scare once, and since then I only run sandboxed code or scripts I've written with minimal 3rd party deps.

I assume you have other mitigations in place?

reply
politelemon
1 hour ago
[-]
Imo, uv scripts with the dependencies in the header.

https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...
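For anyone who hasn't seen the format, a minimal self-contained sketch (stdlib only here so it runs anywhere; a real script would list third-party packages under `dependencies`):

```python
# /// script
# requires-python = ">=3.11"
# dependencies = []
# ///
# `uv run hello.py` reads the header above and provisions a matching
# interpreter and environment automatically; adding e.g. "requests"
# to dependencies would make it available on the next run.
import sys

version = f"Python {sys.version_info.major}.{sys.version_info.minor}"
print(version)
```

The whole venv lifecycle disappears behind that comment block, which is the appeal.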

reply
dirkc
20 minutes ago
[-]
I guess that could be useful. I don't have many standalone python scripts, and those that I do have are very basic. It would be really nice if that header could include sandboxing!
reply
simonw
10 minutes ago
[-]
So much this! I've been bugging Astral about addressing the sandboxing challenge for a while, I wonder if that might take more priority now they're at OpenAI?
reply
dec0dedab0de
1 hour ago
[-]
UV is most useful because it is so much faster than everything else. All the other features I could do without.
reply
dirkc
22 minutes ago
[-]
Yep, the speed is nice, I can't argue with that!
reply
albinn
1 hour ago
[-]
The shell and upgrade commands are helpful, especially when onboarding someone who has not used uv before.

Crazy that there is no way in uv to limit the cache size. I have loved using uv though; it is a breath of fresh air.

reply
bovermyer
1 hour ago
[-]
I like the direction this fork is going in. I will wait to use it until it achieves a little more critical mass in adoption, though.
reply
tcbrah
1 hour ago
[-]
love that "we removed the telemetry" is now a headline feature worth forking an entire project over. says a lot about where dev tooling is headed tbh
reply
trollbridge
1 hour ago
[-]
Looks great, and in particular, uv’s cache growing forever and lack of the uv shell command were both maddening.

I assume mainstream uv development will go into maintenance mode now, so it’s great to see a quality lineage like this.

reply
worksonmine
1 hour ago
[-]
Why prefix the settings `UV_CACHE_MAX_SIZE` and `UV_LOCKFILE` with `UV_` if they're new features? Makes no sense.
reply
_flux
1 hour ago
[-]
They are environment variables. I like being able to tell from the name, among my large number of environment variables, which application each belongs to.
reply
worksonmine
1 hour ago
[-]
I know what an environment variable is, my question is why name them `UV_` instead of `FYN_`? I thought that would've been obvious for exactly the same reason you mention, it should be named for the application they belong to.
reply
_flux
1 hour ago
[-]
Ah, I completely missed the point of your question :).

Yes, I think that's a good point. Possibly they were made before the project name was changed and no further thought was given to them afterwards.

reply
lr1970
1 hour ago
[-]
Would be more logical to use FYN_ prefix
reply
unethical_ban
1 hour ago
[-]
Facilitates drop-in migration from uv to the new tool. So if uv adopts the feature it's "just there".
reply
fmajid
1 hour ago
[-]
I'm worried about OpenAI enshittifying uv and ruff now they've acquired Astral, so it's good to have options.
reply