The vacuum that arXiv originally filled was one of a glorified PDF hosting service with just enough of a reputation to allow some preprints to be cited in a formally published paper, and with just enough moderation to not devolve into spam and chaos. It has also been instrumental in pushing publishers towards open access (i.e., to finally give up).
Unfortunately, over the years, arXiv has become something like a "venue" in its own right, particularly in ML, with some decently cited papers never formally published and "preprints" being cited left and right. Consider the impression you get when seeing a reference to an arXiv preprint vs. a link to an author's institutional website.
In my view, arXiv fulfills its function better the less power it has as an institution, and I thus have exactly zero trust that the split from Cornell is driven by that function. We've seen the kind of appeasement prose from their statement and FAQ [1] countless times before, and it's now time for the usual routine of snapshotting the site to watch the inevitable amendments to the mission statement.
"What positive changes should users expect to see?" - I guess the negative ones we'll have to see for ourselves.
I think both sides could learn from the other. In the case of ML, I understand the desire to move fast, and an average time-to-publication of 250-300 days at some top-tier journals can feel like an unnecessary burden. But having been on both sides of peer review, there is value to the system, and it has made for better work.
Not doing any of it follows the same spirit as not benchmarking your approach against more than maybe one alternative, and that only as an afterthought. Or benchmaxxing without exploring the actual real-world consequences, time/cost trade-offs, etc.
Now, is academic publishing perfect? Of course not; it's very, very far from it. It desperately needs reform: to stay economically accessible, to be time-efficient for authors, editors, and peer reviewers alike, to keep the "hot topic of the day" from dominating journals, and to make sure peer review aligns with the needs of the community and actually improves the quality of the work, rather than enabling "malicious peer review" that exists to push citations or pet peeves.
Given the power that the ML field holds and the interesting experiments with open review, I would wish for the field to engage more with the scientific system at large and perhaps try to drive reforms and improve it, rather than completely abandoning it and treating a PDF hosting service as a journal (ofc, preprints would still be desirable and are important, but they can not carry the entire field alone).
The current balance, where people write a paper with reviewers in mind, upload it to arXiv before the review concludes, and keep it on arXiv even if it's rejected, is a nice one. Readers get to form their own opinion, but there is also enough self-imposed quality control, just from wanting the paper to pass peer review, that even a rejected paper is better than one written without anticipating peer review at all. And this works because people are still somewhat incentivized to get official peer-reviewed publications too. But being rejected is not the end of the world either, because people can already read the work and build on it via arXiv.
The problem is that "optimizing for peer review" is not the same thing as optimizing for quality. E.g., I like to add a few tongue-in-cheek asides to entertain the reader, but then I have to worry endlessly about anal-retentive reviewers who refuse to see the big picture.
It is an interesting instance of the rule of least power, https://en.wikipedia.org/wiki/Rule_of_least_power.
People think, for instance, that RDFS and OWL are meant to SHACL (read: shackle) people into bad, over-engineered ontologies. The problem is that these standards add facts and don't subtract facts. At the risk of sounding like ChatGPT: it's a data transformation system, not a validation system.
That is, you’re supposed to use RDFS to say something like
?s :myTermForLength ?o -> ?s :yourTermForLength ?o .
The point of the namespace system is not to harass you, it is to be able to suck in data from unlimited sources and transform it. Trouble is it can’t do the simple math required to do that for real, like ?s :lengthInFeet ?o -> ?s :lengthInInches 12*?o .
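That add-only flavor of rule can be sketched in plain Python. This is a toy triple store (a set of tuples), not a real RDFS engine, and names like :lengthInFeet are made up for illustration; the point is that inference is monotonic, and that the feet-to-inches rule needs arithmetic RDFS itself can't express:

```python
# Toy monotonic inference over a set of RDF-ish triples.
# Vocabulary terms here are illustrative, not from any real ontology.
EX = "http://example.org/"

triples = {
    (EX + "bridge", EX + "lengthInFeet", 10),
}

def infer(triples):
    """Apply the rule  ?s :lengthInFeet ?o  ->  ?s :lengthInInches 12*?o.

    Note: inference only ADDS facts; nothing is ever retracted,
    which is exactly the "can't subtract facts" property above.
    """
    new = set()
    for s, p, o in triples:
        if p == EX + "lengthInFeet":
            new.add((s, EX + "lengthInInches", 12 * o))
    return triples | new

result = infer(triples)
```

The union at the end is the whole story: a validation system could reject or drop the feet-based triple after converting it, but an RDFS-style transformation can only ever grow the graph.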
Because if you were trying OWL-style reasoning over arithmetic, you would run into Kurt Gödel kinds of problems. Meanwhile, you can't subtract facts that fail validation, and you can't subtract facts that you just don't need in the next round of processing. It would have made sense to promote SHACL first instead of OWL, because garbage in, garbage out: you are not going to reason successfully unless you have clean data… but what the hell do I know, I'm just an applications programmer who models business processes enough to automate them.

Similarly, the problem of ordered collections has never been dealt with properly in that world. PostgreSQL, N1QL, and other post-relational and document DB languages can write queries involving ordered collections easily. I can write rather unobvious queries by hand to handle a lot of cases (wrote a paper about it), but I can't cover all the cases, and I know back in the day I could write SPARQL queries much better than the average RDF postdoc or professor.
As for underengineering, Dublin Core came out when I worked at a research library, and it just doesn't come close in capability to MARC from 1970. Larry Masinter over at Adobe had to hack the standard to handle ordered collections because… the authors of a paper sure as hell care what order you write their names in. And it is all like that: RDF standards neglect basic requirements they need to be useful, and then all the complex/complicated stuff really stands out. If the basics were done right, maybe people would use these standards; as it stands, they don't.
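The author-ordering pain is concrete once you see how RDF encodes a list: an rdf:List is a cons chain of rdf:first/rdf:rest triples, so reading names back in order means walking that chain by hand. A toy sketch (blank nodes faked as "_:l1"-style strings, a set of tuples standing in for a real store):

```python
# Why ordered collections are awkward in RDF: "paper1 has authors
# (alice bob)" flattens into a cons chain of rdf:first/rdf:rest triples.
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
FIRST, REST, NIL = RDF + "first", RDF + "rest", RDF + "nil"

triples = {
    ("paper1", "ex:authors", "_:l1"),
    ("_:l1", FIRST, "alice"), ("_:l1", REST, "_:l2"),
    ("_:l2", FIRST, "bob"),   ("_:l2", REST, NIL),
}

def objects(triples, s, p):
    """All objects o such that (s, p, o) is in the store."""
    return [o for (s2, p2, o) in triples if s2 == s and p2 == p]

def read_list(triples, node):
    """Walk an rdf:first/rdf:rest cons chain back into a Python list."""
    out = []
    while node != NIL:
        out.extend(objects(triples, node, FIRST))
        node = objects(triples, node, REST)[0]
    return out

head = objects(triples, "paper1", "ex:authors")[0]
authors = read_list(triples, head)
```

In SQL or a document DB the author list would just be an array column; here every reordering, insertion, or query over position has to reason about that pointer chain, which is what makes the SPARQL versions so unobvious.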
This just isn't true. arXiv is not a venue. There's no place that gives you credit for arXiv papers. No one cares if you cite an arXiv paper or some random website. The vast vast majority of papers that have any kind of attention or citations are published in another venue.
Personally, I think this resource mismatch can help drive creative choice of research problems that don't require massive resources. To misquote Feynman: there's plenty of room at the bottom.
Is a mid-to-high engineering salary outlandish for the CEO of what is likely to be a fairly major non-profit? Even non-profits have to be somewhat competitive on salary, and the ideal candidate is likely someone weighing this against a tenured position at a major university.
And while academic salaries are generally not great, tenured professors at big universities tend to make a fair bit (plus a lot more vacation time and perks than is normal in the US).
So if this is correct, then even in Switzerland, it seems like $300,000 per year would be an obscenely high salary for a senior developer.
Even if we scope it to SWE, I don't think that's far off the US percentiles.
In London I imagine the top 10% SWE is not even 100k GBP. In Germany even worse.
I cannot imagine what one could possibly need $300,000 per year for, unless an apartment costs something like $200,000 per year.
Not really a tenable long-term situation for a senior employee with plans to start a family. Family homes of decent size and area are literally millions of dollars.
Besides, I did already say that everyone else was underpaid relative to costs. But that's not unique to the Bay Area. Cost of housing relative to income is terrible in almost all of the major European cities too.
Once cities become wealthy enough to develop a home owning class, they seem to cease being able to provision adequate housing supply in general.
It is actually quite common to come across HAL in subfields of mathematics in my experience.
arXiv does not need to and should not optimize for “shareholder value”, which is at least nominally the justification for outlandish CEO pay packages.
Though, saying that, I suppose all the reputation data is kind of public. Apart from emails/accounts.
The reason is that arXiv is growing significantly, leading to a $297,000 deficit in operating costs for 2025 alone. Cornell has helped with donations, along with other organizations that pay membership fees.
As a result, donors + leaders of arxiv think it's best to spin off to increase funding.
Most people I talk to hate that pipeline and spend a lot of debug hours on it, because arXiv can't always compile what Overleaf and your local LaTeX install can.
The reason authors like and use arxiv is that it gives 1) a timestamp, 2) a standardized citable ID, and 3) stable hosting of the pdf. And readers like the no-nonsense single click download of the pdf and a barebones consistent website look.
All else is a side show.
Also, the "human review" is a simple moderation process [1]. It usually does not dig into the submission's scientific merits.
I've contracted into some consultancy teams which you could uncharitably describe as "15 people and $4mn/yr to create one PDF per month".
arXiv is doomed. It was nice while it lasted.
A setup as a US-based "non-profit" is worrisome, if only because $300K is an obscene salary even in a for-profit setting. That the US-based posters can't see this is evidence of the basic problem: the US, both left and right, has been taken over by a neoliberal, feudal, antidemocratic, nativist mindset that is anathema to the sort of free interchange of ideas that underlay arXiv's development in the hands of mathematicians and physicists, who are now swept aside and ignored by machine-learning grifters and technicians who program computers.
Could they not have made it into some legal structure that puts universities at the top? Say, with a bunch of universities owning shares that comprise the entirety of the ownership of arXiv, but that would allow arXiv to independently raise funds?
The article says that "it will become an independent nonprofit corporation", and as OpenAI's failed attempt showed, converting a non-profit to a for-profit organization is either really hard or impossible.
> Could they not have made it into some legal structure that puts universities at the top?
As a corporation (even a non-profit one), it will have a board of directors. I have no idea what their charter will look like, but I would be surprised if at least one seat wasn't reserved for a university representative, and more than that seems quite likely as well.
So if OpenAI with billions of dollars only partially succeeded at converting to a for-profit business, then that suggests that organizations with fewer resources (like arXiv) have much worse odds.
Any change to the basic premise will be a negative step.
They should just be boring, quiet, unopinionated, neutral background infrastructure.
All the Mozilla executives have done for the last 15+ years is
* lay off developers
* spend lots of money on stupid side projects nobody asked for or wants
* increase their own salaries
and all that with the backdrop of falling quality, market share, and relevance.
I would happily donate to Firefox, but this fucked up organization will never see a single cent from me. They will spend it on anything but Firefox, which is the only thing anybody wants them to spend it on.
It might already be too late, and we will be left with a browser monopoly.
"oh no, you see we are not a preprint server host anymore, our mission is a values driven blablabla to make a meaningful change in the blablabla, we have spent X dollars to promote the blablabla, take me seriously please I'm also fancy like you! "
Exactly. It should be a utility. Not quite dumb pipe, but not too far either.
OpenAI shows exactly how well that works and what that kind of governance does to a company and to its support of science and the commons.
TL;DR, it's fucked.
You need your favourite academic gatekeeper (= thesis advisor) to vouch for you in order to be allowed to upload.
Then AI slop gets flagged and the shame spreads through the graph. And flaggings need to have evidence attached that can again be flagged.
> arXiv requires that users be endorsed before submitting their first paper to arXiv or a new category.
It's probably not perfect but in practice, it seems to have been enough to get rid of the worst crackpotty spam.
another will need to rise to take its place.
To this end, they added an endorsement requirement this year: https://blog.arxiv.org/2026/01/21/attention-authors-updated-...
People keep falling into the same trap. They love monopolies, then are shocked when those monopolies jerk them around.
Everything published on arXiv could also be published on Zenodo, but not the other way around.
I don't see much of a monopoly, nor any "moat" apart from it being recognised. You can already post preprints on a personal website or on GitHub, and there are "alternatives" such as ResearchGate or Zenodo that can also host preprints, plus some lesser-known options. I do not see anything special in hosting preprints online apart from the convenience of having a centralised place to put them and search for them (which you call a "monopoly").
If anything, the recognisability and centrality of arXiv helped a lot in the old, darker days to establish open access to papers. There was a time when many journals would not let you publish a preprint, or had all kinds of weird rules about when you can and can't. Probably some still do.
I am wary of that. IMO the business model is damaged therein. You can say in 2022 we had 27; bankrupt in 2030.
I had to tell my AI to set up an MCP server for "fetch while bypassing arXiv's rate limit" so that it doesn't burn 40k tokens looking for workarounds every time it wants to look at a paper and gets hit with a "sorry, meatbags only" wall.
Very annoying, given how relevant arXiv papers are for ML specifically, and how many papers there are. You can't "human flesh search" through all of them to pick the relevant ones for your work, and they just had to insist on making it harder for AIs to do it too.
That is, it's not readily parseable; it really gives an insider-term vibe, like this isn't for you if you don't already know what it means or how you should read or say it. It sort of reminds me of the overuse of Latin and Latinate terms generally in the old professions and, well, the academy.
Just always struck me as being somewhat at odds with the goal.
To me it's just a way to get out your work fast, so that there is already a trace of it on the Internets - nothing more and nothing less.
> That is, it's not readily parseable, it really gives an insider term vibe...
Isn't that normal with highly specialized research fields? I agree many papers could benefit from clearer wording, but working in a niche means you sometimes don't reach a broader audience
But I did justify it, and maybe to reword slightly: surely, if one of the main drivers is opening up research, the brand name should be something less obscure, something more accessible and understandable as to what it is on first sight?
Maybe arXiv evoking the word 'archive' with an ancient Greek twist does that for some, but it's clearly a bit cryptic for many, and if the point is to open up probably the brand should just be something much plainer.
Using a brand as a filter where you have to already know what it means to get it is exactly the opposite of what it's supposed to achieve.
Consider the most exclusive (successful) brands that exist. Even there, where exclusivity is a brand goal, none of them have this property of being obscure on first contact.
It's reasonable to have a trade-off here to avoid cranks and, nowadays, AI-psychosis slop. You can still post on ResearchGate, academia.edu, or your own GitHub page or web hosting.
The original service didn't even have a name, only a description, and it was amusingly hosted at xxx.lanl.gov. But LANL wasn't really interested in it, and the founder eventually left for Cornell. At that point, the service needed a domain name, but archive.org was already taken.
And besides, the name has Ancient Greek influences. A similar Latinate term might be something like "archive".
Isn't that actually kind of a good brand signal for a repo of very specialized papers? "Fun with learning" in Comic Sans wouldn't help credibility.