if you don't care about W, or it is essentially constant, then it can be dropped
but it is an input parameter that will change execution time
"sorting" means assigning things into bins (which are usually ordered).
Supposing one uses a 'trie' as a priority queue, the inserts and pops are effectively constant (rough sketch below).
(There's also the problem of how you define your computational model. You can do better than O(n log n) in transdichotomous models. I'm assuming the hand-wavy, naive model the average algorithms class goes along with.)
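To make the trie-as-priority-queue idea concrete, here's a rough sketch (my own illustration, not anyone's production code): a binary trie over fixed-width integer keys, where insert and pop-min each walk a single root-to-leaf path, i.e. O(w) for w-bit keys, which is "effectively constant" once the key width is fixed. Duplicates collapse to a single entry here; a real version would keep a count per leaf.

```python
class TriePQ:
    """Binary-trie priority queue over fixed-width integer keys.

    insert() and pop_min() each walk one root-to-leaf path, i.e. O(w) work
    for w-bit keys -- "effectively constant" once the key width is fixed.
    Note: duplicate keys collapse (set semantics); keep a count per leaf
    if you need multiset behaviour.
    """

    def __init__(self, bits=32):
        self.bits = bits
        self.root = {}  # each node is a dict with optional 0/1 children

    def insert(self, key):
        node = self.root
        for i in range(self.bits - 1, -1, -1):
            bit = (key >> i) & 1
            node = node.setdefault(bit, {})

    def pop_min(self):
        # Walk down preferring the 0-child; remember the path for pruning.
        path, node, key = [], self.root, 0
        for _ in range(self.bits):
            bit = 0 if 0 in node else 1
            path.append((node, bit))
            key = (key << 1) | bit
            node = node[bit]
        # Prune now-empty branches bottom-up so dead paths don't accumulate.
        for parent, bit in reversed(path):
            if not parent[bit]:
                del parent[bit]
            else:
                break
        return key


pq = TriePQ(bits=8)
for x in (42, 7, 19):
    pq.insert(x)
print(pq.pop_min(), pq.pop_min())  # 7 19
```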
You can generally reduce the problem to a finite alphabet by taking the finite subset that actually appears in the input.
If you have an unbounded length then you can make sorting O(l n), where `l` is a bound on the lengths of your input. It's still linear in n, and also better than the O(l n log n) you would get with traditional comparison-based algorithms once you factor in the O(l) complexity of the comparison function for such elements.
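A minimal sketch of that O(l n) claim (my own toy code, assuming equal-length byte strings over a 256-symbol alphabet; variable lengths need padding or an MSD variant): l stable bucketing passes of O(n) each, with no element-to-element comparisons at all.

```python
def radix_sort_strings(strings, alphabet_size=256):
    """LSD radix sort for equal-length byte strings: l passes of O(n) work,
    i.e. O(l * n) total, with no comparisons between whole elements."""
    if not strings:
        return []
    l = len(strings[0])
    assert all(len(s) == l for s in strings), "sketch assumes equal lengths"
    out = list(strings)
    for pos in range(l - 1, -1, -1):               # least-significant position first
        buckets = [[] for _ in range(alphabet_size)]
        for s in out:                              # stable bucketing: O(n) per pass
            buckets[s[pos]].append(s)
        out = [s for bucket in buckets for s in bucket]
    return out


print(radix_sort_strings([b"bca", b"abc", b"bab", b"aaa"]))
# [b'aaa', b'abc', b'bab', b'bca']
```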
If you don't have large numbers of repeats of each element, then l needs to scale like O(log n) (you need on the order of log n symbols just to distinguish n different values), so O(l * n) is at least O(n log n).
Fundamentally, what’s going on here is that switching between computation models can easily add and remove log factors.
I guess you mean "at least O(n*log(n))".
The transdichotomous model looks interesting.
The unstated implication is that the theory tracks with real-world behavior. This is more or less the case for simple problems: your O(n^2) algorithm won't scale, your database query without an index will take forever, and so on. It's definitely a useful and high-signal way of thinking about computational problems.
Obviously modern hardware isn't much like a '70s machine, and things like "all memory accesses are O(1)" are so wrong it's not even funny (quick demonstration below).
It's a pure thought experiment with no possible counterfactual, but I think if you tried to develop basic CS theory from a purely mathematical angle (e.g. consider a machine defined so and so with basic operations costing 1, let's run with this ball without caring much about the real world) you'd naturally end up with some (but not all) of the concepts we're used to. For example, arrays and buffers are very natural: we need more space than our basic unit of memory can accommodate, so let's use several in a row. Pointers also follow very nicely, and with them structures like linked lists. Other stuff like B-trees and to some extent hash-tables and friends very much less so; they're definitely "imported" from real-world usage.
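On the "all memory accesses are O(1)" point: a quick and admittedly crude way to see the gap between model and hardware is to do the same number of array reads in cache-friendly versus cache-hostile order. The exact ratio depends entirely on the machine; the point is only that identically counted operations don't cost the same.

```python
import time
import numpy as np

n = 20_000_000
data = np.arange(n, dtype=np.int64)
orders = {
    "sequential": np.arange(n),            # cache- and prefetcher-friendly order
    "random": np.random.permutation(n),    # same number of reads, hostile order
}

for name, idx in orders.items():
    t0 = time.perf_counter()
    total = data[idx].sum()                # n "O(1)" reads either way
    print(f"{name:>10}: {time.perf_counter() - t0:.2f}s (sum={total})")
```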
I'm sure I read this article a few days ago, word-for-word, but it wasn't on the site in the OP? It stood out because when it mentioned textbooks and said "including ours" I looked at the site and thought to myself "they do textbooks?".
Indeed: https://systemsapproach.org/books-html/
If you are short on money but do have time, and would like to get into networking, I can only highly recommend https://book.systemsapproach.org/
this was a lot of words that sum up to "I heard that a new algorithm exists but spent zero effort actually evaluating it"
https://arxiv.org/pdf/2504.17033 - Linked from the second sentence of the submission, not hard to track down. And the complexity (by which I presume you mean algorithmic complexity) is stated in the submission and in the PDF the submission links to, which I just shared with you.
https://youtu.be/3ge-AywiFxs?si=TbcRsBNkzGhpOxQ4&t=842
(timestamped) He shows a derivation that the best a sorting algorithm can do is O(n log(n)) for n real positive numbers (quick numeric check below).
(side note: does anyone else get thrown off by the Epilogue font? It looks very wide in some cases and very narrow in others... makes me want to override it with Stylus if my employer hadn't blocked browser extensions for security reasons, which raises the question of why I am even reading the article on this computer....)
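For anyone who doesn't want to watch the video: the derivation is presumably the standard counting argument for comparison sorts, where the algorithm has to distinguish all n! input orderings and therefore needs at least log2(n!) = Θ(n log n) comparisons. It only binds comparison-based sorting, which is exactly why the radix/trie approaches discussed above can beat it. A quick numeric check of the bound (my own, not from the video):

```python
import math

# Decision-tree lower bound for comparison sorting: distinguishing all n!
# orderings takes at least ceil(log2(n!)) comparisons, and Stirling gives
# log2(n!) ≈ n*log2(n) - n*log2(e), i.e. Θ(n log n).
for n in (10, 100, 1_000, 10_000):
    exact = math.lgamma(n + 1) / math.log(2)            # log2(n!) without overflow
    stirling = n * math.log2(n) - n * math.log2(math.e)
    print(f"n={n:>6}  log2(n!)≈{exact:12.1f}  Stirling≈{stirling:12.1f}")
```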
It's actually common for algorithms with a lower asymptotic complexity to be worse in practice; a classic example is matrix multiplication (see the Strassen sketch below).
Also please, please, can we stop with the "eww, math" reactions?
> The new approach claims order (m log^(2/3) n) which is clearly going to be less for large enough n. (I had to take a refresher course on log notation before I could even write that sentence with any confidence.)
I'm sure the author is just exaggerating, he's clearly very competent, but it's a sentence with the vibes of "I can't do 7x8 without a calculator."
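To illustrate the matrix multiplication point (my own toy example, not from the article): Strassen's algorithm is O(n^2.81) versus the schoolbook O(n^3), yet the extra additions, the worse cache behaviour, and a heavily tuned O(n^3) baseline (BLAS, reached via NumPy's `@` below) mean the asymptotically better algorithm only wins, if ever, at sizes far larger than intuition suggests. The sketch assumes square matrices whose side is a power of two, and the cutoff value is arbitrary.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen multiply for square matrices whose side is a power of two.
    Falls back to ordinary multiplication below `cutoff`, where the
    O(n^2.81) recursion stops paying for its extra additions."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven recursive products that replace the usual eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C


A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))  # True (up to floating-point error)
```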
e.g. for two-body gravity I can just do the math and get exact answers out. But for N > 2 bodies that doesn't work, and it's not that I need to think a bit harder or crack out some graduate textbooks to find a formula; if I did, hopefully the grad books would say "the three-body problem is generally not amenable to solution". I will need to do an approximation; exact answers are not available (except in a few edge cases).
If m > n (log n)^{1/3}, then this algorithm is slower.
For 1 million nodes, if the average degree is > 3.5, the new algorithm has worse complexity (ignoring unstated constant factors).
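A rough back-of-the-envelope check of that claim (my numbers: base-2 logs, Dijkstra taken as the Fibonacci-heap bound O(m + n log n), the new algorithm as the claimed O(m log^(2/3) n), and all constant factors pretended equal, which they certainly are not):

```python
import math

n = 1_000_000
L = math.log2(n)                      # ≈ 19.93

# Bounds being compared, constants ignored:
#   Dijkstra (Fibonacci heap): m + n*log n
#   New algorithm (claimed):   m * (log n)^(2/3)
# They cross where m = n*L / (L^(2/3) - 1).
crossover_m = n * L / (L ** (2 / 3) - 1)
print(f"bounds cross at m ≈ {crossover_m:,.0f} edges, i.e. m/n ≈ {crossover_m / n:.2f}")
# Different log bases or Dijkstra variants shift this a bit, but it lands in
# the same ballpark: for denser graphs, the old bound is the smaller one.
```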
A maximally sparse connected graph by mean degree (edge/node ratio) is any tree (mean degree ~1), not necessarily a linked list.
I struggle to see the point of your comment. The blog post does not say that the paper claims to be faster in practice. It is simply examining whether the new algorithm has any application in network routing; what is wrong with that?