Faster Than Dijkstra?
64 points | 3 days ago | 7 comments | systemsapproach.org | HN
vprcic
1 hour ago
[-]
Each time a discussion about sorting starts, I'm reminded of a "lively debate" I had with my boss/friend about the optimal sorting approach. He claimed it's O(n), pointing to counting sort as an example. This didn't sit well with me. A sorting algorithm, I insisted, should be defined something like "a function taking an unsorted array of elements and returning a sorted one". But it seems there is no agreed-upon definition, and you can have something like counting sort where you get an array in O(n) but still need O(m) to get the index of a specific element. I argued that then the best sorting algorithm is the do-nothing algorithm. It returns the initial array in O(1), but needs O(m log m) to get you the index of an element (it uses merge sort to do it).
reply
jerf
41 seconds ago
[-]
A fuller statement is that no sorting algorithm based on comparisons can complete in fewer than O(n log n) comparisons. There are sorting algorithms such as radix sort that can sort in O(n), but only when you can put numbers in a bucket in O(1) time and read them out at the end in some semblance of O(1) time, which turns out to be a pretty harsh set of restrictions in practice that tends to limit you to "small integers" and not much else.
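If it helps, here is a minimal LSD radix sort sketch in Python under exactly those assumptions (keys are non-negative integers that fit in a fixed number of bytes; the name and parameters are made up for illustration):

    def radix_sort(nums, key_bits=32):
        # LSD radix sort for non-negative ints: one pass per 8-bit digit,
        # so O(n * key_bits/8) total, with O(1) bucket puts and O(n) read-outs.
        for shift in range(0, key_bits, 8):
            buckets = [[] for _ in range(256)]
            for x in nums:
                buckets[(x >> shift) & 0xFF].append(x)
            nums = [x for b in buckets for x in b]
        return nums

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))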
reply
Epa095
1 minute ago
[-]
Usually when one talks about sorting, without specifying further, one means comparison sort [1], which indeed has an average-case lower bound of O(n*log(n)). In more special cases all kinds of other runtimes are possible.

1: https://en.wikipedia.org/wiki/Comparison_sort

reply
hinkley
9 minutes ago
[-]
I can’t think of a single time I’ve needed a sorted list of only numbers. It’s always numbers and something else, like names or dates. Maybe for median calculations, but I don’t even use those that much either. Especially in telemetry, where mean is easy and median is not.
reply
NooneAtAll3
1 hour ago
[-]
counting sort is O(n + W), where W is the largest value

if you don't care about W or it is essentially constant - then it can be dropped

but it is an input parameter that will change execution time
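A minimal counting sort sketch in Python that makes the dependence on W explicit (assuming non-negative integer keys; names are illustrative only):

    def counting_sort(nums):
        # O(n + W) time and O(W) extra space, where W = max(nums);
        # the count-array allocation alone is what brings W into the runtime.
        W = max(nums)
        counts = [0] * (W + 1)
        for x in nums:
            counts[x] += 1
        out = []
        for value, c in enumerate(counts):
            out.extend([value] * c)
        return out

    print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6]))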

reply
emil-lp
3 minutes ago
[-]
W is the span or range.
reply
myhf
33 minutes ago
[-]
"ordering" means arranging things in order by some metric.

"sorting" means assigning things into bins (which are usually ordered).

reply
emil-lp
3 minutes ago
[-]
This is news to me. Source?
reply
tpoacher
1 hour ago
[-]
You reminded me of "Sleep Sort" [0]

[0] https://news.ycombinator.com/item?id=2657277

reply
shiandow
2 hours ago
[-]
Interestingly, sorting is O(N) for a surprisingly large class of datatypes. Anything that behaves well with lexicographic sorting, really.

Supposing one uses a 'trie' as a priority queue, the inserts and pops are effectively constant.
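Roughly the idea, as a toy sketch in Python (names are made up; this sorts strings outright rather than acting as a general priority queue, but the per-key insert cost is the same):

    def trie_sort(words):
        # Insert every word into a character trie (O(len) per insert),
        # then emit keys with an in-order walk; for a fixed alphabet the
        # per-node child ordering is O(1), so total work ~ total characters.
        END = object()
        root = {}
        for w in words:
            node = root
            for ch in w:
                node = node.setdefault(ch, {})
            node[END] = node.get(END, 0) + 1

        out = []
        def walk(node, prefix):
            if END in node:
                out.extend([prefix] * node[END])
            for ch in sorted(k for k in node if k is not END):
                walk(node[ch], prefix + ch)
        walk(root, "")
        return out

    print(trie_sort(["banana", "apple", "ape", "apple"]))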

reply
qsort
2 hours ago
[-]
Only if you assume a finite alphabet and bounded length. Relax either and you're back to O(n log n) for a fully general solution. Examples of both: tuples and strings.

(There's also the problem of how you define your computational model. You can do better than O(n log n) in transdichotomous models. I'm assuming the hand-wavy, naive model the average algorithms class goes along with.)

reply
SkiFire13
1 hour ago
[-]
> Only if you assume a finite alphabet and bounded length

You can generally reduce the problem to a finite alphabet by taking the finite subset that actually appears in the input.

If you have an unbounded length then you can make sorting O(l n) where `l` is a bound on the lengths of your input. It's still linear in n, and also better than the O(l n log n) you would get with traditional comparison-based algorithms once you factor in the O(l) complexity of the comparison function for such elements.
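Something like an LSD radix pass per character position gives that bound. A sketch in Python, assuming equal-length byte strings (padding would be needed in general; names are illustrative):

    def lsd_string_sort(strings, length):
        # O(l * (n + 256)): one stable bucketing pass per character
        # position, from least significant to most significant.
        for pos in range(length - 1, -1, -1):
            buckets = [[] for _ in range(256)]
            for s in strings:
                buckets[ord(s[pos])].append(s)
            strings = [s for b in buckets for s in b]
        return strings

    print(lsd_string_sort(["bca", "abc", "cab", "aab"], 3))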

reply
amluto
1 hour ago
[-]
> O(l n)

If you don’t have large numbers of repeats of each element, then l needs to scale like O(log n) just to represent ~n distinct values, so O(l * n) is at least O(n log n).

Fundamentally, what’s going on here is that switching between computation models can easily add and remove log factors.

reply
SkiFire13
50 minutes ago
[-]
I think you're making some assumptions on n that I'm not making. I'm considering it to be the number of elements to sort, not the size of the input.
reply
robotpepi
1 hour ago
[-]
> so O(l * n) is at least O(n).

I guess you mean "at least O(n*log(n))".

reply
amluto
1 hour ago
[-]
Indeed, and thanks. I edited it :)
reply
ogogmad
2 hours ago
[-]
In the most extreme case of this, you can use Counting Sort. Tangentially, Spaghetti Sort makes me wonder which parts of CS (especially the data structures, like arrays) are objective and which are just accidental.

The transdichotomous model looks interesting.

reply
qsort
1 hour ago
[-]
If we're taking the pure math view, complexity analysis is agnostic to this. Strictly speaking you'd have to define how your abstract machine works: O(n log n) is (again, strictly speaking) literally just a set of functions. In practice we usually hand-wave this away (not without problems: arithmetic taking time O(log n) and hash table lookups taking time O(1) are not consistent with each other!)

The unstated implication is that the theory tracks with real world behavior. This is more or less the case for simple problems: your O(n^2) algorithm won't scale, your database query without an index will take forever, and so on. It's definitely a useful and high-signal way of thinking about computational problems.

Obviously modern hardware isn't much like a 70s machine and things like "all memory accesses are O(1)" are so wrong it's not even funny.

It's a pure thought experiment with no possible counterfactual, but I think if you tried to develop basic CS theory from a purely mathematical angle (e.g. consider a machine defined so and so with basic operations costing 1, let's run with this ball without caring much about the real world) you'd naturally end up with some (but not all) of the concepts we're used to. For example, arrays and buffers are very natural. We need more space than our basic unit of memory can accommodate; let's use several in a row. Pointers also follow very nicely, and with them structures like linked lists. Other stuff like B-trees and to some extent hash tables and friends much less so; they're definitely "imported" from real-world usage.

reply
alias_neo
3 hours ago
[-]
Déjà vu.

I'm sure I read this article a few days ago, word-for-word, but it wasn't on the site in the OP? It stood out because when it mentioned textbooks and said "including ours" I looked at the site and thought to myself "they do textbooks?".

reply
_bernd
1 hour ago
[-]
> and thought to myself "they do textbooks?".

Indeed: https://systemsapproach.org/books-html/

If you're short on money but do have time and would like to get into networking, I can only highly recommend https://book.systemsapproach.org/

reply
yorwba
2 hours ago
[-]
This submission was made three days ago and bumped two hours ago: https://news.ycombinator.com/submitted?id=drbruced
reply
NooneAtAll3
1 hour ago
[-]
am I missing something?

this was a lot of words that sum up to "I heard that a new algorithm exists but spent zero effort actually evaluating it"

reply
WoodenChair
53 minutes ago
[-]
No, I don't think you're missing anything. He never answered the title of the post ("Faster Than Dijkstra?"). Instead he went on a huge tangent about his experience writing software for routers, and he is dismissive of the algorithm because the router problem space he was working in did not deal with a node count high enough to warrant a more complex algorithm. Dijkstra's algorithm is used for problem spaces with a far higher number of nodes than he mentions... basically an article that touches on some interesting things but doesn't say much about its central question.
reply
moffkalast
23 minutes ago
[-]
What I'm missing is certainly what the hell the algorithm even is and what its complexity is. This guy just rambles about old switches.
reply
Jtsummers
4 minutes ago
[-]
> What I'm missing is certainly what the hell the algorithm even is and what is its complexity.

https://arxiv.org/pdf/2504.17033 - Linked from the second sentence of the submission, not hard to track down. And the complexity (by which I presume you mean algorithmic complexity) is stated in the submission and in the PDF linked by the submission and that I just shared with you.

reply
K0IN
1 hour ago
[-]
I can only recommend (for all Germans here) this video from dorfuchs:

https://youtu.be/3ge-AywiFxs?si=TbcRsBNkzGhpOxQ4&t=842

(timestamped) He shows a derivation that the best a sorting algorithm can do is O(n log(n)) for n real positive numbers.
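For reference, the usual counting argument (which I assume is roughly what the video derives): a comparison sort must distinguish all n! input orderings, and each comparison has only two outcomes, so it needs at least log2(n!) ≈ n·log2(n) comparisons. A quick numerical check in Python:

    import math

    n = 1000
    # A comparison sort must distinguish all n! orderings with yes/no answers,
    # so it needs at least log2(n!) comparisons -- which grows like n*log2(n).
    print(math.lgamma(n + 1) / math.log(2))   # log2(n!)  ~ 8529
    print(n * math.log2(n))                   # n*log2(n) ~ 9966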

reply
drfuchs
55 minutes ago
[-]
From who now?
reply
jason_s
3 hours ago
[-]
Intriguing article. Sometimes practical issues override theoretical ones, and it would be interesting to see which one dominates in networking.

(side note: does anyone else get thrown off by the Epilogue font? It looks very wide in some cases and very narrow in others... it would make me want to override it with Stylus, if my employer hadn't blocked browser extensions for security reasons, which raises the question of why I am even reading the article on this computer....)

reply
qsort
3 hours ago
[-]
I struggle to see the point. The paper in question doesn't claim to be practically faster or to want to "replace" Dijkstra; they are just saying "we got a better big-O", and I don't see any reason to doubt them on that.

It's actually common for algorithms with a lower asymptotic complexity to be worse in practice, a classic example is matrix multiplication.

Also please, please, can we stop with the "eww, math" reactions?

> The new approach claims order (m log^(2/3) n) which is clearly going to be less for large enough n. (I had to take a refresher course on log notation before I could even write that sentence with any confidence.)

I'm sure the author is just exaggerating, he's clearly very competent, but it's a sentence with the vibes of "I can't do 7x8 without a calculator."

reply
yborg
3 hours ago
[-]
The Quanta article on the paper was considerably more breathless in describing a fine piece of work in mathematics. The author here points out that one of the things that makes Dijkstra's result iconic is that it could be used practically in a straightforward way. As an engineer, I find beautiful mathematics useless if I can't convert it to running code.
reply
tialaramex
2 hours ago
[-]
Actually there's a whole bunch of mathematics which I find useful as an engineer because it tells me that the perfection I have vaguely imagined I could reach for is literally not possible and so I shouldn't expend any effort on that.

E.g. for two-body gravity I can just do the math and get exact answers out. But for N > 2 bodies that doesn't work, and it's not that I need to think a bit harder or maybe crack out some graduate textbooks to find a formula; if I did, hopefully the grad books would say "the three-body problem is generally not amenable to solution". I will need to do an approximation; exact answers are not available (except in a few edge cases).

reply
shermantanktop
3 hours ago
[-]
I read it as a musing on the folly of improvements that don’t deliver benefits within the practical bounds of actual problems. Which is a lesson seen everywhere in physical systems and manufacturing. Is it an amazing insight? No, but it’s a lesson that is relearned by everyone several times.
reply
gowld
3 hours ago
[-]
More important is that the new algorithm has a multiplicative factor in m (edges), so it's only efficient for extremely sparse graphs.

If m > n (log n)^{1/3}, then this algorithm is slower.

For 1 million nodes, if the average degree is > 3.5, the new algorithm has worse complexity (ignoring unstated constant factors).
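A rough sanity check in Python, assuming Dijkstra at the Fibonacci-heap bound O(m + n log n), base-2 logs, and no constant factors (the exact threshold shifts with those assumptions):

    import math

    n = 1_000_000
    log_n = math.log2(n)

    # New bound ~ m * log(n)^(2/3); Dijkstra (Fibonacci heap) ~ m + n*log(n).
    # They cross where m * log(n)^(2/3) = m + n*log(n), i.e.
    # m = n*log(n) / (log(n)^(2/3) - 1).
    m_crossover = n * log_n / (log_n ** (2 / 3) - 1)
    print(m_crossover / n)   # average degree at the crossover, ~3.1 here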

reply
bee_rider
3 hours ago
[-]
Yeah, just based on this article, that really stood out. It seems to be for a different use case than Dijkstra's. An average degree of 3.5 seems like an extremely practical and useful use case in real life; I just don't see any reason to put it and Dijkstra's against each other in a head-to-head comparison.
reply
usrusr
3 hours ago
[-]
"Any sufficiently sparse graph is indistinguishable from a linked list" comes to mind ;)
reply
gowld
48 minutes ago
[-]
A linked list is sparse by the metric of minimum maximum degree (2).

A maximally sparse connected graph, measured by the edge/node ratio, is any tree (ratio ~ 1), not necessarily a linked list.

reply
mightyham
3 hours ago
[-]
> I struggle to see the point. The paper in question doesn't claim to be practically faster...

I struggle to see the point of your comment. The blog post in question does not say that the paper claims to be faster in practice. It is simply examining whether the new algorithm has any application in network routing; what is wrong with that?

reply