Faster Than Dijkstra? (systemsapproach.org)
68 points by drbruced 3 days ago | 8 comments
vprcic 2 hours ago
Each time a discussion about sorting starts, I'm reminded of a "lively debate" I had with my boss/friend about the optimal sorting approach. He claimed it's O(n), pointing to counting sort as an example. This didn't sit well with me. A sorting algorithm, I insisted, should be defined something like "a function taking an unsorted array of elements and returning a sorted one". But it seems there is no agreed-upon definition, and you can have something like counting sort where you get an array in O(n) but still need O(m) to get the index of a specific element. I argued that by that logic the best sorting algorithm is the do-nothing algorithm: it returns the initial array in O(1), but needs O(m log m) to get you the index of an element (it uses merge sort to do it).
Epa095 43 minutes ago
Usually, when one talks about sorting without specifying further, one means comparison sort [1], which indeed has an average-case lower bound of Ω(n*log(n)). In more specialized cases all kinds of other runtimes are possible.

1: https://en.wikipedia.org/wiki/Comparison_sort

NooneAtAll3 1 hour ago
counting sort is O(nW), where W is the largest value

if you don't care about W or it is essentially constant - then it can be dropped

but it is an input parameter that will change execution time

marcosdumay9 minutes ago
It's O(n+W), not O(n*W).

> if you don't care about W or it is essentially constant - then it can be dropped

Also, every algorithm that terminates on a real computer is bounded by a constant time, because its inputs are bounded. Treating that bound as a constant is still not a practical thing to do.
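For concreteness, a minimal counting-sort sketch (Python, my own variable names): one pass over the n inputs to count, one pass over the W+1 possible values to emit, hence O(n + W).

    def counting_sort(xs, W):
        # xs holds non-negative integers, each at most W
        counts = [0] * (W + 1)
        for x in xs:                        # O(n) counting pass
            counts[x] += 1
        out = []
        for v, c in enumerate(counts):      # O(W) pass over the value range
            out.extend([v] * c)             # the extends add up to n items total
        return out

    print(counting_sort([3, 1, 4, 1, 5], W=5))   # [1, 1, 3, 4, 5]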

emil-lp45 minutes ago
W is the span or range.
hinkley51 minutes ago
I can’t think of a single time I’ve needed a sorted list of only numbers. It’s always numbers and something else, like names or dates. Maybe for median calculations, but I don’t even use those that much either. Especially in telemetry, where mean is easy and median is not.
pvillano34 minutes ago
To be pedantic, median is cheaper than sorting: O(n) expected with a quickselect-style (quicksort-like) algorithm.

Also, if you're taking an average of floating point numbers, you might want to sort them first and add from smallest to largest, to better preserve precision
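A sketch of both points in Python (my own code, not anything from the thread): quickselect finds the median in expected O(n) without fully sorting, and the tiny summation example shows why adding small values first helps.

    import random

    def quickselect(xs, k):
        # k-th smallest element (0-based), expected O(n) time.
        xs = list(xs)
        while True:
            pivot = random.choice(xs)
            lo = [x for x in xs if x < pivot]
            eq = [x for x in xs if x == pivot]
            if k < len(lo):
                xs = lo
            elif k < len(lo) + len(eq):
                return pivot
            else:
                k -= len(lo) + len(eq)
                xs = [x for x in xs if x > pivot]

    data = [9.0, 2.0, 7.0, 4.0, 1.0]
    print(quickselect(data, len(data) // 2))   # 4.0, the median, no full sort needed

    # Adding from smallest to largest keeps small terms from being absorbed
    # by a large running total.
    vals = [1e16] + [1.0] * 1000
    print(sum(vals))            # 1e+16  (all the 1.0s are lost)
    print(sum(sorted(vals)))    # 1.0000000000001e+16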

vlovich123 30 minutes ago
If the primary key is the number, it still works (and dates are just numbers, by the way) because you can sort a heterogeneous dataset by a single numeric key pretty trivially.

But sorting by arbitrary strings like names can’t avoid comparison sort.
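To illustrate the first point (Python, made-up records): only the numeric key is ever examined, so any integer-keyed technique (counting/radix/bucket) applies to the whole record, not just comparison sort.

    import datetime

    # Heterogeneous records; the sort key is just a number (the date's ordinal).
    records = [
        ("carol", datetime.date(2023, 5, 1)),
        ("alice", datetime.date(2021, 1, 15)),
        ("bob",   datetime.date(2022, 7, 9)),
    ]
    by_date = sorted(records, key=lambda r: r[1].toordinal())
    print([name for name, _ in by_date])   # ['alice', 'bob', 'carol']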

myhf1 hour ago
"ordering" means arranging things in order by some metric.

"sorting" means assigning things into bins (which are usually ordered).

emil-lp44 minutes ago
This is news to me. Source?
tpoacher1 hour ago
You reminded me of "Sleep Sort" [0]

[0] https://news.ycombinator.com/item?id=2657277

fn-mote27 minutes ago
The post should have a more conspicuous link to the Quanta article that it references, if not to the original paper. (It's there, just not on the top.)

https://www.quantamagazine.org/new-method-is-the-fastest-way...

https://arxiv.org/abs/2504.17033

Jtsummers20 minutes ago
He links to those in the first two sentences. How is that not "on the top"?
alias_neo3 hours ago
Deja Vu.

I'm sure I read this article a few days ago, word for word, but it wasn't on the site in the OP? It stood out because when it mentioned textbooks and said "including ours" I looked at the site and thought to myself "they do textbooks?".

_bernd2 hours ago
> and thought to myself "they do textbooks?".

Indeed: https://systemsapproach.org/books-html/

If you're short on money but have time and would like to get into networking, I can highly recommend https://book.systemsapproach.org/

_bernd 31 minutes ago
yorwba3 hours ago
This submission was made three days ago and bumped two hours ago: https://news.ycombinator.com/submitted?id=drbruced
NooneAtAll3 1 hour ago
am I missing something?

this was a lot of words that sum up to "I heard that new algorithm exists but spent zero effort actually evaluating it"

WoodenChair1 hour ago
No, I don't think you're missing anything. He never answered the title question of the post ("Faster Than Dijkstra?"). Instead he went on a huge tangent about his experience writing software for routers and is dismissive of the algorithm because the router problem space he was working in didn't deal with a node count high enough to warrant a more complex algorithm. Dijkstra's algorithm is used for problem spaces with a far higher number of nodes than he mentions... basically it's an article that talks about some interesting things but doesn't say much about its central question.
moffkalast1 hour ago
What I'm missing is certainly what the hell the algorithm even is and what is its complexity. This guy just rambles about old switches.
Jtsummers46 minutes ago
> What I'm missing is certainly what the hell the algorithm even is and what is its complexity.

https://arxiv.org/pdf/2504.17033 - Linked from the second sentence of the submission, not hard to track down. And the complexity (by which I presume you mean algorithmic complexity) is stated in the submission and in the PDF linked by the submission and that I just shared with you.

moffkalast18 minutes ago
I did eventually find it, yes, after sifting through the rest of the useless links: the quantamagazine article that says jack shit, the link to the ACM symposium call for submissions (lmao). Like come on, why label that "underlying research"?

And all of that was wasted time since it seems that this just isn't at all applicable to A* heuristics the way Dijkstra's is. It's only an improvement in a very specific case.

shiandow3 hours ago
Interestingly sorting is O(N) for a surprisingly large class of datatypes. Anything that behaves well with lexicographic sorting really.

Supposing one uses a 'trie' as a priority queue, the inserts and pops are effectively constant.
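Not a full trie, but a minimal sketch of the same idea in Python, assuming small non-negative integer priorities and non-decreasing pops (as in Dijkstra with bounded integer weights); both operations are effectively constant:

    class BucketQueue:
        # One bucket per possible priority; pops must be non-decreasing.
        def __init__(self, max_priority):
            self.buckets = [[] for _ in range(max_priority + 1)]
            self.cursor = 0

        def push(self, priority, item):      # O(1)
            self.buckets[priority].append(item)

        def pop_min(self):                   # O(1) amortized: the cursor only moves forward
            while not self.buckets[self.cursor]:
                self.cursor += 1
            return self.buckets[self.cursor].pop()

    q = BucketQueue(max_priority=9)
    for p, item in [(3, "c"), (1, "a"), (3, "d"), (7, "e")]:
        q.push(p, item)
    print(q.pop_min(), q.pop_min())   # a d  (ties within a bucket pop in LIFO order)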

qsort2 hours ago
Only if you assume a finite alphabet and bounded length. Relax either and you're back to O(n log n) for a fully general solution. Examples of both: tuples and strings.

(There's also the problem of how you define your computational model. You can do better than O(n log n) in transdichotomous models. I'm assuming the hand-wavy, naive model the average algorithms class goes along with.)

shiandow10 minutes ago
Both are true in practice, so not unreasonable. For graph weights that is, not sorting.

That said the finite alphabet and bounded length requirements can be softened a bit. Even for general sorting algorithms.

I mean, for the kind of lexicographically sortable data we're talking about, you can basically pick a convenient alphabet size without cost.

And unbounded length is not that big an obstruction. Sure you are going to need O(n log(n)) comparisons. But you can't compare data of unbounded length in constant time anyway. In the end you end up taking an amount of time that is at least proportional to the amount of data, which is optimal up to a constant factor. And if you fiddle with radix sort enough you can get it within something similar.

Basic ASCII strings and tuples aren't that big an obstruction. Fractions are more complicated.

Really, the O(n log(n)) for comparison-based sorts and the O(N) for radix sort mean something different. One is the number of comparisons relative to the number of elements, and the other is closer to the number of operations per amount of data. Though that assumes O(1) swaps, which is technically incorrect for data that doesn't fit in a 64-bit word.

SkiFire13 2 hours ago
> Only if you assume a finite alphabet and bounded length

You can generally reduce the problem to a finite alphabet by taking the finite subset that actually appears in the input.

If you have an unbounded length then you can make sorting O(l n) where `l` is a bound on the lengths that actually occur in your input. It's still linear in n, and also better than the O(l n log n) you would get with traditional comparison-based algorithms once you factor in the O(l) complexity of the comparison function for such elements.
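A sketch of that O(l n) bound in Python, assuming ASCII strings of equal length (LSD radix sort: l stable bucket passes, one per character position):

    def radix_sort_fixed(strings, length):
        # LSD radix sort for strings that all have the given length.
        # `length` passes, each a stable bucket pass: O(l * (n + alphabet_size)).
        for pos in range(length - 1, -1, -1):
            buckets = [[] for _ in range(256)]     # one bucket per byte value
            for s in strings:
                buckets[ord(s[pos])].append(s)
            strings = [s for bucket in buckets for s in bucket]
        return strings

    print(radix_sort_fixed(["bca", "abc", "bab", "aaa"], 3))
    # ['aaa', 'abc', 'bab', 'bca']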

CaptainNegative18 minutes ago
> You can generally reduce the problem to a finite alphabet by taking the finite subset that actually appears in the input.

You can generally sort any array in constant time by taking that constant to be the time it takes to sort the array using bubble sort.

amluto2 hours ago
> O(l n)

If you don’t have large numbers of repeats of each element, then l needs to scale like O(log n), so O(l * n) is at least O(n log n).

Fundamentally, what’s going on here is that switching between computation models can easily add and remove log factors.
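The missing step, spelled out as a back-of-the-envelope bound (assuming an alphabet of size k and mostly distinct keys):

    % n distinct keys of length l over an alphabet of size k need k^l >= n,
    % so for fixed k the key length is at least logarithmic in n:
    \[
      k^{l} \ge n
      \;\Longrightarrow\;
      l \ge \log_k n
      \;\Longrightarrow\;
      l \cdot n \ge n \log_k n = \Theta(n \log n).
    \]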

SkiFire13 1 hour ago
I think you're making some assumptions on n that I'm not making. I'm considering it to be the number of elements to sort, not the size of the input.
robotpepi1 hour ago
> so O(l * n) is at least O(n).

I guess you mean "at least O(n*log(n))".

amluto1 hour ago
Indeed, and thanks. I edited it :)
ogogmad2 hours ago
In the most extreme case of this, you can use Counting Sort. Tangential to this, Spaghetti Sort makes me wonder about which parts of CS (especially the data structures, like arrays) are objective or just accidental.

The transdichotomous model looks interesting.

qsort2 hours ago
If we're taking the pure math view, complexity analysis is agnostic to this. Strictly speaking you'd have to define how your abstract machine works: O(n log n) is (again, strictly speaking) literally just a set of functions. In practice we usually hand-wave this away (not without problems: arithmetic taking time O(log n) and hash table lookups taking time O(1) are not consistent with each other!)

The unstated implication is that the theory tracks with real-world behavior. This is more or less the case for simple problems: your O(n^2) algorithm won't scale, your database query without an index will take forever, and so on. It's definitely a useful and high-signal way of thinking about computational problems.

Obviously modern hardware isn't much like a 70s machine and things like "all memory accesses are O(1)" are so wrong it's not even funny.

It's a pure thought experiment with no possible counterfactual, but I think if you tried to develop basic CS theory from a purely mathematical angle (e.g. consider a machine defined so and so with basic operations costing 1, and let's run with this ball without caring much about the real world) you'd naturally end up with some (but not all) of the concepts we're used to. For example, arrays and buffers are very natural: we need more space than our basic unit of memory can accommodate, so let's use several in a row. Pointers also follow very nicely, and with them structures like linked lists. Other stuff like B-trees, and to some extent hash tables and friends, much less so; they're definitely "imported" from real-world usage.

jason_s3 hours ago
Intriguing article. Sometimes practical issues override theoretical ones, and it would be interesting to see which one dominates in networking.

(side note: does anyone else get thrown off by the Epilogue font? It looks very wide in some cases and very narrow in others... it makes me want to override it with Stylus, except my employer has blocked browser extensions for security reasons, which raises the question of why I am even reading the article on this computer....)

K0IN 2 hours ago
I can only recommend (for all Germans here) this video from "dorfuchs":

https://youtu.be/3ge-AywiFxs?si=TbcRsBNkzGhpOxQ4&t=842

(Timestamped.) He shows a derivation that the best a sorting algorithm can do is O(n log(n)) for n real positive numbers.
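For reference, the standard decision-tree argument behind that bound (not necessarily the exact derivation in the video) in a few lines of LaTeX:

    % A comparison sort is a binary decision tree with at least n! leaves
    % (one per input permutation); its height h is the worst-case number of
    % comparisons, so
    \[
      2^{h} \ge n!
      \quad\Longrightarrow\quad
      h \ge \log_2 (n!) = \Theta(n \log n),
    \]
    % where the last step uses Stirling's approximation for \log(n!).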

drfuchs1 hour ago
From who now?
qsort4 hours ago
I struggle to see the point. The paper in question doesn't claim to be practically faster or to want to "replace" Dijkstra; they are just saying "we got a better big-O", and I don't see any reason to think they're wrong about that.

It's actually common for algorithms with a lower asymptotic complexity to be worse in practice, a classic example is matrix multiplication.

Also please, please, can we stop with the "eww, math" reactions?

> The new approach claims order (m log^(2/3) n) which is clearly going to be less for large enough n. (I had to take a refresher course on log notation before I could even write that sentence with any confidence.)

I'm sure the author is just exaggerating, he's clearly very competent, but it's a sentence with the vibes of "I can't do 7x8 without a calculator."

yborg4 hours ago
The Quanta article on the paper was considerably more breathless in describing a fine piece of work in mathematics. The author here points out that one of the things that makes Dijkstra's result iconic is that it could be used practically in a straightforward way. As an engineer, beautiful mathematics is useless if I can't convert it to running code.
tialaramex2 hours ago
Actually there's a whole bunch of mathematics which I find useful as an engineer because it tells me that the perfection I have vaguely imagined I could reach for is literally not possible and so I shouldn't expend any effort on that.

E.g. for two-body gravity I can just do the math and get exact answers out. But for N > 2 bodies that doesn't work, and it's not that I need to think a bit harder or crack out some graduate textbooks to find a formula; if I did, the grad books would hopefully say "the three-body problem is generally not amenable to solution". I will need to do an approximation; exact answers are not available (except in a few edge cases).

tzs40 minutes ago
> Actually there's a whole bunch of mathematics which I find useful as an engineer because it tells me that the perfection I have vaguely imagined I could reach for is literally not possible and so I shouldn't expend any effort on that

That's actually given as a reason to study NP-completeness in the classic 1979 book "Computers and Intractability: A Guide to the Theory of NP-Completeness" by Garey & Johnson, which is one of the most cited references in computer science literature.

Chapter one starts with a fictional example. Say you have been trying to develop an algorithm at work that validates designs for new products. After much work you haven't found anything better than exhaustive search, which is too slow.

You don't want to tell your boss "I can't find an efficient algorithm. I guess I'm just too dumb".

What you'd like to do is prove that the problem is inherently intractable, so you could confidently tell your boss "I can't find an efficient algorithm, because no such algorithm is possible!".

Unfortunately, the authors note, proving intractability is also often very hard. Even the best theoreticians have been stymied trying to prove commonly encountered hard problems are intractable. That's where the book comes in:

> However, having read this book, you have discovered something almost as good. The theory of NP-completeness provides many straightforward techniques for proving that a given problem is “just as hard” as a large number of other problems that are widely recognized as being difficult and that have been confounding the experts for years.

Using the techniques from the book you prove the problem is NP-complete. Then you can go to your boss and announce "I can't find an efficient algorithm, but neither can all these famous people". The authors note that at the very least this informs your boss that it won't do any good to fire you and hire another algorithms expert. They go on:

> Of course, our own bosses would frown upon our writing this book if its sole purpose was to protect the jobs of algorithm designers. Indeed, discovering that a problem is NP-complete is usually just the beginning of work on that problem.

...

> However, the knowledge that it is NP-complete does provide valuable information about what lines of approach have the potential of being most productive. Certainly the search for an efficient, exact algorithm should be accorded low priority. It is now more appropriate to concentrate on other, less ambitious, approaches. For example, you might look for efficient algorithms that solve various special cases of the general problem. You might look for algorithms that, though not guaranteed to run quickly, seem likely to do so most of the time. Or you might even relax the problem somewhat, looking for a fast algorithm that merely finds designs that meet most of the component specifications. In short, the primary application of the theory of NP-completeness is to assist algorithm designers in directing their problem-solving efforts toward those approaches that have the greatest likelihood of leading to useful algorithms.

shermantanktop4 hours ago
I read it as a musing on the folly of improvements that don’t deliver benefits within the practical bounds of actual problems. Which is a lesson seen everywhere in physical systems and manufacturing. Is it an amazing insight? No, but it’s a lesson that is relearned by everyone several times.
gowld4 hours ago
More important is that the new algorithm has a multiplicative factor in m (edges), so it's only efficient for extremely sparse graphs.

If m > n (log n)^{1/3}, then this algorithm is slower.

For 1 million nodes, if the average degree is > 3.5, the new algorithm has worse complexity (ignoring unstated constant factors).
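A quick sanity check of that crossover in Python, ignoring the constant factors both analyses hide and comparing against the textbook m + n log n bound for Dijkstra with a good heap (so the exact break-even degree shifts with the log base and constants):

    import math

    n = 1_000_000
    log_n = math.log2(n)

    for avg_degree in (2, 3, 3.5, 4, 8):
        m = avg_degree * n
        dijkstra = m + n * log_n             # O(m + n log n)
        new_algo = m * log_n ** (2 / 3)      # O(m log^(2/3) n), as quoted in the article
        faster = "new" if new_algo < dijkstra else "Dijkstra"
        print(f"avg degree {avg_degree}: {faster} wins "
              f"({new_algo:.2e} vs {dijkstra:.2e})")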

bee_rider3 hours ago
Yeah, just based on this article that really stood out. It seems to be for a different use case than Dijkstra's. An average degree of 3.5 seems like an extremely practical and useful use case in real life; I just don't see any reason to put it and Dijkstra's against each other in a head-to-head comparison.
usrusr3 hours ago
"Any sufficiently sparse graph is indistinguishable from a linked list" comes to mind ;)
gowld1 hour ago
A linked list is sparse by the metric of minimum maximum degree (2).

A maximally sparse connected graph by mean degree (edge/node ratio) is any tree (~ 1), not necessarily a linked list.

mightyham3 hours ago
> I struggle to see the point. The paper in question doesn't claim to be practically faster...

I struggle to see the point of your comment. The blog post in question does not say that the paper claims to be faster in practice. It simply examines whether the new algorithm has any application in network routing; what is wrong with that?