Oh jeez now I have to read the rest.
More people need to read Blelloch's PhD thesis, "Vector Models for Data-Parallel Computing." It's a mind-blowing way to think about parallel computation.
He's perhaps one of the best parallel programming / parallel data structures professors on the planet.
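The centerpiece of that thesis is treating the prefix sum ("scan") as a primitive vector operation. Here's a minimal sketch (my own sequential simulation, not code from the thesis) of the work-efficient exclusive scan commonly attributed to Blelloch; each inner loop corresponds to one parallel step, and it assumes the input length is a power of two:

    def blelloch_scan(a):
        # exclusive prefix sum via up-sweep / down-sweep; len(a) must be a power of two
        a = list(a)
        n = len(a)
        # up-sweep (reduce): build partial sums up a balanced tree
        step = 1
        while step < n:
            for i in range(step * 2 - 1, n, step * 2):
                a[i] += a[i - step]
            step *= 2
        # down-sweep: push prefixes back down, turning the tree into an exclusive scan
        a[n - 1] = 0
        step = n // 2
        while step >= 1:
            for i in range(step * 2 - 1, n, step * 2):
                left = a[i - step]
                a[i - step] = a[i]
                a[i] += left
            step //= 2
        return a

    print(blelloch_scan([3, 1, 7, 0, 4, 1, 6, 3]))   # [0, 3, 4, 11, 11, 15, 16, 22]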
------
Awwww, it's not so much about Blelloch's work; rather, I suspect he's probably the guy ACM had help explain and understand this new paper on the Bookshelf problem. Still a great read, though I was hoping for some crazy parallel programming application here.
"Even though many real-world data settings are not adversarial, situations without an adversary can still sometimes involve sudden floods of data to targeted spots, she noted."
This is pretty neat. I bet this will find practical applications.
Adjacent to this topic are algorithms for two-player games, like minimax, which depend on imagining an adversary that plays perfect counter-moves.
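A minimal sketch of that idea on a toy game (Nim-like: players alternate taking 1 or 2 stones, and whoever takes the last stone wins). The "adversary" is just the assumption that the minimizer always picks the move that is worst for the maximizer:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def minimax(stones, maximizing):
        if stones == 0:
            # the previous player took the last stone, so the side to move has lost
            return -1 if maximizing else +1
        moves = [m for m in (1, 2) if m <= stones]
        scores = [minimax(stones - m, not maximizing) for m in moves]
        return max(scores) if maximizing else min(scores)

    print(minimax(6, True))   # -1: six stones is a lost position for the side to move
    print(minimax(7, True))   # +1: seven stones is winning (take 1, leave 6)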
In a similar vein, in ML there is a class of models called generative adversarial networks (GANs), in which two networks (a generator and a discriminator) play a minimax game against each other, improving the capabilities of both at once.
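For reference, this is the standard GAN objective from Goodfellow et al. (2014); the discriminator D tries to maximize the same value the generator G tries to minimize:

    \min_G \max_D \; V(D, G) =
        \mathbb{E}_{x \sim p_{\text{data}}}\bigl[\log D(x)\bigr]
      + \mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]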
As a general rule, any algorithm that involves a hash or a random/arbitrary choice has historically been analyzed under "assume no adversary," and even now has only advanced to "assume an incompetent adversary."
By contrast, most tree-adjacent algorithms have always been vigilant against competent adversaries.
Quicksort, mergesort, and heapsort are commonly analyzed in terms of worst-case, adversary-chosen inputs.
I know that binary trees (especially red-black trees, AVL trees, and other self-balancing trees) have been studied extensively against adversaries picking the worst-case scenario.
And finally, error-correcting codes / Hamming distances and other data-reliability mechanisms (e.g., CRC32 checks) have proofs based on worst-case adversary bounds.
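To make the quicksort point concrete, here's a small sketch (my own, not from the article) of the classic adversarial input: with a naive first-element pivot, an already-sorted array forces the quadratic worst case. (McIlroy's "A Killer Adversary for Quicksort" goes further and constructs the bad input dynamically, during the sort itself.)

    import random

    def quicksort(xs, counter):
        # naive quicksort: first element as pivot; count one partitioning comparison per remaining element
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        counter[0] += len(rest)
        left  = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return quicksort(left, counter) + [pivot] + quicksort(right, counter)

    n = 500
    c = [0]; quicksort(random.sample(range(n), n), c)
    print("shuffled input:", c[0], "comparisons")   # on the order of n log n (a few thousand)
    c = [0]; quicksort(list(range(n)), c)
    print("sorted input:  ", c[0], "comparisons")   # n*(n-1)/2 = 124750, the quadratic worst case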
-------
If anything, I'm struggling to think of a case where adversarial / worst-case performance is NOT analyzed. In many cases, worst-case bounds are easier to prove than average-case bounds... so I'd assume most people start with worst-case analysis before moving on to average-case analysis.
For some types of problems, identifying worst-case behavior is straightforward. For example, in a hash table lookup the worst case is when all keys hash to the same value. To me, it seems like overkill to think in terms of an intelligent adversary in that case.
But in the problem described here, the worst case is harder to construct, especially while exploring the solution space, since slight tweaks to the solution can significantly change the nature of the worst case. Thinking of it as adversarial means thinking in terms of algorithms that dynamically produce the worst case, rather than trying to identify a static worst case specific to one solution. I can imagine that approach significantly speeding up the search for better solutions.
Here is a 2011 article about DoS attacks against web apps enabled by hash-table-based dicts: https://www.securityweek.com/hash-table-collision-attacks-co...
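To see concretely what those attacks exploited: if an attacker can make every key hash to the same value, each insert has to step past all the earlier colliding keys, so n inserts cost O(n^2). A minimal Python sketch (the Collider class is made up for illustration; the real attacks computed colliding strings against the language's string hash):

    import time

    class Collider:
        # every instance hashes to the same value; equality still distinguishes them,
        # so the dict must re-compare against earlier colliding entries on each insert
        def __init__(self, v): self.v = v
        def __hash__(self): return 42
        def __eq__(self, other): return self.v == other.v

    for n in (500, 1000, 2000):
        start = time.perf_counter()
        d = {Collider(i): None for i in range(n)}
        print(n, "inserts:", round(time.perf_counter() - start, 3), "s")  # roughly 4x slower each time n doubles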
djb has long advocated "crit-bit trees", i.e. tries: https://cr.yp.to/critbit.html
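For illustration, here's a much-simplified sketch of the trie idea (a plain bitwise trie, not the path-compressed crit-bit structure djb describes): lookups and inserts walk one node per key bit, so their cost is bounded by the key length no matter how adversarially the key set was chosen; there's no hash function to flood.

    class BitTrie:
        # simplified bitwise trie; crit-bit trees compress paths so only the
        # "critical" differing bits get a node
        def __init__(self):
            self.root = {}

        def _bits(self, key: bytes):
            for byte in key:
                for shift in range(7, -1, -1):
                    yield (byte >> shift) & 1

        def insert(self, key: bytes, value):
            node = self.root
            for bit in self._bits(key):
                node = node.setdefault(bit, {})
            node["value"] = value              # terminal marker holds the payload

        def lookup(self, key: bytes):
            node = self.root
            for bit in self._bits(key):
                if bit not in node:
                    return None
                node = node[bit]
            return node.get("value")

    t = BitTrie()
    t.insert(b"cat", 1)
    t.insert(b"car", 2)
    print(t.lookup(b"cat"), t.lookup(b"car"), t.lookup(b"cow"))   # 1 2 None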