Some estimates are that 11% of the world's population was killed during this time.
My first thought when I started reading TFA was that the list of catastrophes to consider would be biased, because more recent events have better records. Maybe that's why the author decided on the 1500 cutoff?
If you restrict to the single "worst" decade then the Mongol invasions would have been high enough to make the list, but I didn't want to start making too many manual adjustments to the data, so I left it as-is.
Leaving wars aside, a lot of nasty things happened to humanity before 500 BCE, of course, including the Late Bronze Age collapse (around 1200 BCE), which may have been close to worldwide in extent. Dating tech has advanced greatly recently, so physical causes might eventually be known (quakes, volcanoes, climate change, floods...). But estimating deaths is hard.
P.S. Those who've avoided the book 'Earth in Upheaval' for some reason (FO Velikovsky?) might consider checking out the first few chapters, which are full of voluminous evidence of past, non-human catastrophes. Regardless of the 'explanations', it's an impressive laundry list. (Now available in AI-read audiobook form.)
From the paper: "Accordingly, the public intellectual arena has witnessed active debates, such as the one between Steven Pinker on one side, and John Gray on the other concerning the hypothesis that the long peace was a statistically established phenomenon or a mere statistical sampling error that is characteristic of heavy-tailed processes, [16] and [27]–the latter of which is corroborated by this paper."
Scaling theory of armed-conflict avalanches https://www.santafe.edu/news-center/news/avalanche-violence-...
Podcast: https://complexity.simplecast.com/episodes/39-9ugXDtkC
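The sampling-error argument is easy to see with a toy simulation: under a stationary heavy-tailed severity process, decades-long stretches with no "great war" arise routinely, so a long quiet run is weak evidence that anything changed. This is a minimal sketch, not the paper's model; the Pareto tail index, the severity threshold, and the one-conflict-per-year rate are all illustrative assumptions.

```python
import random

random.seed(0)

def pareto_severity(alpha=1.5):
    # Heavy-tailed conflict severity; alpha < 2 means infinite variance,
    # the regime the long-peace debate is about.
    return random.paretovariate(alpha)

def longest_quiet_stretch(years=500, threshold=20.0):
    # Longest run of years with no conflict exceeding the "great war"
    # threshold, in one simulated 500-year history (one draw per year).
    longest = current = 0
    for _ in range(years):
        if pareto_severity() > threshold:
            current = 0
        else:
            current += 1
            longest = max(longest, current)
    return longest

# Fraction of simulated histories containing a 70+ year "long peace"
# even though the underlying process never changed.
runs = [longest_quiet_stretch() for _ in range(200)]
print(sum(r >= 70 for r in runs) / len(runs))
```

The point isn't the exact fraction (it depends on the assumed parameters), just that heavy tails make long gaps between extreme events unremarkable.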
Pinker has been called out for cherry-picking by numerous other authors, particularly Graeber and Wengrow, an academic anthropologist and archaeologist respectively. Another is Christopher Ryan. In both cases, well-reasoned counterarguments are posed against Pinker's reasoning.
However, it also has ameliorating effects that are barely touched on here. Vaccines are the most obvious example. And as the author mentioned, the near elimination of famines is largely the result of technology.
While I can absolutely see the potential for AI to precipitate a catastrophe, to me it has more in common with technologies that have prevented or ameliorated them.
The idea of AI as a novel self-replicator is cool and appears in movies and books, but doesn’t seem to exist outside of fiction. The other article referenced dreams of a 2030 AI with every capability one can imagine, which isn’t supported by any reasonable projection for AI technology. It might as well be a warning about all the dangerously weaponizable portable fusion reactors that could exist if ITER development is super successful. In this respect, AI seems like an unlikely driver of catastrophe, as defined, in the near term.
Assigning even a 5-10% increase in rates of calamity to this technology, with no evidence to support it, while discounting all other technologies (including nuclear weapons) on the grounds that there’s too little data, is not reasonable. The reality is, we don’t know what risk value to assign. We won’t know for some time.
Just leave out the AI bit from the otherwise reasonable looking statistical analysis, and you’ll be left with a more intellectually rigorous and useful work.
No / smaller catastrophes: do the math, cash the premiums, do the payouts.
Big catastrophe: insurer would go under, but as long as it doesn't happen they can still cash the insurance premiums & everything is a-okay.