1) His user numbers are off by at least an order of magnitude, as other comments have mentioned. Even a VM/VPS should handle more, and a modern bare-metal server will do far more than the quoted numbers.
2) Autoscaling is a solution to the self-inflicted problem of insanely high cloud prices, which cloud providers love because implementing it requires more reliance on proprietary, vendor-specific APIs. The actual solution is a handful of modern bare-metal servers in strategic locations, sized to cover your worst-case expected load while still costing less than a cloud deployment at your lowest expected load. Upside: lower prices & complexity. Downside: say goodbye to your AWS re:Invent invite.
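Back-of-envelope sketch of that cost argument. All prices and capacity figures below are made-up illustrative assumptions (not real quotes from any provider); the point is only that bare metal typically delivers far more requests per dollar:

```python
import math

# Assumed figures, purely for illustration:
BARE_METAL_MONTHLY = 200      # $/mo for one dedicated server
BARE_METAL_RPS = 5_000        # sustained requests/sec per such server
CLOUD_INSTANCE_MONTHLY = 150  # $/mo for one autoscaled cloud instance
CLOUD_INSTANCE_RPS = 500      # sustained requests/sec per instance

def bare_metal_cost(peak_rps: float) -> float:
    """Bare metal: provision for worst-case load and pay for it 24/7."""
    return math.ceil(peak_rps / BARE_METAL_RPS) * BARE_METAL_MONTHLY

def cloud_cost(avg_rps: float) -> float:
    """Autoscaling at its ideal: pay only for the *average* load."""
    return math.ceil(avg_rps / CLOUD_INSTANCE_RPS) * CLOUD_INSTANCE_MONTHLY

# Spiky traffic: average 2k req/s, peaks of 10k req/s.
print(bare_metal_cost(10_000))  # provisioned for peak: $400/mo
print(cloud_cost(2_000))        # autoscaled at the average: $600/mo
```

With these (assumed) numbers, over-provisioned bare metal at peak still undercuts a cloud that scales perfectly to the mean, because the per-request price gap dominates.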
3) Microservices. Apparently redeploying stateless app servers is a problem (despite the autoscaling he's fine with doing exactly this in response to load spikes), and his solution is to introduce 100x the management overhead and points of failure? The argument about scaling separate features differently doesn't make sense either: unless your code is literally so big it can't all fit on one server, there is no problem having every server able to serve all types of requests, and as a bonus you no longer have to predict the expected load on a per-feature basis. A monolith's individual features can still talk to separate databases just fine.
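A minimal sketch of that last point: one monolith process where each feature owns its own datastore, yet any server can serve any request. The feature names and the use of sqlite3 (standing in for real database drivers) are assumptions for illustration only:

```python
import sqlite3  # stands in for any per-feature database driver

# Each feature talks to its own database; one process serves all of them.
FEATURE_DBS = {
    "users":   sqlite3.connect(":memory:"),
    "billing": sqlite3.connect(":memory:"),
    "search":  sqlite3.connect(":memory:"),
}

def handle(feature: str, query: str):
    """Any app server handles any feature by picking the right connection."""
    return FEATURE_DBS[feature].execute(query).fetchall()

# No per-feature fleet, no network hop between features.
FEATURE_DBS["users"].execute("CREATE TABLE users (name TEXT)")
FEATURE_DBS["users"].execute("INSERT INTO users VALUES ('grug')")
print(handle("users", "SELECT name FROM users"))  # [('grug',)]
```

Scaling is then just running more identical copies of this process, with no need to guess per-feature load in advance.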
"grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too
seem very confusing to grug"
I've seen engineering orgs of 10-50 launch headlong into microservices with poor results. It's no exaggeration to say many places ended up with more repos & services than developers to manage them.
I ran a 10k user classic ASP service on a VPS from Fasthosts, with MySQL 5.6 and Redis, and it was awesome.
The author having this on his GitHub makes me even more suspicious: https://github.com/ashishps1/learn-ai-engineering
Twitter famously had a "fail whale" but it didn't stop the company from growing. If you have market demand (and I guess advertising) then you can get away with a sub-optimal product for a long time.
Agreed, but there's still an element of survivorship bias there. Plenty of companies failed because they couldn't keep up with their scaling requirements and pushed "getting away with a sub-optimal product" for too long.
Friendster might fit though: https://highscalability.com/friendster-lost-lead-because-of-...
If it’s just “sign up any time you want and go”, yes, it can go that way.
If it’s “join that waiting list” or “book a call” (for KYC purposes or whatever), you have a buffer.
If user count is more or less constant (most internal websites, for example), it’s probably not an issue.
And so on.
Modern hardware is fast; if you can't fit more than 100 users (not even 100 concurrent users) on a single $50/month server, you're doing something very, very wrong.
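Rough capacity arithmetic behind that claim. The figures are assumptions (a modest server sustaining 500 req/s, each active user issuing one request every 10 seconds), but even these conservative numbers land far above 100 users:

```python
# Assumed figures, purely illustrative:
SERVER_RPS = 500          # sustained requests/sec a cheap server can handle
SECONDS_PER_REQUEST = 10  # each active user makes one request every 10s

# Concurrent users the server can support under these assumptions.
concurrent_users = SERVER_RPS * SECONDS_PER_REQUEST
print(concurrent_users)  # 5000 -- two orders of magnitude above 100
```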
Even a repurposed 10-year-old Fairphone[1] can handle more than that.
[1]: https://far.computer
Thank you stranger.