True - but the datacenter burning down still hits you where it hurts if your entire Kubernetes cluster lives in that one datacenter.
Whereas if you make a point of spinning up each Linux box in a different datacenter (and perhaps even with a different provider), you're at least resilient against that.
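For illustration, a rough sketch of "one small box per region" using boto3 - the regions, AMI IDs, key pair name, and instance type are all placeholders, and in practice you'd probably reach for Terraform or similar rather than an ad-hoc script:

    # Sketch: launch one small Linux box in each of several AWS regions.
    # AMI IDs, key pair name, and instance size are placeholders, not real values.
    import boto3

    REGION_AMIS = {
        "us-east-1": "ami-xxxxxxxx",       # placeholder; AMI IDs differ per region
        "eu-west-1": "ami-yyyyyyyy",
        "ap-southeast-2": "ami-zzzzzzzz",
    }

    for region, ami in REGION_AMIS.items():
        ec2 = boto3.client("ec2", region_name=region)
        resp = ec2.run_instances(
            ImageId=ami,
            InstanceType="t3.small",
            MinCount=1,
            MaxCount=1,
            KeyName="my-keypair",  # placeholder key pair name
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "Name", "Value": f"app-{region}"}],
            }],
        )
        print(region, resp["Instances"][0]["InstanceId"])

Different providers would obviously need different APIs, but the shape is the same: a small, boring box in each location.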
Using advanced tooling doesn't remove the need for actual planning beyond "oh, I have backups" - you should be listing the risks and the recovery steps for various scenarios. That said, if you can handle "AWS deleted all my servers because I returned too many things via Amazon Prime", you can also handle any lesser problem.
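Even a crude list like this (entirely made-up scenarios and recovery targets, just to show the shape) beats "oh, I have backups":

    # Illustrative only: the shape of a simple risk/recovery list, not a real plan.
    RISKS = [
        {
            "scenario": "Datacenter fire",
            "blast_radius": "every box in that DC",
            "recovery": "re-provision in another region from backups; repoint DNS",
            "target_recovery_time": "4h",
        },
        {
            "scenario": "Cloud account terminated",
            "blast_radius": "everything at that provider",
            "recovery": "stand up at second provider from off-provider backups",
            "target_recovery_time": "1-2 days",
        },
        {
            "scenario": "Bad deploy corrupts data",
            "blast_radius": "application database",
            "recovery": "roll back deploy; restore last good DB snapshot",
            "target_recovery_time": "1h",
        },
    ]

    for r in RISKS:
        print(f"{r['scenario']}: recover in {r['target_recovery_time']} via {r['recovery']}")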
IME you'll hit it around $500,000 USD ARR, but again, situations vary. I've worked on projects where hours-long scheduled downtime at the weekend was acceptable. So easy!
Also, I assume Salesforce engineers aren't SSHing into each shard and running docker-compose manually on each one.
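If they were, it would look something like this (hypothetical hostnames and paths), which obviously stops scaling past a handful of shards:

    # What "by hand" looks like once you script it: SSH into each shard in turn
    # and restart docker-compose. Hostnames and paths are made up.
    import subprocess

    SHARDS = ["shard-001.example.com", "shard-002.example.com"]  # imagine thousands

    for host in SHARDS:
        subprocess.run(
            ["ssh", host, "cd /srv/app && docker-compose pull && docker-compose up -d"],
            check=True,
        )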
Obviously you need Goldilocks infrastructure - not too little, but not too much.
All I’m saying is it doesn’t go ‘one Linux box, one Linux box, one Linux box, reach Facebook size, build your own data center’.