How will client libs that do connection pooling handle this? If a client opens a pool of 50 connections, will they be identified as 50 different clients, and receive 50 different invalidation pushes? I guess the client libs will have to build a caching layer on top of the pool.
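FWIW, Redis 6's tracking has a redirect mode for roughly this: the pool designates one connection to receive every invalidation and points the rest at it with CLIENT TRACKING ... REDIRECT. A rough sketch of the idea with redis-py (hostname, key name, and the max_connections=1 trick are just for illustration):

    import redis  # assumes redis-py and a Redis 6+ server on localhost

    # Dedicated invalidation connection: capping the pool at one connection keeps
    # the CLIENT ID we read on the same connection that later subscribes.
    inv_pool = redis.ConnectionPool(host="localhost", port=6379, max_connections=1)
    inv = redis.Redis(connection_pool=inv_pool)
    inv_id = inv.client_id()
    pubsub = inv.pubsub(ignore_subscribe_messages=True)
    pubsub.subscribe("__redis__:invalidate")

    # A "worker" connection: redirect its invalidations to inv_id, so all pooled
    # connections can share one push stream instead of 50.
    worker = redis.Redis(host="localhost", port=6379)
    worker.execute_command("CLIENT", "TRACKING", "ON", "REDIRECT", inv_id)

    worker.get("some-key")  # the server now tracks this key for this client group
    # When another client writes "some-key", the invalidation shows up here:
    msg = pubsub.get_message(timeout=5)
    if msg and msg["type"] == "message":
        print("drop these keys from the local cache:", msg["data"])

The library still has to fan those invalidations out to whatever local cache sits above the pool, but at least it's one push stream rather than 50.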
Eh. Sorta. The small form factor of pocket computers belies their functionality, and makes most people treat them as if they are much simpler devices than they really are.
It doesn't help that there's no manual of operations included - when was the last time a device shipped with a manual explaining its settings and their ramifications?
Reasonable defaults, I guess?
The lack of a manual feels like a red herring - only a small fraction of users actually read the manual. And among those who do, they read it to find a specific thing (like wifi), not to understand the product as a whole.
Homogenization is not limited to software UI design. Consumer electronics and appliances, retail fashion etc. are globally homogenized. That's just how the world works.
Whoa, haven’t heard that name in 15 years. Remember downloading an ISO over slow DSL at an Internet cafe to (of course) do a live boot and rescue files from a botched HD.
Hardcoding "aws" in the core seems like an odd choice though. Wouldn't it be better to make it agnostic and provide some sort of trigger where an external script or util handles the backups? That is, why S3 and not any other service?
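Something like a post-BGSAVE hook run from cron would cover any destination. A minimal sketch with redis-py (the RDB path and the rclone remote are placeholders):

    import subprocess
    import time
    import redis  # assumes redis-py talking to a local Redis instance

    r = redis.Redis(host="localhost", port=6379)

    before = r.lastsave()          # timestamp of the last completed save
    r.bgsave()                     # ask Redis to write a fresh RDB in the background
    while r.lastsave() == before:  # poll until the background save finishes
        time.sleep(1)

    # Hand the snapshot to whatever tool you like: rclone, s3cmd, scp, restic...
    subprocess.run(
        ["rclone", "copy", "/var/lib/redis/dump.rdb", "remote:backups/"],
        check=True,
    )

Then S3 is just one of many possible targets instead of something baked into the core.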
Try inserting more than 30K elements into a single sorted-set key and watch the insertion time, memory & CPU usage. Now try doing this to millions of keys simultaneously.
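If you want to see it for yourself, a quick-and-dirty benchmark along these lines makes it easy to measure (key name and counts are arbitrary); now imagine it across millions of keys at once:

    import time
    import redis  # assumes redis-py and a local Redis instance

    r = redis.Redis(host="localhost", port=6379)
    r.delete("zset:bench")

    start = time.time()
    pipe = r.pipeline(transaction=False)
    for i in range(100_000):                     # well past the ~30K mark
        pipe.zadd("zset:bench", {f"member:{i}": i})
        if i % 1000 == 0:
            pipe.execute()                       # flush in batches of 1000
    pipe.execute()

    print("inserted 100k members in %.2fs" % (time.time() - start))
    print("memory: %s bytes" % r.memory_usage("zset:bench"))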