Hacker News | lessclue's comments

How will client libs that do connection pooling handle this? If a client opens a pool of 50 connections, will they be identified as 50 different clients, and receive 50 different pushes for invalidation? I guess the client libs will have to build a layer to balance caching on top of the pool.
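A rough sketch of the concern, in Python. This is not any real client library's API (`FakeRedisConn` and `Pool` are illustrative stand-ins): with naive per-connection tracking, a 50-connection pool means 50 distinct client IDs and 50 separate invalidation pushes for the same key.

```python
# Hypothetical sketch: N pooled connections, each tracked separately by the
# server. All names here are illustrative, not a real client library's API.

class FakeRedisConn:
    """Stands in for one TCP connection; the server assigns each a client ID."""
    _next_id = 0

    def __init__(self):
        FakeRedisConn._next_id += 1
        self.client_id = FakeRedisConn._next_id
        self.invalidations = []  # push messages this connection would receive


class Pool:
    def __init__(self, size):
        self.conns = [FakeRedisConn() for _ in range(size)]

    def broadcast_invalidation(self, key):
        # Without a shared tracking connection, every pooled connection
        # that cached `key` gets its own push message.
        for c in self.conns:
            c.invalidations.append(key)


pool = Pool(50)
pool.broadcast_invalidation("user:42")
# 50 distinct client IDs, and 50 copies of the same invalidation push.
```

For what it's worth, Redis 6's client-side caching has a `CLIENT TRACKING ... REDIRECT <client-id>` option that lets a client funnel invalidation messages to a single dedicated connection instead, which seems aimed at exactly this pooling problem.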


Just curious, why would you pool redis connections?


High throughput. Pooling is the default behaviour for many Redis clients too, isn't it? eg: https://github.com/gomodule/redigo


Veronica could’ve just turned Airdrop off and moved on with her life.


Eh. Sorta. The small form factor of pocket computers belies their functionality, and makes most people treat them as if they are much simpler devices than they really are. The fact that there isn't a manual of operations included with it doesn't help - when was the last time a manual on settings and their ramifications was included with the device? Reasonable defaults, I guess?


Isn’t the default “contacts only”?


The lack of a manual feels like a red herring: only a small fraction of users actually reads the manual. And those who do read it to find a specific thing (like wifi), not to understand the product as a whole.


Great insight. I used to really enjoy reading the manuals or instructions of things I would buy or receive as gifts.

Technical writing in print-form is a lost art today.


But what about having an angle to create some fake outrage...


Homogenization is not limited to software UI design. Consumer electronics and appliances, retail fashion etc. are globally homogenized. That's just how the world works.


You should really include some screenshots on the README of the UI. I'd give it a shot if I had a clear picture of what to expect.


Whoa, haven’t heard that name in 15 years. Remember downloading an ISO over slow DSL at an Internet cafe to (of course) do a live boot and rescue files from a botched HD.


This is a nice project.

Hardcoding "aws" in the core seems like an odd choice though. Wouldn't it be better to make it agnostic and provide some sort of trigger where an external script or util handles backups? That is, why S3 and not any other service?
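The "agnostic trigger" idea could look something like this minimal sketch (the names `BackupHook`, `backup_via_script`, and `run_backup` are hypothetical, not the project's actual API): the core hands the snapshot path to a user-supplied hook and stays ignorant of where the bytes end up.

```python
# Hypothetical sketch of a storage-agnostic backup trigger.
import subprocess
from typing import Callable

# A hook receives the path of the snapshot file and does whatever it wants.
BackupHook = Callable[[str], None]


def backup_via_script(script: str) -> BackupHook:
    """Adapt any external script/util into a hook: S3, rsync, scp, whatever."""
    def hook(snapshot_path: str) -> None:
        subprocess.run([script, snapshot_path], check=True)
    return hook


def run_backup(snapshot_path: str, hook: BackupHook) -> None:
    # The core stays storage-agnostic; the hook decides where bytes go.
    hook(snapshot_path)
```

Then shipping an S3 uploader is just one hook among many, rather than a dependency baked into the core.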


Redis was/is open source. The licensing thing was with Redis Labs' modules (plugins) for Redis.


Can you elaborate on that please?


Try inserting more than 30K elements into a key that is a sorted set and watch the insertion time, memory & CPU usage. Now try doing this to millions of keys simultaneously.


We run several million sorted sets. They are all short (100s of elements), but we do thousands of writes/sorts per second without issue.

From memory - there was a setting to turn on/off gzip compression for a list once it went beyond a certain size - do you have this enabled?
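From memory, the setting in question is for lists (quicklist node compression, LZF rather than gzip), and sorted sets have their own encoding thresholds; verify the exact names against your Redis version's redis.conf:

```
# Lists: compress quicklist nodes this many hops in from each end (0 = off)
list-compress-depth 0

# Sorted sets: switch from the compact ziplist/listpack encoding to a
# skiplist once either threshold is exceeded
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
```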


Huh, what? Never heard of this DB replication "PC" nonsense before.


80% is thinking and figuring things out, 20% is writing code; always.


This.

Solve your problem first, then write code. Problems "solved" by writing code almost always end up as crappy, messy, difficult-to-maintain code.

The coding should be the easy bit; it's just implementing your solution in a language you know. It's basically just typing.


It gets lower and lower the more senior you get because you add an entire "obtain requirements and develop consensus" step.

