Hacker News

But the workload matters. Even the comment in the article doesn't entirely hold up on that point -- if your workload is 50 operations per byte transferred versus 5000 operations per byte transferred, the hardware requirements differ considerably.
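A back-of-envelope sketch of that difference (all costs here are illustrative assumptions, not measurements): hold the network rate fixed and vary only the compute density per byte.

```python
# Sketch: same network throughput, very different CPU needs.
# OPS_PER_CORE and the link speed are assumed numbers for illustration.

def cores_needed(bytes_per_s, ops_per_byte, ops_per_core_per_s):
    """Cores required to sustain a byte rate at a given ops-per-byte density."""
    total_ops_per_s = bytes_per_s * ops_per_byte
    return total_ops_per_s / ops_per_core_per_s

LINK_BYTES_PER_S = 10 * 1_000_000_000 / 8   # a saturated 10 Gbit/s link
OPS_PER_CORE = 3_000_000_000                # assume ~3e9 simple ops/s per core

light = cores_needed(LINK_BYTES_PER_S, 50, OPS_PER_CORE)
heavy = cores_needed(LINK_BYTES_PER_S, 5000, OPS_PER_CORE)

print(f"50 ops/byte:   ~{light:.0f} cores")
print(f"5000 ops/byte: ~{heavy:.0f} cores")
```

Under these made-up constants, the 5000 ops/byte workload needs 100x the cores to saturate the same link -- which is the whole point: "saturates the network" says nothing about CPU until you know the workload.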


Exactly. "A properly-configured Kafka cluster" implies you have properly configured your clients too, which is almost never the case, because that's very hard to do in the messy reality of a large-scale organization.

Even if you somehow get everyone to follow best practices, you most likely still won't saturate the network on "minimal hardware". The number of client connections and requests per second will likely saturate your "minimal CPU" first.

It's true that minimal hardware on Kafka can saturate the network, but that mostly happens when there are only a handful of clients. In practice, orgs pushing serious data have serious client counts.
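A rough sketch of why client count, not byte rate, eats the CPU (the per-request cost, linger interval, and core count below are all assumed for illustration): with a fixed flush interval per producer, requests per second scale with the number of clients, so fixed per-request broker overhead grows with fleet size even if total bytes don't.

```python
# Sketch with assumed costs: more clients -> smaller batches -> more
# requests/s -> more broker CPU burned on pure request overhead.

def request_overhead_cpu(clients, linger_s=0.01, per_request_us=50, cores=8):
    """Fraction of broker CPU spent on request handling alone.

    Assumes each client flushes one batch every `linger_s` seconds and each
    request costs `per_request_us` microseconds of broker CPU (made-up cost).
    """
    requests_per_s = clients / linger_s
    busy_s_per_s = requests_per_s * per_request_us / 1_000_000
    return busy_s_per_s / cores

print(f"10 fat clients:      {request_overhead_cpu(10):.1%} of CPU")
print(f"20,000 small clients: {request_overhead_cpu(20_000):.1%} of CPU")
```

With these numbers a handful of clients barely registers, while a 20k-client fleet exceeds 100% of the assumed cores on request overhead alone, before a single byte of payload work is counted.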



