Kubernetes Documentary: A Critical Review (cloudcritical.org)
117 points by riccardomc on Feb 21, 2022 | 122 comments


I'm not sure this was authored in full good faith. Dismissing Kubernetes' attempt to "abstract away the servers" as pointless because we already had virtualisation is at best ignorance of the differences, and at worst, gaslighting.

It's clear that the author doesn't like Kubernetes, and to be honest this documentary is unlikely to change their mind because they are not the target audience.

All the "criticism" about Kubernetes being a Google strategy could well be valid, but it's not exactly new, unique, or particularly impactful here.

(Disclaimer, I work at Google, nothing to do with cloud/k8s)


What does gaslighting mean in this context? I thought it meant making the "victim" question their own sanity/perception of reality?


Outside of gaslighting's original context in a relationship, I often see it used to refer to a form of strawman argument. I read the gaslighting comment as saying that the person knows that virtualization in the form before Kubernetes was not sufficient, but tries to make people who express that thought feel like they're wrong at a fundamental level. So the implication is that the author knowingly constructs a strawman argument and then tries to make it sound like anyone who recognizes it for what it is, is "crazy".


Gaslighting has almost entirely lost its original meaning because people use it for anything now. Drives me crazy.


Same with "toxic" which now just means "any speech I don't like"


Calling something toxic, problematic, or gaslighting is a great way to silence further discussion


“Labels like that are probably the biggest external clue. If a statement is false, that's the worst thing you can say about it. You don't need to say that it's heretical. And if it isn't false, it shouldn't be suppressed. So when you see statements being attacked as x-ist or y-ic (substitute your current values of x and y), whether in 1630 or 2030, that's a sure sign that something is wrong. When you hear such labels being used, ask why.”

Paul Graham - What You Can’t Say

http://www.paulgraham.com/say.html


Toxic or problematic, perhaps, but gaslighting is a specific term that means more than "bad" (or toxic), and I intended it in that very specific way. I think it's a useful term because of this specificity, and because it helps to explain the very specific feeling of confusion that it instils in those it is used against.


In this case it's essentially claiming that Kubernetes is the same as virtualisation or has no other benefits. It's (arguably) gaslighting because it doesn't make clear that it is making that claim, it instead implies it as fact in the way it talks about virtualisation.


Before I learnt what it really means I always imagined it as a blowtorch heating things up, which sort of fits the original but obviously has a much wider scope. So maybe GP has the same idea I had.


Hi danpalmer, I kind of hit on this in the article. How do you think abstracting away the server adds any value in the public cloud, where you can get bespoke VMs with no concern for the underlying hardware? Can you elaborate?


There are several layers of abstraction here, all bringing benefits.

Layer 1: bare metal servers, bare metal routers, hardware appliances, etc.

Layer 2: virtual servers, VLANs/security groups/etc, some appliances.

Layer 3: containers, container networks, storage resources, etc. Kubernetes, arguably Heroku.

Layer 4: functions as a service?

In the article the suggestion, or how I read it, is essentially "Kubernetes is nothing new, we've had [layer 2] for years". However I see Kubernetes as sitting squarely in "layer 3" here.

These layers are pretty subjective, and all of the boundaries are blurred, but in my work they do require fairly different skill sets. I've never had the skillset to work at "1", I was fairly good at "2" for a while, but spent too much of my time working on that and too little shipping software, so my workplace moved to Kubernetes and it allowed me as an application engineer to do much less infra work, and for it to be at "3" when necessary.

To just write off everything in "3" as being "virtualisation" is to ignore the significant step up in level of abstraction and the benefits brought by that.


Process boundaries do imply a kind of virtualization, OP is not wrong there. What containers add as a feature though is comprehensive namespacing for the resources that the OS manages on behalf of "virtualized" processes.


But the initial question was how it differs from public cloud. This is not a difference. You can define your kubernetes or your terraform and have whichever brand of logical isolation you prefer


I can tell you what it does for me: Full disk on your VM? Nope. Storage is abstracted away. Your server will not fill up anymore. At worst one service might break, and it might heal itself.

Is your VM/node broken? It will heal itself because you throw it away and the new VM/node is fixed.

It enforces the separation between service and VM. You will not install normal software on that VM just because you can. You don't need to give access to a VM to a developer who then needs root access, has dependencies, and doesn't update the VM.

You no longer have dependencies on your VM because you can't have dependencies on your VM's OS.

Abstracting things away from the VM also streamlines logging. You no longer need to collect logfiles from each VM, because you collect them per service, and you only set that up once (if even that: log to stdout and be done with it).
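
For what it's worth, here is a minimal sketch of how that storage abstraction is expressed (the name and size are hypothetical); you just claim capacity, and whatever storage class backs the cluster decides where it actually lives:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data              # hypothetical claim name
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi           # capacity requested; the backing storage class provides it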


None of this is done by k8s. Network storage, hardware virtualization, network virtualization? They all existed before.


None of this is true.


I'm describing real-life issues I have and have had.

Feel free to actually write more than 'None of this is true.' so that a discussion is actually possible.

Tx :)


I run k8s on bare metal, and I can say a full disk is certainly possible if you have a service logging a few MB/s. Things will break in fun and interesting ways, data will get irrecoverably corrupted, etc. Your entire cluster will probably even break if said node was the etcd leader. This is pretty easy to reproduce by simply saturating the network and then watching the etcd leader spill its guts into your logs once the network buffers fill up.

> You no longer have dependencies on your VM because you can't have dependencies on your VM's OS.

Your containers rely on the OS's kernel and whatever features it was compiled with.

> You will not install normal software on that VM just because you can

If you're paying through the nose for managed k8s, this is true. If not, you'll eventually need to login to a node and diagnose some issue, which means installing things on the node.

> You no longer need to collect logfiles from each VM, because you collect them per service

Whatever you installed to collect logfiles is getting them from the VM's disk (in /var/log/pods in k3s), unless your container is redirecting them somewhere that isn't stdout.


> If you're paying through the nose for managed k8s, this is true. If not, you'll eventually need to login to a node and diagnose some issue, which means installing things on the node.

Managed Kubernetes on Amazon (EKS) is quite inexpensive: $0.10/hr * 24 hrs/day * 30 days/month = $72/month. Other costs are VMs, networking, and storage, which you would have allocated anyway. There are some downsides like forced upgrades, but cost is not one of them for our use cases.

We incidentally don't ever login to Kubernetes nodes using tools like ssh. It's asking for security trouble to have those ports open.


It’s all true though.


With Kubernetes you don't have to configure log exfiltration, process management, SSH, host metrics, etc. You don't have to touch Ansible--there's no host management at all.

The stuff that you still have to configure (e.g., firewalls, NFS) is all configured through a consistent, declarative interface (Kubernetes manifests) rather than a dozen bespoke, byzantine formats or imperative commands.
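
To make that concrete, here is a minimal sketch of that declarative interface (the name and image are hypothetical); the same YAML shape covers Services, NetworkPolicies, volumes, and so on:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-api                 # hypothetical service name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: example-api
      template:
        metadata:
          labels:
            app: example-api
        spec:
          containers:
          - name: api
            image: registry.example.com/example-api:1.0   # hypothetical image
            ports:
            - containerPort: 8080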


Kubernetes is not quite that easy though. Out of the box, you get basically no isolation between anything, and you still have to deal with security contexts and have processes in place for keeping your container images secure. If you use community Helm charts your services may end up running with essentially random privileges that may easily conflict.

The declarative interface is going in the right direction (as far as YAML can be), but configuration management for it is still unsolved. Backups are also often forgotten; they're very easy with virtual machines.


I suspect you may be confusing "cloud provider Kubernetes" (the topic at hand) with running your own Kubernetes on bare metal. The bare metal Kubernetes story still has a long way to go, but we're talking about public cloud providers.

> Out of the box, you get basically no isolation between anything

I'm pretty sure AWS Fargate and GCP's gVisor solve (or attempt to solve) isolation. Not sure about other cloud providers.

> you still have to deal with security contexts and have processes in place for keeping your container images secure

How do VMs help secure software artifacts beyond the security practices in the container ecosystem? And I would argue that "dealing with security contexts" is strictly better in Kubernetes than the equivalent in VMs if only because of the unified interface (Kubernetes manifests).

> If you use community Helm charts your services may end up running with essentially random privileges that may easily conflict.

You can run into the same issue with Ansible scripts on VMs. This isn't a Kubernetes specific issue--ultimately, all system administrators need to take care to run secure software on their systems. Neither Kubernetes nor VMs offer a silver bullet here.

> configuration management for it is still unsolved

If "configuration management" refers to configuration of the hosts, then yes, public cloud provider Kubernetes offerings solve for this--you don't have to manage the host configuration at all (unless you want to opt into it).

> Backups are also often forgotten; they're very easy with virtual machines.

The etcd backups are managed by the cloud providers, as are backups for mounted volumes. Not sure what backups you're thinking about.


They are as easy on k8s as they are on VMs.

Or at least they can be:

If you use a VM on AWS, you also need to know that you need to configure a VM snapshot (very easy, totally agreeing with you here).

But you can also use managed k8s from AWS, which you can also back up, since everything is on PVs and they have snapshot features.

I don't want to compare a VM + snapshotting 1:1 with Kubernetes though. It wouldn't be fair to k8s and it wouldn't be fair to all the use cases which work very, very well on one VM.
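
As a rough sketch of what a PV snapshot looks like on a managed cluster (the names are hypothetical, and this assumes a CSI driver with snapshot support is installed):

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: app-data-backup                    # hypothetical snapshot name
    spec:
      volumeSnapshotClassName: csi-snapclass   # hypothetical snapshot class
      source:
        persistentVolumeClaimName: app-data    # hypothetical PVC to snapshot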


It's not just "abstracting the server". Kubernetes abstracts more than just "a server"; it works at a level higher. It does this for storage, networking, compute, services, workloads, scaling, ... and all of this is done through a standardised API. This API forces you to standardise application deployments, making centrally managed logging, monitoring, tracing, ... a breeze. Once you have it working for one application, it'll work for all of them.

And do you need to run this on Amazon, Google Cloud, Azure, one of the smaller cloud providers like DigitalOcean, or locally on Kind/k3s? It'll require very little work to get things working on any of these - if any. Cloud specific services and persistent storage will be the main issues, but that's something you can't really get around.

Now is it perfect? Absolutely not, as with any tech, there will always be problems and bottlenecks. But it allows development to scale, not just the workloads, and the skills required are transferable, which makes it a much easier sell.


I completely agree with everything you said, except:

> Cloud specific services and persistent storage will be the main issues, but that's something you can't really get around.

That isn't wrong necessarily, but products like OpenShift Container Storage (OpenShift Data Foundations now actually) can provide a common API to erase that problem. ODF uses Ceph under the hood so you can get block, file, and (s3 compatible) object storage no matter where you are.

Cloud specific services are indeed a problem, but many of them have open source/portable alternatives that you can choose that run everywhere, such as Fission, RabbitMQ, Kafka (not my favorite), Argo CD, etc. Really, the things I run into most now are things like AWS machine learning services.


You don’t really have no concern for the underlying hardware. You pick and choose the CPU/GPU horsepower, memory, storage type and storage class, network transfer speed, and many other things.

It’s at the point of needing to scale horizontally where I begin to disagree with your premise. This is where you’ll typically get into proprietary and/or ugly offerings.


>You pick and choose the CPU/GPU horsepower, memory, storage type and storage class,

You still do with kubernetes

> network transfer speed, and many other things.

No, you don't. There's no slider for "network performance" on GCP, Azure or AWS.


>No, you don't. There's no slider for "network performance" on GCP, Azure or AWS.

Who said there was a slider? Network performance is one of the key filters on every major cloud provider. Not every instance type has the same network performance.

>You still do with kubernetes

Sort of. It can be way more abstracted away. What's the minimum I need to run a node? Okay great, now run as many nodes as needed, when needed. Setting that kind of thing up with bare metal on AWS, for example, would require getting into some proprietary offerings and/or absurd complexity.

I actually had to go and look up what our default instance type was for our cluster. It's a rather useless fact, since it doesn't much matter compared to the number of active nodes. That's not true at all if you're directly managing VMs.

I'd absolutely never trade the complexity of kubernetes for the complexity of self-managing a horizontally scalable bare metal VM implementation. However, some people (obviously) disagree with that. To each their own.


Network performance is tied to the instance type on GCP, AWS and presumably Azure.


But there’s no slider. Typically you slide the instance size for other reasons.

Here’s the thing: either you care about it (and you can game the sizing of the instance) or you don’t and you run Kubernetes.

But if you don’t care, then it doesn’t matter. There’s no slider for you to care about. It is not extra overhead.


why is this slider important? You’ve picked something very arbitrary here


Uh. Because the parent said that kubernetes and VMs are different because "with VMs you have to configure things [..] like networking performance".

But you configure the exact same things with VMs as with Kubernetes.

Network performance (as per OP) is not configurable on either.

You just accept whatever accidental default you happen to have, it's not a conscious decision people are making, and it's an awkward assumption to say that you have to think about it.

Because if you do have to think about it, that doesn't go away with Kubernetes anyway; if anything it probably gets worse.


Uh, no. If network performance is an issue then you configure some labels (slow, medium, mega-fast). You then launch your services on “hardware with network:slow” labels, and new instances will be brought up based on the labels they need.

Even better: you define a “bandwidth” resource and each service requests a slice of it. Kubernetes takes care of the bin packing. If you care that much about it, you can then enforce it in a number of different, flexible ways depending on your infrastructure or requirements. At the end of the day it’s no different to CPU, memory or GPU requests.
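
Something like this, as a rough sketch (the label key, tier value, and bandwidth resource name are hypothetical, and the extended resource would have to be advertised on the nodes):

    apiVersion: v1
    kind: Pod
    metadata:
      name: network-hungry-app               # hypothetical name
    spec:
      nodeSelector:
        network-tier: mega-fast              # hypothetical label applied to fast nodes
      containers:
      - name: app
        image: registry.example.com/app:1.0  # hypothetical image
        resources:
          requests:
            example.com/bandwidth: "500"     # hypothetical user-defined resource
          limits:
            example.com/bandwidth: "500"     # extended resources need requests == limits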


Leaving aside the fact that I don’t believe anyone does this.

You just maybe proved the point that I originally asserted: the things you configure on kubernetes are the same things you configure on cloud VMs.


> Leaving aside the fact that I don’t believe anyone does this.

Everyone who uses GPUs with Kubernetes does exactly this. GPUs are not a native thing to Kubernetes.

> You just maybe proved the point that I originally asserted: the things you configure on kubernetes are the same things you configure on cloud VMs

You are of course entirely missing the point, and I’m not sure if you’re doing it on purpose or not.

You have 100 units of work that you need to run. A unit of work is some “thing” that needs a certain number of CPU cores, memory, GPUs and other user-defined resources. Each unit of work also needs an individual identity, distinct from other units of work.

Go and code something to run that workload on the minimum number of cloud VMs, taking into account cost and your own user-defined scaling policies, minimizing the amount of unused resources. Now make it handle adapting to changes in the quantity and definitions of those units of work. Now make it handle over-committing, allowing units of work to have hard and soft limits that depend on the utilization of the underlying hardware. Now make it provision some form of secure identity per unit of work.

After you’ve spent time coding that, you’ll realize that:

1. It’s hard

2. You’ve re-invented part of Kubernetes

3. Your implementation is shit

4. It’s very much not “the same things you can configure on cloud VMs”


Except for two things:

1. Kubernetes manifests require "requests" to be specified (mem/CPU allocation).

2. Getting 100 identical VMs is not difficult in the cloud.

The point I'm making is that you've already abstracted a lot of the things away with Cloud, and we abstract the same exact things even more on top of kubernetes.

If K8S was running on bare metal I'd agree with you though.


If you can’t understand why it’s more expensive and less efficient to run 1 unit of work per server on 100 servers as opposed to fitting them into 20 larger servers then I’m not sure what to say.


I'm not sure why we're shifting around so much, I never claimed "efficiency" and especially not "efficiency of micro workloads".

This whole thread is discussing the "mental overhead" of managing VMs vs Kubernetes.

If you have to define the "size" of your workload, it hardly matters whether it's a VM or k8s. You need to define the size.

Kubernetes can be more fine-grained (I want 1/4th of a CPU!) but you still define it.

I'm not talking about cost, or really anything else, only about the claim I originally responded to: "hurr durr, but with VMs I have to configure everything!" That is the same on Kubernetes.

VMs are already a pretty good abstraction if you're looking at carving up compute resources. My "frustration" if I even have one is that we are doing both, one on top of the other. Which feels extremely wasteful.

But like everything it depends on your workloads, and I'm used to having things that consume entire CPU cores, not 1/4th of one. (I'm also not used to making web services these days, and kubernetes is optimised primarily for that kind of stateless workload)


I'm also used to having workloads that consume entire CPU cores, and as such I'd like the number of CPU cores dedicated to log aggregation, system monitoring, metrics etc to be as reduced as possible. I'd also like to not spin up a bunch of new VMs to do a rollout, and I'd also like to run all those small satellite workloads that always appear on the same platform. Oh, and I'd not like to have to run something that needs 3gb of memory and 3 cores on a machine with 4gb of memory and 4 cores because I'm constrained by AWS instance sizes.

Mixed workloads on fewer, larger machines are great for this.

With VMs you do need to configure everything, compared to a baseline stripped down AMI/image that runs nothing but docker and a Kubernetes daemon.

Yes, you can emulate Kubernetes with a bunch of custom tooling. No, it's not better. Yes, it is harder.


There might not be on GCP, but there are on other providers (alibaba cloud comes to mind)


The principled way of "abstracting away the server" is in fact namespacing of all OS-managed resources ala containers. This opens up possibilities like automated process checkpointing and migration, or even seamless vertical scaling to a multi-node cluster (as opposed to a single server node) via a SSI (single system image) environment.


Horizontal scaling of systems is not an area where Kubernetes or containerization has an advantage over the cloud. Resource utilization on those horizontally scaled workers can be, however.


Wouldn't your example be horizontal scaling?


I think part of the confusion is that server can refer to both a physical box and a VM. Abstracting away servers is about not having to think about VMs. You have a service which requires some amount of compute, memory, and storage and you just want that service to run. There is value to not having to worry about provisioning a VM or administering it.


But you still have to do that in Kubernetes, unless you are running Fargate. Someone has to provision and maintain that machine, and in the process introduce a ton of administrative overhead.


That's true, but not a particularly interesting fact.

For example, if you use GKE (Google Cloud's K8s offering), you attach your K8s cluster to an auto-scaling node pool and it handles (de)provisioning of your VMs for you. You essentially don't care about the VMs, there's essentially no overhead.

If you are in a private cloud, this also creates a good "API boundary" between the team responsible for running hardware, and the team responsible for shipping software to run on that hardware. On the former side you can essentially just adopt a machine into the cluster and leave it, and on the latter side K8s lets you programmatically reference resources, but you don't need to know how/where they came from.


I would say it is an interesting fact, since it's this "good 'API boundary'" that, as you said, enables one to separate concerns, be it between different teams in an organisation or between a service provider and its users.

Yes, you don't need Kubernetes to come up with your own implicit or explicit API boundaries, and these might not be needed for smaller projects. I agree that Kubernetes is often used where it's not strictly needed.

There are things that strict abstraction, and with it separation of concerns, provides. The crucial point is that certain things are enforced.


Sorry, I meant not an interesting fact in public clouds where they take care of all the VMs for you, even without Fargate.

You're right though, the abstraction layer is very interesting for _enabling_ public clouds, and for private cloud team ownership.


It's not the same. You can easily run node pools automatically because your abstraction layer is k8s with containerd or Docker.

You also know that you can throw away VMs because they don't contain any state. You are not losing data just because you kill a VM or a VM breaks.

It is way easier to just spin up n nodes and provision them all equally than whatever you did before.

In my team, we can manage way, way more nodes than we ever could before. We spin up 100 VMs and destroy them on a regular basis, automatically.

Gardener for example supports autoscaling on bare metal. The whole ecosystem is providing tons of great options.


I think there is a healthy dose of anti-vm bias in your viewpoints.

Lauding Kubernetes because ops work took too much of your time is just shifting your burden elsewhere, even if that means paying a bit more for an offering like ECS Fargate.

Any environment with configuration management can treat instances as ephemeral. It’s a best practice.

I view docker more as a package manager, no more dependency hell.

In any event K8s is sprawling, it will soon be too complex for its own good. Assuming it’s not already.


I only answered the question of why it is different with Kubernetes.

I don't have anything against VMs. Feel free to click yourself a VM on any cloud provider and use it however you like.

K8s abstracts VMs away, and I have had real issues maintaining VMs: Docker filling up the node with logs, being unable to upgrade the base OS due to Python dependencies, managing the same VM stack through Ansible and everything Ansible or Chef brings to the table.

There are plenty of self-healing mechanisms in place which do solve unfortunate issues. Memory? The service restarted, was offline for 3 minutes and is now working again. Node disk full? Pods get scheduled away, a new node comes up, done. Update/upgrade of nodes? The node pool does it for me.

For me, k8s has two real issues: memory (swap support is finally WIP!) and stateful workloads like databases. But the concept of an operator shows a bright future.

k8s also does one thing very nicely: it enforces certain aspects which are a pain in the ass later. That VM which wasn't updated for years and ran just fine? Now there is an issue and it needs to be fixed ASAP. But now the Debian repositories are no longer available. I have to fix the apt sources list first, then I need to fix dependencies, and then I need to restart it.


That's a PaaS, and those existed long before k8s.


A good mental model is to think of K8s as a portable PaaS with a more well-defined API. That's a good thing, not a criticism of K8s.


Slightly biased?

This is the way they present the extract of the dotScale conference in this K8s documentary:

https://youtu.be/BE77h7dmoQU?t=185

And the source media:

https://youtu.be/3N3n9FzebAA?t=30

They've altered the quality of the audio and video to make it look old. They even cropped it to 4:3! This is preposterous.


I mean, the title of the website is "Cloud Critical - Your service mesh is garbage".

And the author says

>No critical conversations about Kubernetes are taking place, aside from outspoken critics on HackerNews or Slashdot, but those are few and far between.

The author's agenda is about highlighting that "Kubernetes is not a good solution for most if not all cloud deployments".

I'd say there's quite a decent chance that bias is involved. At least a chance bigger than Kubernetes not being a good solution for any cloud deployment.

I don't think anyone is arguing that Kubernetes is the best solution for all cloud deployments. From my POV it might not even be the best solution for most cloud deployments. But I think there's bias involved if one is even considering the possibility that Kubernetes is always a bad solution.

Regarding modifying the source material and cropping it to 4:3, that's something I find odd as well, and I personally don't find it fitting.


I actually believe that the k8s API could be the best abstraction for services we have ever had.

While I'm running k3s at home (which is very nice, to be honest) and big instances at work, I would prefer to have more managed k8s offerings, but many already exist: from DigitalOcean, Google, Azure, AWS, and it's probably much easier for smaller service providers to make a managed solution available. You can also use Rancher or Gardener to create and manage k8s clusters 'raw' by yourself.


I'm with you.

Even for tiny personal deployments, I find it a very compelling experience (I'm running micro-k8s at home).

With just a little bit of setup (configuring NAS storage, handing MetalLB an ip range, and installing cert-manager) I get a setup that is robust to any single machine failure (I run a 3 node system on old desktops/laptops), handles a bunch of previously manual tasks (backups, cert updates, auto-restarts) and gives me a wonderful host of tools I now use for personal projects, such as

- CI/CD (DroneCi)

- Private docker registry (Registry:2)

- Dashboard for service availability (kubernetesui/dashboard)

- A TON of personal hosted projects and tools (from bookstack to jellyfin to whatever project I'm working on now)

And for the most part - I just don't have to think about it all that much. Sure, the initial setup took a few hours. But I've torn down and rebuilt the entire thing from scratch in less than an hour from backups, and my README for "starting with nothing but a blank disk in an old machine to fully functional" is about 2 pages long, and the vast majority of it is local router config and DNS entries (still manual).

I'm easily replacing hundreds of dollars a month in SaaS/Cloud charges with it, and it's still just not taking up all that much of my time.
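
In case anyone is curious what "handing MetalLB an ip range" amounts to, it's roughly this (the address range is hypothetical; newer MetalLB releases configure this via CRDs instead of a ConfigMap):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.1.240-192.168.1.250   # hypothetical range on the home LAN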


> No critical conversations about Kubernetes are taking place

And this is spot on.


The one thing I did not appreciate with the documentary was how "historic" footage was edited to be 4:3 aspect ratio with artificial grain. This made it look like these segments were filmed in the 90's rather than 2013. I can understand how this can emphasise the difference between footage shot now and 9 years ago, but it made the documentary feel inauthentic somehow.


I find ‘heterodox blogger’ to be one of the more annoying business personas.

When I had to work at IBM in the early 2000s because of an acquisition, we met a lot of ‘corporate edgelords’ whose personal brand was built on eloquently explaining to IBM audiences how every cloud computing innovation coming out of West Coast tech companies already existed on the mainframe since the 1970s, and was therefore stupid to invest in. Their big closer was usually some version of how ‘Silicon Valley is so far behind IBM they think they’re ahead’.

These were the same guys who ran Lotus Domino in their home lab for fun because gmail was stupid.

Very tedious.


Sorry, I am not sure whether heterodox is a good thing or a bad thing here. Maybe you could qualify that for me.

Well Gmail is kinda stupid. I mean they are reading your emails.


They really aren’t.


https://policies.google.com/privacy#infocollect

"We also collect the content you create, upload, or receive from others when using our services. This includes things like email you write and receive, photos and videos you save, docs and spreadsheets you create, and comments you make on YouTube videos." Their words not mine.


They have to in order to store it. Every third-party email service collects the content you create, upload, or receive. Even, under this definition, the "private" encrypted ones.


> especially [a code base] as shit as Kubernetes

I completely agree that Kubernetes is best understood through the lens of Google's corporate interests. But 1. resorting to language like "shit" makes me question the writer's credentials and reliability and 2. the code base may not be exemplary but its quality is far from belonging to the poorer end of the known spectrum.


Given that contributors have been referring to the kubernetes codebase as "the clusterfuck" in talks I think it's fine to say "shit".

https://archive.fosdem.org/2019/schedule/event/kubernetesclu...

It's a great talk to figure out how the kubernetes codebase works


This was 3 years ago and it even provides a solution in that talk already. Have you rechecked your assumption or are you only arguing from a talk from 2019?

Independent of this, the abstraction layer (the k8s API, kinds, etc.) is independent of the code base, btw. You can easily refactor and fix all issues in k8s. The abstraction is already there.


Hmm yes, but when Kris Nova says it it's different


Why? Our field is plagued by Authority Bias and this is one fine example.

“You should, in science, believe logic and arguments, carefully drawn, and not authorities.” [1]

[1] https://twitter.com/anammostarac/status/1495594139865731074


But that's exactly my point, Kris carefully draws her arguments. Which is what gives her her authority, not the other way around.


Thanks for the response. I knew of this talk, and had delved into the code base years ago. I did not want to deal with that specific issue in the article because it diverted from the main discussion. My assumption was that this was a known aspect of the code base, from that talk to the security review Kubernetes received. Obviously it is not, so when I roll out an update to the webpage I will link to that FOSDEM talk.


Even then there's arguably a stark contrast between

>The clusterfuck hidden in the Kubernetes code base and the brilliant refactoring techniques developed to fix it

and saying

>[...] forking a large code base, especially one as shit as Kubernetes, and still getting community support can be daunting without some sort of titanic change in the market.

especially since the former talks about techniques that fixed the "clusterfuck".

I know you didn't bring that talk into the discussion; however, you didn't cite any sources at all. From my POV, simply stating that the Kubernetes code base is "shit" without expanding on it comes off as anything but neutral and fair.

If you want to sound reasonable, I would either make clear that that sentiment is your personal opinion and/or comes from your experience, or at least cite some sources.


I think it's fair to call it a clusterfuck with skin in the game.


Yeah, I think I understand what you mean. The context is different when somebody says it from the perspective of a contributor, rather than as a sideline comment without an hour-long talk of context to contextualize the remark.


Even then the talk is about a clusterfuck _within_ the code base and how it got fixed.

It clearly doesn't label the current state of the code base as a clusterfuck in general.


Agree with this. I've had to delve into the guts of Kubernetes and it's not perfect for sure but far better than the crap we deploy onto it :)


"Shit" is so commonly used in developer/tech circles that I'm more surprised someone would zero in and take issue with it. I would not question their credentials and reliability over a word.


> I would not question their credentials and reliability over a word

In more or less informal discussions, I agree. From my POV it's not about a singular word though, it's about the context: The author labelled the (entire) code base of Kubernetes as "shit":

>[...] forking a large code base, especially one as shit as Kubernetes, and still getting community support can be daunting without some sort of titanic change in the market.

It's not that it's not "nice" language, it's about the statement. It could very well be that the code base of Kubernetes is in a bad state, but with this wording and without any references, it doesn't come off as neutral or particularly reasonable.


On one side there are the Kubernetes corporate enthusiasts that are probably doing exactly what this post is saying: trying to remove the advantage that AWS has over all the other vendors. On the other, we are also plagued by AWS shills/fanbase that hate Kubernetes exactly because of that (they are trying to push you toward serverless with Lambda). In the middle ground there is probably the truth: AWS has had a programmatic API that enabled automation for years, and Kubernetes *is* hip, it adds even more automation to the game, and it helps attract/retain talent in the current hot market. But technically you could totally have a working startup, mid-size or big company without using Kubernetes.


> they are trying to push you toward serverless with Lambda

With bare metal code written in modern memory-safe languages, serverless/FaaS could easily become a cross-cloud abstraction much like k8s. AIUI, there are already some experiments along these lines. It turns out that the main cloud services differ slightly in how they account for resource utilization, etc. in serverless deployments, but not in a way that would make a cross-cloud abstraction useless.


I don’t think k8s is the draw that you say it is. Anecdotally my wife and I heavily penalize jobs that have migrated from serverless to k8s. My current work uses kubernetes but it’s a step in the right direction from where they were before.


I think the format of this review is rather strange, a little hard to get into. Coming from the humanities, when I think "critical review" I think of a holistic and encompassing argument about the foundations of the subject, one that does not predetermine whether the writer is going to critique the subject at hand or agree with it. Examples and textual analysis are used carefully in order to speak to the grander mechanisms and assumptions at play.

This feels more like MST3K than any kind of critical review of the subject. The author is passionate and has a lot to say, thats for sure, but I see little substance here beyond definite wit.


I think they put the word Critical in there because it is the name of the website.


Ah, my bad, I did not get that; I guess this is merely an issue of semantic overlap then. Either way, I get that I was being pedantic.


Even with a CS background, I would expect a blog post that is supposed to be a "A Critical Review" to at least try to be fair and reasonable.

It doesn't have to be neutral in my opinion. But it should clearly separate personal judgement or opinion from facts and the overall rationale.

In the "About me" section the author says:

>Originally I started this site attacking the big corporate interests (Google, IBM and VMWare) that drove this effort forward, but that really doesn’t speak to my intended audience. It also came off as acerbic, which I am fine with in regards to corporations and their CEOs, it does not do justice to all the engineers that slaved over the Kubernetes project. So I deleted those posts. Now, I just want to start, provoke, prod, or push a dialog forward, because if you just follow the tech blogs, you would think we should be running a Kubernetes cluster on our lawn mowers.

Even if the blog is not about simply "attacking the big corporate interests" anymore, the language clearly puts forward the post's agenda.

In my opinion, the post should have a different title, one that is clearly indicating the author's intention to highlight the negative things about Kubernetes and the documentary.


> It also makes me think if OSS is created solely to promote profit margins, is that really good OSS, or just a tactic to wrap strategy in a thin veneer of altruism?

Is that not a false dichotomy?

Kubernetes could promote profit margins for Google whilst simultaneously being good for OSS.

Sure, if Kubernetes was touted purely as being about what's good for OSS, then that would be disingenuous. But throughout the documentary they allude to Google wanting to look after its bottom line.

So for me personally, I think that the quoted passage above (and several other parts of the write-up) are a tad biased.

Interesting read nonetheless.


> Kubernetes could promote profit margins for Google whilst simultaneously being good for OSS.

Yup, a typical "commoditize complements" dynamic. Google/Azure's cloud services do not implement AWS's existing API's and ways to interact with the platform, so they address this by making an alternative available.


Yup 100%.

Every couple of years I re-read Spolsky’s excellent article on this dynamic.

For those that are curious: https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/



It's just a very bad and uneducated stance from some person.

I'm running a small k8s instance at home, one for a small startup, and a big one at my job.

Abstraction of VMs is a real benefit: Have you ever had to restart a VM because of some security issue? Yes? Were you worried whether your server would come up again?

With k8s, you know that 1. it's cloud native to a certain extent; it will come up again because it came up before, and 2. you have more nodes available, either to surge onto or because you have more than just one node running.

Your pod will be scheduled away from your node, thats it.

you have a very stable and smart abstraction layer for sooo many features you get as soon as you configure them ONCE centrally:

- LoadBalancing

- certificate management

- Volume abstraction -> making snapshots from your PV? yes!

- Rollout strategies

- health checks (readiness and liveness probes; see the sketch after this list)

- declarative style (set up a Prometheus and every service can be auto-scraped, thanks to convention over configuration)

- Certified opensource abstraction layer! (get yourself a certified k8s distribution and stop worrying about vendor lock in)

- Unified setup for plenty of apps (monitoring, logging, app store, tracing, storage systems, IAM, etc.). We had deb and rpm and whatnot before; now you have a Helm chart for a certified k8s platform

- Already quite small -> there is k3s, and Ubuntu supports it too with not that much overhead

- IaC as first class citizen. Due to k8s being declarative, IaC is much easier than it was before.

- FOSS

- Central easy policy implementation and management. Write your central policies, allow your teams to manage their own namespace and make sure to allow only certain registries etc.

- ArgoCD / GitOps (a dream come true srsly!)
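
As a sketch of the health-check item above (the endpoint paths and port are hypothetical):

    containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      readinessProbe:
        httpGet:
          path: /healthz/ready              # hypothetical readiness endpoint
          port: 8080
        periodSeconds: 5
      livenessProbe:
        httpGet:
          path: /healthz/live               # hypothetical liveness endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10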

I can't overstate how much I love k8s and how much better it is than everything I have seen before. This is the main reason I'm even spending the time writing here, because that 'critical review' is just utter bullshit.

Did we have similar things before, in some form? Yes. So what's new with k8s? K8s unites efforts across companies and just drives this further. For me, k8s is the winner of this race which happened in parallel (Mesos, Docker, Nomad, etc.).


Agreed, I have a small 3 node cluster at home and I use all of those things you listed. I had to dive very deep in the details and learn a ton of new things to get it right, and I had all the time I wanted because it was just for fun and learning. It's almost like having my open source self-hosted AWS (in terms of abstraction from infra, not in reliability)

Would I host any of my critical side projects on my cluster? Probably not. Kubernetes was made with large organizations (google made it after all) in mind. As a solo developer, it's better for me to host my apps on a VM and move to AWS/Azure/GCP if I need to scale.


I am sorry you feel that way. The point of the article was not really to address the feature set of Kubernetes (which I also have issues with); it was really about the sales pitch being delivered by Google.

I have A LOT of issues with the things you posted above, and I hope to address them in future articles. Stay tuned for more, and thanks for reading.


I still don't get your motivation on writing your criticism.

What is your end goal? Getting people not to like k8s? Because you don't like to work with it?

To push people away from k8s?

How do you add value to the current infrastructure/platform ecosystem by 'hating' on it without providing something different?

Of course companies present this k8s story as a successful thing. Why would that documentary be negative?

And while you have 'A LOT of issues with the things you posted above', just to be clear: for me and a lot of other people who like Kubernetes, it solves real problems, it's a great choice, and there are of course things which need to be optimized. But if your next blog post only rants about it, I'm not seeing any value you really add to the ecosystem.

For me, I have never seen anything like Kubernetes in the last 12 years. I can get certified k8s from many companies in many different forms (GKE, AKS, AWS, DigitalOcean, Rancher, RKE2, k3s, Minikube, MicroK8s). ArgoCD is a dream come true.

Can you do it differently with other tools? Sure. Did we ever have something like k8s before? No. We never had that holistic a view on infrastructure in such a FOSS project.

Again, what do you want to achieve? A real discussion on specific issues, or just hating on something? Or do you have the feeling that the blog posts written about k8s are too one-sided?


> I still don't get your motivation on writing your criticism.

OP is an AWS consultant. Kubernetes is/was designed to make his skillset irrelevant. And he alludes to it in several places.

This is nothing but a poorly researched hit piece on K8s completely detached from reality


I think the emperor is wearing no clothes. I want to move that discussion forward. I feel it is inevitable.


K8s doesn't solve problems which haven't been solved before. It doesn't do any particular magic in itself. The handful of things kubernetes does, are easy to explain but the impact is big nonetheless.

It is trustworthy because it is FOSS, certified and lots of companies use it because of this.

Let's take Java vs. PHP: PHP is developed by one group of people. That's it. There was Facebook's HHVM PHP alternative, which then became something independent of PHP. Quite frustrating if you were hoping that Facebook would give back to the community.

Then take Java: you have a spec, you have a reference implementation, and then you have validated alternatives to it. At least you had this for a long time in the Java EE area, and with the Oracle support situation you also now have independent JVMs. This makes Java, in my opinion, better. This makes it a great platform, easy to migrate out of one ecosystem, and it prevents vendor lock-in.

Nomad is from HashiCorp. You have Mesos, which works well too. But no normal cloud provider provides Nomad or Mesos as a service. They provide their own thing: App Engine, Heroku, etc.

Kubernetes broke through this. Lots of smaller cloud providers provide managed Kubernetes. You can see Kubernetes here as the universal App Engine if you like. Google provides GKE Autopilot, their managed Kubernetes service which abstracts k8s away even further. This interface that k8s provides allows you to run your k8s-based workloads at home, on-prem, in private, in any other cloud provider AND on Google.

Instead of vendor lock-in, it switches the operating model of those companies: they can't lock you in as easily as before, so they need to make the best offering for it. It shifts the mental model and the level of competition to a more consumer/customer-focused level.

It is a very similar mental-model switch to what Microsoft did with Linux: instead of hating on Linux, they embrace it now and incorporate it. I never considered Windows a good developer OS, just because of the missing shell support, the required workarounds, and the non-native CLI feeling it gave you. Now I can use WSL2 and it becomes a real option.

For me, k8s is THE FOSS infrastructure abstraction layer. Protected and aligned through the CNCF and certification process.

Btw, the CNCF is part of the Linux Foundation.


Hi, you are getting into details outside the scope of this article. I want to address your points, but in an article; then we can link it on HN and discuss it there.

The CNCF is an entirely different beast... which I have already started writing about. It is the Mos Eisley of open source. I am just kidding, it is not that bad.

You asked why I was writing this, and I told you why.


The bulk of this criticism seems to rely on the author's understanding that somehow Google App Engine and AWS were competitors before Google seriously realised that AWS was a high-margin business that was bankrolling all of Amazon.

I remember those days, and Google App Engine was trying to compete with Heroku.

Google is also known to exist in markets in the form of 20% projects and not put serious muscle behind those efforts: take Orkut vs Facebook as an example, versus the Google+ effort in 2010-11 when Facebook seemed like it was going to eat the world.

The documentary's narrative seems more accurate.


Google released VMs in 2013, the year of this documentary, which means they were working on it for some time prior to this discussion.


The tipping point was NASA dropping out of openstack to sign a contract with AWS.

You should checkout Azure presentations from 2012 to find out how nobody saw the "cloud" coming.

Kubernetes was a very effective strategy to make AWS knowledge irrelevant by providing a layer on top / an alternative interface, and it succeeded. Obviously AWS consultants would hate it for that.


I agree with the take, but I don't think it's a bad thing. Sometimes you want to abstract away the cloud provider and you are willing to pay the price for that abstraction. Sometimes it's easier/simpler to duplicate a bit more know-how and code.

In terms of consumers Kubernetes may have saved the cloud from becoming an AWS monopoly the way the PC got coupled to Microsoft Windows and Office.

Previously every cloud provider, no matter how small, had their copy of EC2, IAM, VPC, S3, ELB and such, but struggled to copy the dozens of other commonly used services. It is kind of like the saying that everyone uses 20% of Excel's features, but for each user it is a different 20%, and you want to keep your options open. Or you are fine with running Linux, but you need this one app that doesn't work on it.

Now all the minor clouds need to do to catch up is add managed Kubernetes, and they are competitive with AWS. And vendors like VMWare can also hop on the bandwagon without having to deal with AWS.


> Now all the minor clouds need to do to catch up is add managed Kubernetes, and they are competitive with AWS.

On one axis, perhaps. This is patently absurd though.

"Minor" clouds still need a hypervisor and control plane, still need IAM, still need VPC-style networking, and still need load balancers and object storage to be technically competitive with AWS - just as a starting point.

All Kubernetes means here is that there is a common API style for these things.


As a freelancer focusing on k8s, with quite a few clients running OpenShift on-prem or outside of cloud providers, I'd say his analysis of Red Hat's need for OpenShift shows he does not understand Red Hat's biggest customers.

They run OpenShift because they want Kubernetes, with its organisational advantages, on-prem, while having the support they're used to. With the exception of Azure, none of the cloud providers can offer this.

Red Hat saw this coming and understood that if k8s became big in the cloud space, someone needed to address that enterprisey market. Getting there first, AND at the same time putting a second big name behind this new tech to give it more viability, also gave them more push.

But then again, the rest of the article showed the author understands very little of Kubernetes at all.


This is a personal message and off-topic for HN, but I didn't see any contact info in your profile and decided it was worth a public comment. There's a chance I'll need a freelancer to help with some (IMO fun) work that needs to be done (company info in profile). If you're interested, we should talk more. My public-ish email address is in my HN profile, although I get a ton of email there so if you email me please reply to this comment and let me know so I can look for it. Alternatively, Keybase.io (also in profile) is a good way to contact me if you already have an account (or are willing to create one).


https://www.infoworld.com/article/2626313/why-red-hat-should...

This was 2010, so I think I may understand better than you think.


Except you don't.

OpenShift started as a PaaS that ran on... AWS.


Disclaimer: Former Red Hat employee working on OpenShift

Correct, OpenShift has been around longer than k8s. Red Hat identified that k8s was a remarkable platform for platforms, and rewrote OpenShift on top of it.

OpenShift is good on clouds, but the real shine is on-prem. For anybody with on-prem hardware, OpenShift gives you an abstraction that you can use to make it indistinguishable to your users where something is running. This means you can go cloud when scaling quickly is required, and you can build the foundation on-prem where you save a ton of money. The apps won't have to change at all (as long as they're using the OpenShift abstractions rather than say an EBS storage operator that requires AWS).

Considering K8s (and/or OpenShift) only in a cloud context is a huge error that will lead one to completely miss the "why" behind why they're so important.


I'm surprised that neither the documentary nor the review gets into the legacy of OpenStack. I may be biased, but it seems to me that a huge amount of the success of kubernetes is directly attributable to OpenStack.

First, OpenStack paved the way for a bunch of companies to invest real money in working together to compete with AWS. Second, there was massive turnover in ~2013 in open source contributors from OpenStack to Kubernetes. I wouldn't be surprised if a good 50% of the kubernetes community was inherited directly from OpenStack.


I suspect the documentary might not have wanted to trash-talk OpenStack.

Let's be honest, a non-trivial push for k8s was just how badly OpenStack sucked.


I regularly listen to the Kubernetes podcast run by Google employees. Over time and through various guest interviews I've gotten the impression that some (most?) of those who created Borg/Kubernetes did basically all of the work that Docker did creating containers, but are upset that Docker gets all the recognition. I can't quite articulate it, but it seems like the Googlers are super judgmental and want to thumb their noses at Docker.


Most of the heavy lifting to implement containers was done by the kernel developer community and projects like LXC.

Yet, docker and kubernetes are very much hype/marketing driven and took away a lot of recognition from the kernel developers.


A non trivial chunk of kernel work for containers was upstreaming of Google kernel patches.

To the point of a throwdown on LKML when Lennart tried to "lay claim" to cgroups v2 as "owned solely by systemd", and got a rather... Funny response.


cgroups came later on after tons of work on virtualization, isolation and sandboxing was done.

And only because google really needed it for internal use.

Many other companies, orgs and individual contributors contributed bigger chunks of work but you don't hear people praising them.


> kernel developer community

This effort was funded in part by Google.


funded != did the work


The developers who implemented cgroups worked at Google and upstreamed them.


The kernel is developed by the Linux Foundation and a variety of companies that have a vested interest in Linux, like Intel, AMD, SUSE, Red Hat, Google, and Canonical. It's not a team of volunteers like it's popularly portrayed.

The foundation itself is also primarily funded by such companies.


> It's not a team of volunteers like it's popularly portrayed.

This is not correct. There is such thing as a *paid volunteer*.

Additionally, a large percentage of contributors are unaffiliated, independent or part of small companies:

https://www.cnet.com/tech/services-and-software/paid-develop...


> I feel like we were all talking about microservices, and chaos engineering, I personally wanted decent service discovery (I still do).

One day a startup will bring CORBA Name Service or Web Service Repository, in yaml, and it will be great.


   The K8S took my server away
   They took her away
   Away from me
[Apologies to The Ramones]



