
It appears to me that people don't really understand kubernetes here.

Kubernetes does not mean microservices, it does not mean containerization and isolation, hell, it doesn't even mean service discovery most of the time.

The smallest default kubernetes installation provides you with two things: the kubelet (the node agent) and the kube-apiserver.

What do these two allow you to do? The kube-apiserver provides an API to interact with kubelet instances by telling them what to run via manifests.

That's all, that's really all kubernetes is: a dumb agent with some default bootstrap behavior that lets you interact with a backend database (usually etcd).
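
To give a feel for what "telling them what to run via manifests" means, here's a minimal sketch of a Pod manifest you'd hand to the API server. The name and image are just placeholders:

  # Hypothetical example: a bare Pod manifest handed to the kube-apiserver;
  # the kubelet on whichever node it lands on turns it into a running workload.
  apiVersion: v1
  kind: Pod
  metadata:
    name: hello               # made-up name
  spec:
    containers:
      - name: hello
        image: nginx:alpine   # any image works; purely an illustration
        ports:
          - containerPort: 80

Something like kubectl apply -f hello.yaml is all it takes to hand that to the API server.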

Now, let's get into the default kubernetes extensions:

- CoreDNS - linking service names to service addresses.

- kube-proxy - routing traffic from the host to services.

- CNI (many options) - networking between pods and services.
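
To show how those pieces fit together: a Service object is what CoreDNS makes resolvable by name and what kube-proxy routes traffic to. A minimal sketch, with made-up names:

  # Hypothetical Service: CoreDNS makes it resolvable as
  # my-api.default.svc.cluster.local, and kube-proxy routes traffic hitting
  # that name/virtual IP to whichever pods match the selector.
  apiVersion: v1
  kind: Service
  metadata:
    name: my-api             # made-up name
    namespace: default
  spec:
    selector:
      app: my-api            # pods labelled app=my-api receive the traffic
    ports:
      - port: 80
        targetPort: 8080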

After that, kubernetes is whatever you want it to be. It can be something you use to spawn a few test databases. Deploy an entire production-certified clustered database. Run a full distributed fs with automatic device discovery. Deploy backend services if you want to take advantage of service discovery, autoscaling and networking. Or it can be something as small as deploying monitoring (such as node-exporter) to every instance.

And as a bonus, it allows you to do it from the comfort of your own local computer.
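
To make the node-exporter case concrete: the usual pattern for "run one of these on every instance" is a DaemonSet. This is only a rough sketch; the namespace, labels and image tag are assumptions:

  # Hypothetical DaemonSet: schedules one node-exporter pod onto every node.
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: node-exporter
    namespace: monitoring              # assumed namespace
  spec:
    selector:
      matchLabels:
        app: node-exporter
    template:
      metadata:
        labels:
          app: node-exporter
      spec:
        hostNetwork: true              # expose metrics on the node's own address
        containers:
          - name: node-exporter
            image: quay.io/prometheus/node-exporter:v1.8.1   # example tag
            ports:
              - containerPort: 9100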

This article says that Figma migrated the necessary services to kubernetes to improve developer experience, and it clearly states that things that don't need to be on kubernetes aren't. For all we know they still run their services on raw instances and only use kubernetes for their storage and databases. And to add to all of that, kubernetes doesn't care where it runs, which is a great way to increase competition between cloud providers, lowering costs for everyone.



Kubernetes absolutely means containerisation in practice. There is no other supported way of doing things with it. And a "fake" abstraction, where you pretend something is generic but it actually isn't, is one of the easiest ways to overcomplicate anything.


If you disable the security policy and remount to pid 1, you escape any encapsulation. Or you can use a k8s implementation that just extracts the image and runs it.

But that's assuming you're running containerd or something similar. There are dozens of k8s implementations, some as light as only providing you with manifests; external schedulers (called controllers) then subscribe to those manifests and execute on them.


Mainly I don't want the overhead of building and managing container images at all. My apps all run the same way and I have a fleet of servers set up to run them; something that can manage distributing apps onto servers, routing with load balancers, etc. would be cool, just without the docker/container piece. This was a main selling point for Nomad, so I'm pretty sure there's no way to get k8s to do that, or at least it's not supported/first-class.


Is it very common to use it without containers?


I run it that way on my Windows machines: the image is downloaded and executed directly.

This ties into a funny example: k8s manages my VMs via kubevirt, and those then have a minimal k8s version installed that runs my jobs. The implementation simply mounts the extracted image to a virtual fs, executes it there, then deletes the file system.
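
For anyone curious, a kubevirt VirtualMachine is just another manifest, roughly like this. The name, disk image and sizes are made up, and the exact fields depend on the kubevirt version:

  # Hypothetical kubevirt VirtualMachine, managed like any other k8s resource.
  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: worker-vm                    # made-up name
  spec:
    running: true
    template:
      spec:
        domain:
          resources:
            requests:
              memory: 2Gi
          devices:
            disks:
              - name: rootdisk
                disk:
                  bus: virtio
        volumes:
          - name: rootdisk
            containerDisk:
              image: quay.io/containerdisks/fedora:latest   # example disk image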


You can use VMs instead. I don't think the distinction matters very much though.


It is impossible.


it's possible with virtlet or kubevirt


To be fair, at the small scales you're talking about (maybe 1-2 machines), systemd does the same stuff, just better and with less complexity. And there are various much simpler ways to automate your deployments.

If you don't have a distributed system then personally I think k8s makes no sense.


How do you deploy to systemd? How do you run a container in systemd? Now you need a second and third system, perhaps Ansible and docker-compose, which is simple on the surface but quickly grows in complexity, with home-made glue to keep all the loose components together.

I agree that for a handful of pet servers, for a team with more existing Linux experience than k8s experience, this is a better starting point because of the shorter learning curve. Let's just not kid ourselves that the end product has any less complexity; it's only a different skill set.


> How do you deploy to systemd?

... write a unit file and put it in your CI?

> How do you run a container in systemd?

systemd-nspawn config, which you put in your aforementioned unit files.
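
For illustration, something along these lines is the whole "deployment config"; the unit name, machine directory and flags are assumptions about how the app is packaged:

  # /etc/systemd/system/myapp.service  (hypothetical unit name and paths)
  [Unit]
  Description=myapp in a systemd-nspawn container
  After=network.target

  [Service]
  # Boot a container from an unpacked root filesystem under /var/lib/machines.
  ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --directory=/var/lib/machines/myapp
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target

CI then just copies the unit over, runs systemctl daemon-reload and systemctl restart myapp.service.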

> perhaps Ansible and docker-compose

Definitely don't need docker compose and, in my opinion, don't need Ansible. It's trivial to deploy systemd config and it's trivial to automate.

I'm not kidding myself: the complexity is significantly lower. But it only works if you're deploying to one or two machines. This won't make a distributed system, and I acknowledge that.

Not to mention I'm only scratching the surface of what systemd can do here. Containers and automating services are just part of it. There's also remote logging, monitoring and email alerts, periodic health checks.
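
A periodic health check, for instance, is just a oneshot service plus a timer. The unit names, URL and OnFailure handler below are made up:

  # healthcheck.service (hypothetical)
  [Unit]
  Description=Periodic app health check
  OnFailure=status-email@%n.service    # a template unit you would write to send the alert

  [Service]
  Type=oneshot
  ExecStart=/usr/bin/curl --fail --silent http://localhost:8080/healthz

  # healthcheck.timer (hypothetical)
  [Unit]
  Description=Run the health check every minute

  [Timer]
  OnCalendar=minutely
  Persistent=true

  [Install]
  WantedBy=timers.target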


I did say that with a few machines it can be overkill, but once you have more than a couple of 2-3 machine setups, or 6+ machines, it gets overwhelming really fast. Kubernetes in its smallest form is around 50MiB of memory and 0.1 CPU.



