God damn, I love this comment. We manage 8 Linux servers total for an application that is lightly used by our customers but heavily used by our internal workers that constantly reach out to internet services and check for things. Everything we've done has been in Go and bash, and our deployments are easy and take seconds.
I'm tired of seeing so many posts about Docker, Kubernetes, Terraform, and even so many of Amazon's service offerings. Guess what? All of these tools are abstractions, and it's insane to me that so many people would rather learn to work with these abstractions than learn some Linux basics, which would let them solve the same problems with much less overhead.
We have two web servers, four workers, and two database servers. All of our code is written in Go, so our dependencies are limited, and that has been my reason for not needing Docker. However, we do use Docker locally to spin up our dev environment, so developers don't need MySQL installed on their machines. The primary benefit I see in Docker is dependency management, and when your dependencies are minimal, it isn't worth the overhead.
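For what it's worth, that local dev setup doesn't need to be more than a tiny Compose file. A minimal sketch, where the image tag, database name, and credentials are illustrative assumptions and not our actual config:

```yaml
# docker-compose.yml -- local development only, never used in prod.
services:
  mysql:
    image: mysql:8.0            # assumed version; pin whatever prod runs
    environment:
      MYSQL_ROOT_PASSWORD: devpassword   # throwaway local credential
      MYSQL_DATABASE: app_dev            # hypothetical schema name
    ports:
      - "3306:3306"             # expose to the Go app running on the host
    volumes:
      - mysql-data:/var/lib/mysql        # persist data between restarts

volumes:
  mysql-data:
```

`docker compose up -d` and the Go services on the host connect to `localhost:3306` like they would to the real database servers.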
All of our servers are built from bash scripts that are executed on the server on its first run. The script creates the folder structure, creates user accounts, and sets permissions. If a server needs to be destroyed for whatever reason, it can be rebuilt as new from these scripts in under 10 minutes.
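The shape of such a first-run script is roughly this. The paths and the account name are illustrative assumptions; `APP_ROOT` defaults to `./app` here so the sketch can run unprivileged, where a real one would use something like `/srv/app`:

```shell
#!/usr/bin/env bash
# First-run provisioning sketch: folders, service account, permissions.
set -euo pipefail

APP_USER="${APP_USER:-appsvc}"     # hypothetical service account name
APP_ROOT="${APP_ROOT:-./app}"      # would be e.g. /srv/app in production

# Folder structure for binaries, releases, and logs.
mkdir -p "$APP_ROOT"/bin "$APP_ROOT"/releases "$APP_ROOT"/logs

# Creating the account and chowning need root, so only attempt them as root.
if [[ "$EUID" -eq 0 ]] && command -v useradd >/dev/null; then
  id "$APP_USER" >/dev/null 2>&1 \
    || useradd --system --shell /usr/sbin/nologin "$APP_USER"
  chown -R "$APP_USER:$APP_USER" "$APP_ROOT"
  chmod 750 "$APP_ROOT"
fi
```

Because it's idempotent (`mkdir -p`, the `id` guard), re-running it on an existing box is harmless, which is what makes the "rebuild from scratch in under 10 minutes" property cheap to keep.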
Code deployments are done with scp, rsync, and systemctl to stop and start services remotely. If we were using an interpreted language, where dependencies are a bit trickier to manage in prod, we'd certainly use Docker. But for the time being, this Go / bash ecosystem works for us, is easy to manage, and limits dependencies overall. Debian & Go all the way, baby.
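A deploy like that fits in a few dozen lines of bash. This is a sketch, not our actual script; the host names, service name, and remote paths are made-up assumptions, and it ships a single cross-compiled Go binary:

```shell
#!/usr/bin/env bash
# Deploy sketch: build locally, rsync the binary out, restart via systemctl.
set -euo pipefail

HOSTS=("web1" "web2")              # illustrative host names
BIN="app"                          # hypothetical binary name
REMOTE_PATH="/srv/app/bin"         # assumed install path on each host
SERVICE="app.service"              # assumed systemd unit name
DRY_RUN="${DRY_RUN:-1}"            # default to printing, not deploying

build() {
  # Cross-compile for the Linux servers regardless of the dev machine.
  GOOS=linux GOARCH=amd64 go build -o "$BIN" ./cmd/app
}

deploy_host() {
  local host="$1"
  if [[ "$DRY_RUN" == "1" ]]; then
    echo "would deploy $BIN to $host"
    return
  fi
  # Ship next to the live binary, then swap and restart in one SSH session
  # so the service is only down for the mv + start.
  rsync -az "$BIN" "$host:$REMOTE_PATH/$BIN.new"
  ssh "$host" "sudo systemctl stop $SERVICE \
    && mv $REMOTE_PATH/$BIN.new $REMOTE_PATH/$BIN \
    && sudo systemctl start $SERVICE"
}

main() {
  [[ "$DRY_RUN" == "1" ]] || build
  for h in "${HOSTS[@]}"; do
    deploy_host "$h"
  done
}

main
```

Run with `DRY_RUN=0` to actually build and push. Static binaries are what make this viable: there's nothing to install on the host, so "deploy" really is just copy-and-restart.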
The longer I work in this industry, the more I realize that a bigger hindrance to accomplishing goals is actually overthinking and over-engineering. I see startups with no customers using Kubernetes and Terraform: people building Noah's ark when they've never experienced rain. The pain points in your process will become obvious. Let the process tell you when it's time for these more advanced tools. If you can manage production code and deploy updates easily without them, then don't waste your time. If you don't know any other way, then maybe it's worth spending the time to learn Linux a little better.
I agree with you entirely. Nearly all of the companies I've encountered that are trying to get Kubernetes working have:
1. Made poor language choices that force them to overcomplicate their architecture to scale things up. These are places where a single Go binary would work, but instead you have queues and callbacks and five container images, because at each roadblock they just added some new "thing" to make it work.
2. Failed to automate even the base infrastructure any of this runs on, so instead they throw Kubernetes on top of it and cross their fingers that the underlying hosts will just magically stay up. Every email from AWS about degraded hardware nearly gives them a heart attack.
I guess, overall, I'm sad that instead of trying to make smaller, tighter machine images that boot quickly in order to automate availability, we (the industry) added another layer (containers) that just complicates things more.