Completely agree on just using a server instead of the various lambda-style systems. All modern languages have great web/app frameworks that make it incredibly easy to build, whether it's a single endpoint or a giant app. The ability to just include whatever code you need and deploy it atomically is massively underrated. Also agreed on scale: servers are fast and cheap, and the savings from Lambda rarely pay off given the extra effort. When you do scale, Lambda becomes more expensive anyway.
I do recommend using Docker though. Containers are more portable and easier to deploy and replace on a running server, along with the ability to run multiple instances, mount volumes, set up ports and local networks, and eventually migrate to something like ECS/K8S if you really need it.
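For concreteness, replacing a running container on a single server can look something like this (a sketch only; the image name `myapp`, port 3000, and volume `app-data` are illustrative, not from the thread):

```
docker build -t myapp:v2 .
docker stop myapp && docker rm myapp   # retire the old instance
docker run -d --name myapp \
  -p 80:3000 \                         # host port -> app port
  -v app-data:/app/data \              # named volume survives replacement
  --restart=always myapp:v2
```

The named volume and `--restart=always` are what make "replace on a running server" painless: data outlives the container, and the daemon brings the app back after reboots.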
Extended effort to push up a lambda function and not have to worry about automating deployment and configuration and patching and monitoring and upgrading and failing over and, yes, scaling? Maybe it's just me, but I'd rather not see the backend of a server ever again for anything other than development.
That's why I recommend containers, because automating deployment and config would be the same regardless of destination, right? Monitoring also seems to be the same if you're using built-in cloud stuff.
As for scale, I think that's massively overstated. Servers are really fast and most apps aren't anywhere near capacity. Even a $10 DigitalOcean server is plenty of power, and there are no cold starts. Even YC's advice is to focus on features and dev speed, and worry about scaling when it truly becomes an issue.
But a lambda is just a container that you don’t have to manage.
I don’t get this sort of anti-serverless sentiment. If you have even one good SRE, then it’s an absolute breeze. Writing a lambda function is writing business logic, and almost nothing else. I can’t see how you could possibly do any better in terms of development velocity. I don’t get this ‘testing functions is hard’ trope either. Writing unit tests that run locally is easy.
Not really, aside from the other AWS services you consume (KMS, parameter store...). A cloud function takes an event, executes your business logic, and returns a response. The structure of the event can change slightly, but they’re remarkably portable, and I’ve moved them before. If you’re doing it right, most of your API gateway config will be an OpenAPI spec, and equally portable.
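To sketch that portability point: the business logic can be a plain function, with a thin adapter for the provider's event shape. The names here are illustrative, not from any real project.

```javascript
// Business logic: knows nothing about Lambda, Cloud Functions, or HTTP.
function greetUser(name) {
  if (!name) throw new Error("name is required");
  return { message: `Hello, ${name}` };
}

// Thin adapter: maps the provider's event shape onto the logic.
// This is the only part that changes if you move providers.
function handler(event) {
  const body = JSON.parse(event.body || "{}");
  const result = greetUser(body.name);
  return { statusCode: 200, body: JSON.stringify(result) };
}
```

Moving providers then means rewriting a few lines of adapter, not the logic itself.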
> it is more expensive if you need to scale
This is context specific.
> it has higher latency
Again context specific, and likely not something actually worth caring about.
> it is harder to test locally
This is one I simply cannot understand. You can run your functions locally, they’re just regular code. I’ve never had a problem testing my functions locally. If anything I’d say it’s easier.
There are upsides and downsides to any architecture design. Serverless models have their downsides, but these anti-serverless discussions tend to miss what the downsides actually are, and kinda strawman a bunch of things that aren’t really downsides.
I’d say the most common downside with serverless is that the persistence layer is immature. If you want to use a document database, it’s great; if you want to use a relational one, you might have to make a few design compromises. That said, this is something that’s improving pretty quickly.
Focus on features and dev speed by managing a container mesh, the underlying server, system libraries, patching for security, handling a potential spike, solving each problem with your architecture as if it were novel, etc.?
There are times to go serverless and times to avoid it, but with what you're saying you want to optimize for, serverless is the answer.
I guess you can make either one as complicated as you want, but surely just putting a container on a server is rather simple? There's no mesh for a single server, and is a potential spike a realistic concern?
I get your point but I think with products like Knative/Cloud Run everything will converge on a lambda-for-containers model eventually which combines the best of both worlds.
There's still a mesh. Containers need to know which containers to communicate with and across which ports.
If putting a container on a server at scale were simple, then services like Lambda would never have been popularized, and orchestration frameworks like Kubernetes wouldn't exist.
I don't think popularity means it's the best option. That's the point of the blog post.
I'm also the first to recommend Kubernetes as soon as you need it, as it's a solid platform, but most apps stay small and don't need all that upfront complexity. However, I stand by Knative being the best of both; have you had a chance to look at it?
However, once you are running enough containers, your server bill becomes something you can't ignore...
Making enough servers with enough memory capacity to keep all our containers running, with failover support, was $400/month, and that was just 60 containers (an easy number to hit in microservices architectures).
And you're right, we never got near server capacity by CPU usage, completely agree there; we ran out of memory keeping the containers resident and ready for use.
> not have to worry about automating deployment
How do you validate your code before deploying to production? If you test in environments besides production how do you manage configuration settings for the different environments (i.e. a db connection string)? How do you avoid patching? Almost any code I've written takes dependencies on 3rd party libraries and those will often have security vulnerabilities (usually some time after I wrote the code).
I mean automating infrastructure, servers, etc. I.e. the days of Puppet and Ansible are behind me. And patching a Lambda is as simple as pushing up a new version: no downtime, no restarting services, and no patching (or building) of the OS or Docker containers. Configuration is as simple as environment vars, or SSM for secrets.
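The environment-variable approach also answers the earlier question about per-environment config like a DB connection string: each stage sets its own vars, and the code just reads them. A sketch (the variable names `STAGE` and `DB_CONN` and the fallback values are illustrative):

```javascript
// Each deployment stage (dev/staging/prod) sets its own environment
// variables; the code is identical everywhere.
function loadConfig(env) {
  return {
    stage: env.STAGE || "dev",
    // Fallback is only a local-development convenience; real stages
    // would set DB_CONN (or pull it from SSM for secrets).
    dbConn: env.DB_CONN || "postgres://localhost:5432/dev",
  };
}

const config = loadConfig(process.env);
```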
How coupled is your code to your host? You should always design to lift-and-shift between these as easily as possible. That way you keep the benefits and aren't locked in.
Cloud functions products have been incredibly useful trade off for me. Whenever I want to process a lot of small files (read: tens or hundreds of thousands), I'll wrap some JavaScript into a function and it'll magically horizontally scale a function for a huge amount of tasks at the same time.
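The per-file function in that pattern can be tiny, because the platform handles the fan-out: one invocation per file, scaled horizontally by the provider. A hypothetical sketch (the event fields `name` and `contents` are illustrative, not a real provider's schema):

```javascript
// Hypothetical per-file function: the platform invokes one instance per
// object, so concurrency is the provider's problem, not ours.
function processFile(event) {
  const name = event.name || "";                 // e.g. a storage object key
  const lines = (event.contents || "").split("\n").filter(Boolean);
  return { file: name, lineCount: lines.length };
}
```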
Yes, they do make it incredibly easy for you to run Node. You know what else they do? They make it completely insecure by default.
If you don't know what I mean by that, then you should probably go with a serverless architecture instead of whatever your company has going right now.
Can you please explain what you do mean by that? Are you talking about node/js apps being insecure by default? I guess that's a fault of the specific app framework, rather than an inherent issue with running on a server.
Our company runs services written in C# running on .NET Core in containers. It's fast, secure, and makes development simple.
It's that if you're running Node on the server side, all packages/dependencies included in your backend have full access to your network and filesystem. Even if it's just a CSS styles library, when it gets updated there are no permissions stopping it from grabbing files or monitoring the network.
Instead of wrapping security layers around it ourselves with Docker, SELinux configs, etc., it's safer to let GCP or AWS filter that out for you, because they're likely to have much better security.
Serverless (there are still servers/containers) just means that you don't touch the devops and scaling. You can still have your DB and APIs separately in order to be cost effective.
In your case your servers aren't running Node on the backend, so you don't have this vulnerability.