Hacker News | new | past | comments | ask | show | jobs | submit | csweichel's comments | login

You run the agent in a tightly controlled remote environment / VM designed for this use-case (at least the SSH/command piece).

Ona (https://ona.com) is a great choice.

(full disclosure: Ona co-founder here)


Gitpod (https://www.gitpod.io) is a great option here. It aims to remove the ops burden, provide a great developer experience and give you tools to manage your spend (eg automatic timeouts, suspend/resume, standardized creation of environments).

Full disclosure: I’m one of Gitpod’s co-founders.


OP here. There definitely is a place for running things on your local machine. Exactly as you say: one can get a great deal of consistency using VMs.

One of the benefits of moving away from Kubernetes to a runner-based architecture is that we can now seamlessly support cloud-based and local environments (https://www.gitpod.io/blog/introducing-gitpod-desktop).

What's really nice about this is that with this kind of integration there's very little difference in setting up a dev env in the cloud or locally. The behaviour and qualities of those environments can differ vastly though (network bandwidth, latency, GPU, RAM, CPUs, ARM/x86).


> The behaviour and qualities of those environments can differ vastly though (network bandwidth, latency, GPU, RAM, CPUs, ARM/x86).

For example, when you're running on your local machine you've actually got the amount of RAM and CPU advertised :)


"Hm, why does my Go service on a pod with 2.2 CPUs think it has 6k? Oh, it thinks it has the whole cluster. Nice; that's why scheduling has been an issue."
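The mismatch is easy to see from inside a pod: the kernel interfaces that most runtimes (including Go's, prior to its cgroup-aware GOMAXPROCS default) consult report the node's full core count, while the actual CPU quota lives in the cgroup files. A minimal sketch, assuming a cgroup v2 container (the paths differ on cgroup v1):

```shell
# What the runtime typically sees: the node's logical CPU count.
nproc

# What the scheduler actually grants: the cgroup v2 CPU quota.
# A value like "220000 100000" means 220ms of CPU per 100ms period, i.e. 2.2 CPUs.
cat /sys/fs/cgroup/cpu.max 2>/dev/null || echo "no cgroup v2 cpu.max here"
```

Libraries like uber-go/automaxprocs exist specifically to bridge this gap for Go services.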


Something that's not clear from the post is whether you're running these environments on your own hardware, or layering things on top of something from a cloud provider (AWS, etc)?


Hi Christian. We just deployed Gitpod EKS at our company in NY. Can we get some details on the replacement architecture? I’m sure it’s great but the devil is always in the details.


Need middleware libs that react to eBPF data and signal app code to scale up/down forks in their own memory VM, like V8

Kubernetes is another mess of userspace ops tools. Userspace is for composable UI not backend. Kube and Chef and all those other ops tools are backend functionality being used like UI by leet haxxors


OP here. The Kubernetes community has been fantastic at evolving the platform, and we've greatly enjoyed being in the middle of it. Indeed, many of the things we had to build next to Kubernetes have now become part of k8s itself.

Still, some of the core challenges remain:

- The flexibility Kubernetes affords makes it hard to build and distribute a product with such specific requirements across the broad swath of differently set up Kubernetes installations. Managed Kubernetes services help, but come with their own restrictions (e.g. kernel versions on GKE).
- State handling and storage remain unsolved. PVCs are not reliable enough, are subject to a lot of variance (see the point above), and behave very differently depending on the backing storage. Local disks (which we use to this day) make workspace startup and backup expensive from a resource perspective and hard to predict timing-wise.
- User namespaces have come a long way in Kubernetes, but by themselves are not enough. /proc is still masked, and FUSE is still not usable.
- Startup times, specifically container pulls and backup restoration, are hard to optimize because they depend on a lot of factors outside of our control (image homogeneity, cluster configuration).
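Two of those limitations can be checked directly from inside a container; a rough sketch (behaviour depends on the container runtime and its security profile, so treat the output as illustrative):

```shell
# /proc masking: runtimes typically mask or zero-out sensitive entries
# like /proc/kcore inside containers.
ls -l /proc/kcore 2>/dev/null || echo "/proc/kcore masked or absent"

# FUSE: /dev/fuse is usually not exposed to a pod unless explicitly granted,
# which is what makes FUSE filesystems unusable by default.
ls -l /dev/fuse 2>/dev/null || echo "no /dev/fuse: FUSE not usable"
```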

Fundamentally, Kubernetes simply isn't the right choice here. It's possible to make it work, but at some point the ROI of running on Kubernetes isn't there.


Thanks!

AFAICT, a lot of that comes down to storage abstractions, which I'll be curious to see the answer on! The mismatch between pinned local storage and cloud-native scheduling is frustrating.

I sense another big chunk is the fast, secure start problem that Firecracker (noted in the blog post) solves but k8s is not currently equipped for. Our team has been puzzling over that one for a while, and part of our guess is incentives. It's been 5+ years since Firecracker came out, so it's likewise been frustrating to watch.


no plans.

(but then, what else should I say :D)


Not sure this statement holds in this general form - it's a very good idea to be cautious about what you execute, but curl | sh is not much different from running npx, for example. It's difficult to know what will actually be executed on your machine, but at least shell scripts can be inspected/audited (as compared to packages with 100 dependencies).

That said, prior to piping anything to a shell it's advisable to inspect what is about to be executed. That's why the lama.sh script is super simple, as is the code of the web server it downloads and executes.
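The inspect-before-you-run step amounts to splitting the pipe into two commands; a minimal sketch (the URL is a placeholder, not the real script location):

```shell
# Instead of: curl -fsSL https://example.com/install.sh | sh
# download first, read it, then run it.
curl -fsSL https://example.com/install.sh -o install.sh  # placeholder URL
less install.sh                                          # inspect what will actually run
sh install.sh
```

This also sidesteps the partial-download failure mode of piping straight to a shell, since the script is fully on disk before anything executes.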


Just in case you don't have Python :)

I got a tad fed up with having to remember the one-liner for Python, Node and Ruby. This one works as long as you have bash and curl.
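For reference, these are the kinds of one-liners being replaced - the usual static file servers for the current directory (the Node variant assumes the third-party http-server package):

```shell
# Python 3
# python3 -m http.server 8000

# Node (fetches the http-server package on first use)
# npx http-server -p 8000

# Ruby (uses the un library shipped with Ruby)
# ruby -run -e httpd . -p 8000
```

Each serves the working directory on port 8000; the annoyance is simply that the incantation differs per runtime.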


That's exactly right.

However, it does the downloading/starting in a really convenient manner. I wanted something I could just quickly type into a terminal without having to browse some GitHub release page.

