
what does Katakate add on top of Kata?


Katakate is built on top of Kata and sets up a stack combining Kubernetes (K3s), Kata, Firecracker, and the devmapper snapshotter for thin-pool provisioning. Combining these tools is highly non-trivial and can be a headache for many, especially for AI engineers who are often more comfortable with Python workflows. The stack gets deployed with an Ansible playbook. Katakate implements a CLI, API, and Python SDK to make it super easy to use. A lot of defense-in-depth settings are also baked in, so you don't need to understand those systems at a low level to get a secure setup.
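To give a feel for the workflow, here's a rough sketch of what spinning up a sandboxed VM through a Python SDK like this could look like (illustrative only; the module and method names below are made up, not the actual API):

    # Illustrative sketch only: `katakate`, `VM.create`, `exec`, and
    # `destroy` are hypothetical stand-ins, not Katakate's actual API.
    import katakate

    # Under the hood this would schedule a Kata/Firecracker microVM via K3s
    vm = katakate.VM.create(image="python:3.12", cpus=1, memory_mb=512)
    result = vm.exec("python -c 'print(2 + 2)'")
    print(result.stdout)  # -> 4
    vm.destroy()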


hey, I work at E2B, anything we can do to improve the setup for you?


I dig E2B, it's a great service and very cost effective. Thanks for all your hard work!


the browser looks like it is based on Chromium indeed


That's because it is.

Just type this into the navigation bar: atlas://extensions


congrats on the launch!

why do I need a specialized platform to deploy MCP instead of just hosting on existing PaaS (Vercel, Railway, Render)?

also if you're not using VMs, how do you isolate per-user servers?


Great questions!

If you want to run your own remote servers (for your product/company), Railway or Render work great (Vercel is a bit more difficult since Lambdas are very expensive if you run them over long periods of time). Metorial targets developers who build their own AI agents and want to connect them to integrations. Put plainly, we do a lot more than run MCP servers: we give you monitoring, observability, handle consumer-facing OAuth, and give you super nice SDKs to integrate MCP servers with your agent.

Regarding the second question, Metorial has three execution modes depending on what the server supports:

1) Docker: the most basic mode, which any MCP server should support. We did some heavy optimizations to get those containers to start as fast as possible, and our hibernation system supports stopping and resuming them while restoring their state.

2) Remote MCP: we connect to remote MCP servers for you, while still giving you the same features and ease of integration you get with any Metorial server (I could go into more detail on how our remote servers are better than standard ones).

3) Servers on our own Lambda-based runtime: not every MCP server supports this execution mode, but it's what really sets us apart. The Lambdas only run for short intervals, while the connection is managed by our gateway. We already have about 100 Lambda-based servers and are working on getting more onto that execution model (rough sketch of the pattern below).
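To make the Lambda mode concrete, the gateway pattern is roughly: persist the session state between invocations so each short-lived Lambda can pick up where the last one left off. A simplified sketch with made-up names, not our actual code:

    # Simplified sketch of mapping a stateful MCP session onto short-lived
    # Lambda invocations. All names here are illustrative, not Metorial's code.
    STATE_STORE: dict[str, dict] = {}  # stand-in for a durable KV store (Redis, DynamoDB, ...)

    def handle_mcp_request(session_id: str, request: dict) -> dict:
        """One Lambda invocation: load state, handle one JSON-RPC message, save state."""
        state = STATE_STORE.setdefault(session_id, {"initialized": False, "calls": 0})

        if request["method"] == "initialize":
            state["initialized"] = True
            result = {"protocolVersion": "2025-03-26", "capabilities": {}}
        elif request["method"] == "tools/call":
            state["calls"] += 1
            result = {"content": [{"type": "text", "text": f"call #{state['calls']}"}]}
        else:
            result = {}

        # Persisting state is what lets the *next* invocation (possibly a cold
        # start on a different worker) resume the same logical session.
        STATE_STORE[session_id] = state
        return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}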

There's a lot about our platform that I haven't covered here, like our stateful MCP proxy, our security model, our scalable SOA, and how we turn OAuth into a single REST API call for our users.

Let me know if you have any additional questions, always happy to talk about MCP and software architecture.


thanks for explaining, especially the runtimes part!

i am currently running Docker MCP containers + MCP Gateway mixed with Remote MCPs in microVMs (a.k.a. sandboxes).

seems to be the most portable setup, so you don't have to worry about dealing with different execution tools like uvx, poetry, bun, npx, or the whole stdio/streamable HTTP conversion.
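for reference, this is roughly what the stdio side looks like with the official MCP python sdk (minimal sketch; the filesystem server command is just an example):

    # Minimal sketch: talking to a stdio MCP server with the official
    # Python SDK. The filesystem server command below is just an example.
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main():
        params = StdioServerParameters(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        )
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print([tool.name for tool in tools.tools])

    asyncio.run(main())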

lambdas sound interesting, esp. if you have figured out a way to make stateful work stateless, but it comes with the downside that you have to maintain all the integrations yourself, plus the environment itself might have compatibility issues. i've seen someone using cloudflare dynamic workers for a similar use case (disco.dev), but they're maintaining all the integrations by hand (or with Claude Code, rather). a more extreme version of this would be writing a custom integration specific to the user by following a very strict prompt.

anyways, i'll look into Metorial as i'm curious about how the portable runtimes work.

i am also maintaining a list of MCP gateways, just added you there as well: https://github.com/e2b-dev/awesome-mcp-gateways

thanks for building this, looking forward to checking it out!


Thanks for sharing and adding us to your list. The point about the lambdas is fair, though we do support other execution modes to combat this. Please let me know if you have any feedback or encounter hiccups :)


Looks awesome :)

We're doing something similar at E2B, we should chat!



Do you think you can make it run in Jupyter as a Kernel?


There are Jupyter kernels for Swift that use the REPL mode of Swift's lldb. I used to work on one, but obviously right now it is not a priority: https://github.com/liuliu/swift-jupyter
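The wrapper-kernel machinery itself is small: you subclass ipykernel's Kernel and pipe code to the REPL process. A minimal sketch (with an echo standing in for the lldb-driven Swift REPL):

    # Minimal Jupyter wrapper kernel: subclass ipykernel's Kernel and implement
    # do_execute. A real Swift kernel would feed `code` to an lldb-driven Swift
    # REPL and stream its output instead of echoing the input back.
    from ipykernel.kernelbase import Kernel

    class SwiftishKernel(Kernel):
        implementation = "swiftish"
        implementation_version = "0.1"
        language = "swift"
        language_version = "5"
        language_info = {"name": "swift", "mimetype": "text/x-swift",
                         "file_extension": ".swift"}
        banner = "Sketch of a Swift wrapper kernel"

        def do_execute(self, code, silent, store_history=True,
                       user_expressions=None, allow_stdin=False):
            if not silent:
                # Here you would evaluate `code` in the Swift REPL instead
                stream_content = {"name": "stdout", "text": f"(would evaluate) {code}"}
                self.send_response(self.iopub_socket, "stream", stream_content)
            return {"status": "ok", "execution_count": self.execution_count,
                    "payload": [], "user_expressions": {}}

    if __name__ == "__main__":
        from ipykernel.kernelapp import IPKernelApp
        IPKernelApp.launch_instance(kernel_class=SwiftishKernel)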


That hasn't really been something we'd been considering, but yeah, I think we probably could. We're primarily using the interpreter to render SwiftUI views, but it supports running arbitrary Swift expressions or statements.


i work on E2B, we are an open-source sandbox runtime used by Perplexity, Manus, and Hugging Face, among others.

check it out: https://e2b.dev
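getting started is a few lines with the python sdk (per the v1 sdk docs; check the docs for the current shape):

    # Run untrusted code in an isolated E2B sandbox, per the
    # e2b-code-interpreter Python SDK. Requires E2B_API_KEY in the environment.
    from e2b_code_interpreter import Sandbox

    sandbox = Sandbox()  # spins up an isolated microVM
    execution = sandbox.run_code("print(21 * 2)")
    print(execution.logs.stdout)  # stdout lines from the sandboxed run
    sandbox.kill()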


we offer this with E2B Desktop

Demo: https://surf.e2b.dev

SDK: https://github.com/e2b-dev/desktop
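roughly what driving the desktop sandbox looks like (sketch; method names follow the repo readme at time of writing and may drift, so treat this as an approximation):

    # Sketch of driving a GUI sandbox with the e2b_desktop SDK.
    # Method names are an approximation of the repo README; check the
    # docs for the current API.
    from e2b_desktop import Sandbox

    desktop = Sandbox()           # boots a sandboxed Linux desktop in a microVM
    desktop.screenshot()          # grab the current screen
    desktop.left_click()          # click at the current cursor position
    desktop.write("hello world")  # type text into the focused window
    desktop.kill()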


hi Peter!

how long does 221(g) administrative processing take to complete in your experience? anything one can do besides waiting (Russian citizen working in tech, almost 1yr without adjustments)?


This really depends on the reason for the 221(g) and the applicant's country of citizenship or birth. Unfortunately, for those from certain countries, such as Iran and Russia, it has not been uncommon for such applications to go into a black hole and take 1-2 years. For those not from such countries, the process is relatively quick, from a couple of weeks to a couple of months.

