Katakate is built on top of Kata Containers and sets up a stack combining Kubernetes (K3s), Kata, Firecracker, and the devmapper snapshotter for thin-pool provisioning. Combining these tools is highly non-trivial and can be a headache for many, especially for AI engineers who are often more comfortable with Python workflows. The stack gets deployed with an Ansible playbook, and it exposes a CLI, an API, and a Python SDK to make it super easy to use. A lot of defense-in-depth settings are also baked in, so you don't need to understand those systems at a low level to get a secure setup.
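To give a sense of what that looks like once the stack is up: Kata gets registered as a Kubernetes RuntimeClass, so any pod can opt into Firecracker microVM isolation just by naming that class. Here's a rough sketch using the official `kubernetes` Python client (the RuntimeClass name "kata-fc" is an assumption and depends on how the runtime is registered in your install; the Katakate CLI/SDK exists precisely so you don't have to write this by hand):

```python
# Rough sketch: scheduling a pod onto the Kata/Firecracker runtime via a
# RuntimeClass, using the official `kubernetes` Python client.
# NOTE: the RuntimeClass name "kata-fc" is an assumption; it depends on how
# the Kata runtime was registered with containerd/K3s in your setup.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="sandboxed-task"),
    spec=client.V1PodSpec(
        runtime_class_name="kata-fc",  # routes the pod into a Firecracker microVM
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="task",
                image="python:3.12-slim",
                command=["python", "-c", "print('hello from a microVM')"],
            )
        ],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```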
If you want to run your own remote servers (for your product/company), Railway or Render work great (Vercel is a bit more difficult, since Lambdas get very expensive if you run them over long periods of time). Metorial targets developers who build their own AI agents and want to connect them to integrations. Put plainly, we do a lot more than running MCP servers: we give you monitoring and observability, handle consumer-facing OAuth, and give you super nice SDKs to integrate MCP servers with your agent.
Regarding the second question, Metorial has three execution modes, depending on what the server supports:

1) Docker - the most basic one, which any MCP server should support. We did some heavy optimization to get these to start as fast as possible, and our hibernation system supports stopping and resuming them while restoring their state.

2) Remote MCP - we connect to remote MCP servers for you, while still giving you the same features and ease of integration you get with any Metorial server (I could go into more detail on how our remote servers are better than standard ones).

3) Servers on our own lambda-based runtime. Not every MCP server supports this execution mode, but it's what really sets us apart. The lambdas only run for short intervals, while the connection is managed by our gateway (rough sketch of the pattern below). We already have about 100 lambda-based servers and are working on getting more onto that execution model.
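To give a rough idea of the lambda pattern (heavily simplified and purely illustrative, not our actual code): the gateway owns the long-lived client session and persists whatever state the server needs, so the worker only has to run for the duration of a single request. Something in this spirit:

```python
# Illustrative sketch of the "gateway holds the session, worker runs briefly"
# pattern. All names are made up for the example; the session store stands in
# for a real KV store (Redis, DynamoDB, ...).
import json
from typing import Any

SESSION_STORE: dict[str, dict[str, Any]] = {}


def invoke_short_lived_worker(request: dict, state: dict) -> tuple[dict, dict]:
    """Stand-in for the lambda: handles one MCP request and returns (response, new_state)."""
    count = state.get("calls", 0) + 1
    response = {
        "jsonrpc": "2.0",
        "id": request.get("id"),
        "result": {"echo": request.get("method"), "call_number": count},
    }
    return response, {**state, "calls": count}


def gateway_handle(session_id: str, raw_request: str) -> str:
    """Long-lived gateway: owns the client connection and rehydrates state per call."""
    state = SESSION_STORE.get(session_id, {})
    response, new_state = invoke_short_lived_worker(json.loads(raw_request), state)
    SESSION_STORE[session_id] = new_state  # persist before the worker goes away
    return json.dumps(response)


print(gateway_handle("abc", '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'))
print(gateway_handle("abc", '{"jsonrpc": "2.0", "id": 2, "method": "tools/call"}'))
```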
There's a lot about our platform that I haven't included here, like our stateful MCP proxy, our security model, our scalable SOA, and how we turn OAuth into a single REST API call for our users.
Let me know if you have any additional questions; I'm always happy to talk about MCP and software architecture.
thanks for explaining, especially the runtimes part!
i am currently running Docker MCP Containers + MCP Gateway mixed with Remote MCPs in microVMs (aka. Sandboxes).
seems to be the most portable setup, so you don't have to worry about dealing with different executors like uvx, poetry, bun, npx, and the whole stdio/streamable http conversion.
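for reference, this is the kind of per-server exec wrangling i mean: with the official `mcp` python sdk, the client still has to know exactly how each stdio server gets launched (the package names below are just examples):

```python
# Example of per-server exec wrangling: each stdio MCP server needs its own
# launcher (npx, uvx, ...) spelled out. Package names are just examples.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVERS = {
    "filesystem": StdioServerParameters(
        command="npx", args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    ),
    "fetch": StdioServerParameters(command="uvx", args=["mcp-server-fetch"]),
}


async def list_tools(params: StdioServerParameters) -> list[str]:
    # Spawn the server over stdio, initialize the MCP session, list its tools.
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            return [tool.name for tool in result.tools]


async def main() -> None:
    for name, params in SERVERS.items():
        print(name, await list_tools(params))


asyncio.run(main())
```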
lambdas sound interesting, esp. if you have figured out a way to make stateful work stateless, but it comes with the downside that you have to maintain all the integrations yourself + the environment itself might have compatibility issues. i've seen someone also using cloudflare dynamic workers for a similar use case (disco.dev), but they're maintaining all the integrations by hand (or with Claude Code, rather). a more extreme version of this would be writing a custom integration specific to the user by following a very strict prompt.
anyways, i'll look into Metorial as i'm curious about how the portable runtimes work.
Thanks for sharing and adding us to your list. The point about the lambdas is fair, though we do support other execution modes to combat this. Please let me know if you have any feedback or encounter hiccups :)
There are Jupyter kernels for Swift that use the REPL mode of Swift's lldb. I used to do this, but obviously it's not a priority right now: https://github.com/liuliu/swift-jupyter
That's not really something we'd been considering, but yeah, I think we probably could. We're primarily using the interpreter to render SwiftUI views, but it supports running arbitrary Swift expressions or statements.
how long does 221(g) administrative processing take to complete in your experience? anything one can do besides waiting (Russian citizen working in tech, almost 1yr without adjustments)?
This really depends on the reason for the 221(g) and the applicant's country of citizenship or birth. Unfortunately, for those from certain countries, such as Iran and Russia, it has not been uncommon for such applications to go into a black hole and take 1-2 years. For those not from such countries, the process is relatively quick, from a couple of weeks to a couple of months.