I think I see what you mean about 9p not being that special, it doesn't seem much different than if Windows decided to export every system-level API as a DCOM object, that would also get you the same kind of "the whole universe is networked" kind of deal.
> if Windows decided to export every system-level API as a DCOM object, that would also get you the same kind of "the whole universe is networked" kind of deal.
The difference is that in Plan 9, there is no 'if', and there's no other option for accessing resources. All programs interface with the OS and with other programs via 9p, more or less; the notable exceptions are process-creation calls like rfork() and exec().
> but 9p the protocol appears not to have any concept of mmap:
Correct. Mmap is a kernel feature -- and mmap-style stuff is only really done for demand paging of binaries at the moment. You get a cache miss and a page fault? Backfill with a read. Backfilling IO on a page fault is really all mmap does, conceptually.
That seems like it would create difficulties in porting software there. Please correct me if I'm wrong, but the original Plan 9 also appears to have no support for shared memory or for poll/select.
>Backfilling IO on page fault is really all mmap does, conceptually.
For read-only resources, yes; for handling writes to the mmapped region, that model seems quite broken.
Right, I get that's what you meant, it doesn't seem to really change much versus NFS, or DCOM, or whatever. So it's unclear what benefit is being provided by 9p here.
Also, upon further research, I am not sure what you mean by this being the only option; Plan 9 seems to suggest channels for other types of IPC interfaces, which appear not to be the same as 9p and are not necessarily network-serializable. (Or are they?)
Channels are not IPC -- they're a libthread API that stays within a shared-memory thread group.
There are a few magic kernel devices that don't act like 9p, like '#s', which implements fd passing on a single node. And the VGA drivers expose a special memory segment on PCs to enable configuring VGA devices.
But the exceptions are very few and far between, and affect very few programs.
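For comparison, the Unix-side analogue of what '#s' provides is descriptor passing over a unix-domain socket with SCM_RIGHTS ancillary data. A minimal sketch, where send_fd/recv_fd are illustrative names rather than a standard API:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send fd across a unix-domain socket as SCM_RIGHTS ancillary data. */
static int send_fd(int sock, int fd)
{
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    char buf[CMSG_SPACE(sizeof(int))];
    memset(buf, 0, sizeof buf);
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = buf, .msg_controllen = sizeof buf,
    };
    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    c->cmsg_level = SOL_SOCKET;
    c->cmsg_type = SCM_RIGHTS;
    c->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(c), &fd, sizeof(int));
    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}

/* Receive a descriptor; returns the new fd, or -1 on error. */
static int recv_fd(int sock)
{
    char byte;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    char buf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = buf, .msg_controllen = sizeof buf,
    };
    if (recvmsg(sock, &msg, 0) < 0)
        return -1;
    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    if (c == NULL || c->cmsg_type != SCM_RIGHTS)
        return -1;
    int fd;
    memcpy(&fd, CMSG_DATA(c), sizeof(int));
    return fd;
}
```

With a socketpair(AF_UNIX, SOCK_STREAM, 0, sv) between two processes, one side can hand over, say, the read end of a pipe, and the other side reads through the received descriptor as if it had opened it itself.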
> So it's unclear what benefit is being provided by 9p here.
A uniform and simple API for interacting with out-of-process resources that can be implemented in a few hundred lines of code.
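To give a sense of how simple the wire format is: every 9p message is size[4] type[1] tag[2] followed by the body, all little-endian, and strings are a 2-byte count plus bytes. A sketch of encoding the initial Tversion handshake message, where pack_tversion is an illustrative helper rather than code from any real client:

```c
#include <stdint.h>
#include <string.h>

enum { Tversion = 100, NOTAG = 0xFFFF };

static void put2(unsigned char *p, uint16_t v) { p[0] = v; p[1] = v >> 8; }
static void put4(unsigned char *p, uint32_t v)
{
    p[0] = v; p[1] = v >> 8; p[2] = v >> 16; p[3] = v >> 24;
}

/* Encode a Tversion message into buf; returns total length.
   Layout: size[4] type[1] tag[2] msize[4] version[s]. */
static size_t pack_tversion(unsigned char *buf, uint32_t msize,
                            const char *version)
{
    size_t vlen = strlen(version);
    uint32_t total = 4 + 1 + 2 + 4 + 2 + (uint32_t)vlen;
    put4(buf, total);            /* size includes the size field itself */
    buf[4] = Tversion;
    put2(buf + 5, NOTAG);        /* version negotiation uses NOTAG */
    put4(buf + 7, msize);        /* largest message we can handle */
    put2(buf + 11, (uint16_t)vlen);
    memcpy(buf + 13, version, vlen);
    return total;
}
```

pack_tversion(buf, 8192, "9P2000") yields a 19-byte message; the rest of a client or server codec is more of the same, which is why implementations stay small.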
How is that conceptually different from IPC? The graphics system appears to somehow pass mouse and keyboard events to the client programs over a channel. At least that part seems similar to an Unix X11 setup where this would be done over a socket.
I guess I just don't see the conceptual difference here versus something like doing basic HTTP over a TCP socket; it seems like the same kind of multiplexing. Either way, you still have to deal with the same issues: you can't pass pointers directly, you need to implement byte swapping, and you need another serialization library if you want the format to be JSON/XML or if you want a schema, etc. So in cases where that stuff isn't important, channels would come in handy, but of course that is now getting closer to local Unix IPC. Am I getting this right?
> How is that conceptually different from IPC? The graphics system appears to somehow pass mouse and keyboard events to the client programs over a channel.
A thread reads them from a file descriptor and writes them to a channel. You can look at the code which gets linked into the binary:
And yes, once you have an open FD, read() and write() act similar to how they would elsewhere. The difference is that there are no OTHER cases. All the code works that way, not just draw events.
And getting the FD is also done via 9p, which means that it naturally respects namespaces and can be interposed. For example, sshnet just mounts itself over /net, and replaces all network calls transparently for all programs in its namespace. Because there's no special case API for opening sockets: it's all 9p.
Ok, I see, that helps, thank you. That seems mostly similar to evdev on Linux after all, except it requires you to use coroutines instead of offering a poll/select-style interface.
To me, the problem with saying "no special cases" is that it seems to make the kernel side quite limited and to close off optimization opportunities. For example, if you look at the file node vtables on Linux [0] and FreeBSD [1], there are quite a lot of other functions there that don't fit in 9p. So you lose out on all that stuff if you try to fit everything into a 9p server or a FUSE filesystem or something else of that nature.
Yes, that's the meaning of no special cases*: it means you don't add special cases. But this is why Plan 9 has 40-odd syscalls instead of 500, and why tools can be interposed, redirected, and distributed between machines.
I don't have to use the mouse device from the server I logged into remotely, I can grab it from the machine I'm sitting in front of and inject it into the program. VNC gets replaced with mount.
I don't have to use the network stack from my machine, I can grab it from the network gateway. NAT gets replaced with mount.
I don't have to use the debug APIs from my machine, I can grab them from the machine where the process is crashing. GDB remote stubs get replaced with mount.
You see the theme here. Resources don't have to be in front of you, and special case protocols get replaced with mount; 9p lets you interpose and redirect. Without needing your programs to know about the replacement, because there's a uniform interface.
You could theoretically do syscall forwarding for many parts of unix, but the interface is so fat that it's actually simpler to do it on a case by case basis. This sucks.
* In-kernel devices can add some hacks and special magic, so long as they still mostly look as if they're speaking 9p. This is frowned upon, since it makes the system more complex -- but it's useful in some cases, like the '#s' device for fd passing. This is one of the abstraction breaks that I mentioned earlier.
That's what I mean, though: I see the theme, but it seems about the same as trying to fit everything into an HTTP REST API; it all falls apart when something comes along that breaks the abstraction. For example, if you have something that wants to pass a structure of pointers into the kernel, you can't reasonably do that with 9p, so now you've got a special case. The debug APIs can still only return a direct memory-mapped pointer to the process memory as a special case; the normal case is copying memory regions over the socket, no matter how large they are. If you want to add compression to your VNC thing, or add more complex routing to your network setup, you have to start adding special daemons and proxies and translation layers onto another socket, which is not really different from what you would be doing on a more traditional Unix. Or is there another way Plan 9 handles these?
Because it replaces ptrace, and seems to work perfectly fine when I mount it over 9p. It's used by acid, which needs no additional utilities: http://man.cat-v.org/plan_9/1/acid
> If you want to add compression to your VNC thing
Images may be sent compressed. More -- or at least better -- formats would be good, but this is done.
It's a bit complex because it needs to do more than just forward mouse, keyboard and drawing -- signals need to be interposed and forwarded, and there are a few other subtle things that need to happen in the namespace. And because it contains both the client and server code. Even so, it's still small compared to VNC.
And yes, shithub is hosted on plan 9.
> or add some more complex routing to your network setup, you have to start adding special daemons and proxies and translation layers into another socket
I'm guessing these applications do not have any kind of animations or smooth scrolling? A simple test: make your web browser or your image viewer fullscreen at 4K and see if there is lag in the scrolling/panning/zooming.
I don't see what you mean. Qubes is great, but it is not the same thing as Docker, flatpak, or snap. Are you saying Qubes should somehow be changed so that it works similar to Docker? And if so, why wouldn't you just use Docker?
Then you need to understand how they work differently and their contrasting limitations. Qubes (based on Xen) is great for desktop users to segregate applications. Xen standalone is similarly great for general server containerization.
Docker doesn't work anything remotely like a hypervisor. It doesn't provide the much greater assurance of security, scalability, isolation, resource metering, accounting, or flexibility that a hypervisor does. Docker is a security disaster and it only runs Linux. Xen/Qubes runs Windows, BSDs, or any other OS. Docker seems "easy" but with many subtle costs that come later at scale. You can't live migrate a Docker container from one host to another, where you usually can with Xen using shared storage. There are many other gotchas in the lifecycle of Docker containers that are eliminated or mitigated by using hypervisor guests instead.
flatpak and snap are basically filesystem overlays. They're gross, poorly-managed, incompatible duplications of package management.
I am not sure I see what the significant difference is, I've heard of security escapes happening in both Docker and in various hypervisors. Either way there is a risk of some privilege escalation bug that allows access to the full RAM. I think if you want isolation, both of them lose out to having a separate firewalled off machine. Also I think the companies running heavy Linux workloads on Docker are probably not interested in the ability of Qubes to run Windows or Mac, just my read on the situation from talking to some of them.
I don't know about snap, but from what I have seen of flatpak, it allows different versions of the same package to be installed, which not many package managers currently support. (Nix and Guix are notable exceptions, and those should be able to re-use some of the sandboxing bits from flatpak if they need to.) Of course, that is one of the main benefits of building this on top of filesystem overlays, and why it requires a different approach from a traditional package manager -- i.e., it's not just a duplication.
Edit: Live migration actually does work for containers, take a look at CRIU. (I don't know the current status of this being integrated in Docker) I never even saw this as being opposing technology anyway, for example if you need to you could migrate a container in or out of a VM.
I know all about application filesystem overlays; I was part of a startup that shipped a multiplatform one. They have many, many problems because they bake in a fixed set of dependencies as a monolith. It's similar to the difference between image-based deployment and configuration-management-based deployment: granularity of lifecycle, including updates and management.
The nuances become readily apparent running anything real at scale, especially if you're only given a herring (Docker) to chop down the mightiest tree in the forest when you need a harvester (hypervisor). Docker, flatpak, and snap are unnecessary other than as shiny, fragile toys that attempt to (poorly) replicate the functionality of other tools. Live migration for containers is like adding high availability to a solar calculator: completely pointless and inappropriate engineering. Just don't get attached to these limited "advances" / fads / religions, because popularity and newness aren't the same as demonstrable progress.
It's better to use something like Nix or habitat for multiversion app dependencies or just privately vendor them. There is no need for snap or flatpak if a containerizable OS can choose the correct dependency constraints for an app. The problem of concurrent package versions in existing package management systems can be solved with naming and numbering standards rather than reinventing everything.
I am not sure why you would feel threatened even if there was overlap. Godot is MIT licensed, so if your customers started asking for features from Godot or for compatibility with Godot, you could just copy the code straight from them without any hassle.
As someone who's worked in AAA game engines this is like saying because C# is now open source it will be easy to just copy code into Swift without any hassle.
The part that makes game engines both interesting and difficult is they made a series of discrete trade-offs to support the use cases of the types of games they ship. This is true from tooling workflows to rendering stacks to core engine layout like if they do heavy arena allocation or more open world dynamic entities.
We extended the engine we licensed with some fairly reasonable features, and even doing the uplevel was a brutal, 4-6 month process to reconcile those changes.
I am not saying it would be easy, it would definitely be work that someone would have to do. I'm more saying that the Godot authors would not try to threaten you or consider you a threat, it's more likely they would want to help you. Of course as a business you would not do it if your customers weren't going to pay the cost to make it worth it.
I don't know enough about the implementation of C# and Swift to say for sure, but it does seem like open sourcing them would make it easier to do things like port a standard library component or algorithm from one to the other, or perhaps build a Swift implementation for the CLR.
Let me try to be a bit more to the point, game engine features are not just "drop-in", by adding a feature to one part of the engine there's a high probability that you make another part of the engine worse.
There's a reason that they say performance is the most leaky abstraction.
Then that sounds like you would want it to be an optional plugin, or a compile time flag that could be enabled for customers who want it? Why not do it, if that's what the customers asked for?
I get what you are saying and it definitely applies to the core architecture of the product, but if you have a large number of customers each with their own needs then I would be surprised if there were zero parts of the product that were modular or interchangeable.
> then I would be surprised if there were zero parts of the product that were modular or interchangeable.
You'd be surprised. Game engines are extremely monolithic.
There are some modular components and techniques (eg pathfinding, rendering, physics engine), but most of the effort that goes into an engine like Godot is integrating these components to their specific architecture. While the component is transferable (eg you can always use the Bullet engine in your own project), the integration work isn't.
Sadly, software is not (usually) that composable. If Godot is a better engine all around, it would mean that competitors could offer a better service by just running Godot in the cloud and charging per access, which they may be able to do at a lower price since they had a very low initial investment.
Well, of course it would take work to compose them together, but then the pay off is that you might be able to say customers are getting the "best of both worlds."
If their customers are also asking them for hosted Godot, maybe they should also offer that as another product offering, at a competitive price, and then use that as a sales funnel into their other products? That is usually the way it goes with these open source bits.
They may be able to respond positively to a threat, but it can still be a threat. It may pay off to try to compose godot's code into theirs, but it may very well be cheaper to just rewrite things within their own framework.
Anyway, I'm just saying that free software can be a competitor and that you can lose to it even if you can, technically, embed their source code. Even if software was perfectly composable that would be true, but it's even more possible given that you can't always just plug in any new features godot releases. They may even be implemented in different languages, for all we know.
I don't see how it is a threat. Assuming Godot obsoleted all their code entirely, that would still be a boon -- that's now code they don't have to spend time maintaining anymore, and they can just reuse that and focus on their core competency. (Maybe it's hosting, I don't know enough about this business)
Different languages actually isn't as bad an issue with this type of thing, as the idea with running it in the browser is that it all compiles down to Javascript or WASM.
You are not taking the whole market into consideration. Maybe they are good at writing the game engine and then hosting it, but others may be better at just hosting. So they could be outcompeted by people who don't want to pay the cost/risk of building an engine. Others may be better at hosting Godot than they ever will be, although those people would not be there if there were no Godot, or if Godot were not free. Free software (copyleft licenses especially) can be a threat to commercial software in two ways: a. users may just jump to the free alternative and leave yours; b. it levels the field so new competitors can come in without paying the initial investment you made.
Just to be clear, I'm not saying that they are incorrect in assessing that godot is not a threat. They seem to consider they have other features beyond godot's scope which is what differentiate them in the market. What I am saying is that free software can certainly be a threat to a business. In fact, it can be even a larger threat than a single competitor, because it can turn your product into a commodity.
I still don't see what you mean. It sounds like you are saying the real threat would be if they had no other features that could let them stand out in the market, at which point a competitor would be able to beat them by lowering the price, possibly to zero. That can be done by any competitor and has very little to do with the license -- my point is that the open source license on that "competing product" actually helps them, by allowing them to make use of the same thing without having to pay that initial investment again. And the first initial investment you made isn't lost as long as you keep a path to retaining those customers.
To put it another way, if the actual problem to the business is that they are falling behind on feature velocity and don't have the head count to keep up, re-using some features from open source code could actually help there.
Yes -- so if the company falls behind, maybe using some open source code could help you keep up. At least that's what I've felt looking at things on github/gitlab/etc has helped with, if done the right way :)
Let me try to draw a picture. Say you have a web based game engine. It is the only game engine as a service in the market which also runs in a browser. Then, a popular open source game engine becomes web based too. Before, you had no competitors, now you have to compete against a product that's free.
You may lose some customers that only cared about it being web based and who are willing to learn another engine. Not only that, a single guy could decide to host the game engine in the cloud and charge really cheap for it. Now, if you want to compete on price, you probably have to fire all your game engine devs and significantly downsize. Either that, or have vastly superior features to justify your higher price in the eyes of the customers. All of that threatens your company's existence.
I'm not saying it's impossible to compete, but the more overlap between your product and a free software product, the bigger the threat will be.
From that perspective, moving to OpenBSD seems mostly pointless as currently the best practice there if file permissions are too strict seems to be "comment out some unveil lines and recompile the program." Not really an improvement IMO.
From that angle if the permission dialogs bothered you then you could just recompile flatpak to unconditionally approve all dialogs. (Maybe there is even a setting for this already?) Of course as a sibling comment has said, this would be pretty dangerous, almost equivalent to using windows without UAC, or sudo with NOPASSWD.
>If you're expecting a blocking system call, and actually get a brand new background thread that's polling, it's quite reasonable to be frustrated.
It really isn't, unless the documentation outright says that it's single-threaded and not thread-safe. For a lot of simpler use cases where you just want to ship a thread-safe API (e.g. the application does not have its own thread pool), it makes sense to use some kind of automatic thread pooling. The caller does not have to know or care how the internal state machine is implemented.
If you have implemented your own thread pool, it seems you should know enough to dig down to the lower layers, where you can get to that blocking syscall, or at least to the point where you can strip off the O_NONBLOCK flags yourself.
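Stripping the flag is a one-liner with fcntl; a sketch, where make_blocking is an illustrative name:

```c
#include <fcntl.h>

/* Clear O_NONBLOCK on fd so subsequent reads/writes block.
   Returns 0 on success, -1 on error. */
static int make_blocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
}
```

After this, read() on the descriptor waits for data instead of returning EAGAIN.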
In those cases it would be the company that is volunteering, the contribution is still voluntary, and the company is paying someone to volunteer on their behalf.
The company is paying an employee to work on a product which their business probably uses, in return for publicity, a bigger voice in the direction or prioritization of bugs and features, and assurance of stability of the product they depend on.
That doesn't seem to be related to async? I don't know the details of Rust's async implementation, but that sounds like a problem with your application's setup -- you should be able to have a single-threaded async executor that uses an event loop, or in simple cases, just calls poll/select directly?
To put it another way, it's unfortunate that this particular synchronous API is implemented using threads, but nothing about async implies one way or another that a synchronous method will be implemented with threads -- I've seen plenty of (questionable) C functions that do similar things, like calling pthread_create and then pthread_join immediately afterwards to fake a blocking call.
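That anti-pattern, a call that looks blocking to the caller but secretly spawns and immediately joins a thread, might look like this in C (fake_blocking_read is a made-up illustration, not from any real library):

```c
#include <pthread.h>
#include <unistd.h>

struct job { int fd; char *buf; size_t n; ssize_t result; };

static void *worker(void *arg)
{
    struct job *j = arg;
    j->result = read(j->fd, j->buf, j->n);  /* the actual blocking work */
    return NULL;
}

/* Looks like a plain blocking read to the caller, but spawns a thread,
   joins it immediately, and returns -- a pointless extra thread. */
static ssize_t fake_blocking_read(int fd, char *buf, size_t n)
{
    struct job j = { fd, buf, n, -1 };
    pthread_t t;
    if (pthread_create(&t, NULL, worker, &j) != 0)
        return -1;
    pthread_join(t, NULL);
    return j.result;
}
```

From the caller's perspective this is indistinguishable from read(); the thread is pure overhead, which is the complaint.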
Er? No, the point is that threads are what you want for cpu-bound tasks. Async does not deal well with long running cpu intensive jobs that hog the cpu without yield points.
But regardless, the GP post was not talking about matrix math; it seems it was talking about sending an HTTP request and waiting for a response, which is something that actually is I/O-bound on the TCP socket.
The systems that use it as a native threading model are obsolete, but there's also this sentence there:
>Cooperative multitasking is used with await in languages with a single-threaded event-loop in their runtime, like JavaScript or Python.
There's no reason Rust can't have an executor that does the same, which you only use within the event loop on your one or two HTTP worker threads. If you're waiting in a thread for an HTTP request to return, that's never going to be CPU-bound. I'm still failing to see what the problem here is, besides a complaint about some Rust crate only supporting a multi-threaded executor, which again is a different problem from whether it's done with async futures or not. One could just as easily write some C code that forces the use of threads.
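At bottom, a single-threaded event-loop executor of the kind described above is a loop around poll/select; the waiting primitive is one blocking poll() call with no extra threads. A minimal C sketch (read_when_ready is an illustrative name):

```c
#include <poll.h>
#include <unistd.h>

/* Wait, on the current thread only, until fd is readable or the timeout
   expires, then read up to n bytes. One blocking poll() does all the
   waiting; no background threads are involved. */
static ssize_t read_when_ready(int fd, char *buf, size_t n, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    int r = poll(&pfd, 1, timeout_ms);
    if (r <= 0)
        return -1;              /* timeout or error */
    return read(fd, buf, n);
}
```

A real event loop polls many descriptors at once and dispatches to whichever are ready, but the waiting mechanism is exactly this.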
Those languages are also well known for not handling multiple cpu bound threads well. (And for that matter, it's simply wrong about Python, which uses native threads, but locks very heavily: you need to write native code to use more than one core effectively from one process.)
The goal is to NOT do what you're suggesting. It's a holdover from when native threads were much more expensive than they are today, and multiple cores on a single cpu were rare.
To the contrary, if Amazon is providing a good hiring funnel for the developers/maintainers, regularly contributing patches back upstream, providing funding to the project's non-profit, and generally respecting the license, then what's the problem? I'm no fan of Amazon but how can I complain about them having a right to profit in cases where they actually are being good open source citizens? Are they really any different from any other cloud provider in that respect?
Disclosure: I work for AWS, but I am not speaking for my employer. This post is based on my personal workplace experience.
The policy exists to enable collaboration and contribution, not to restrict it. These types of policies are common at companies like Amazon. Google has posted theirs publicly [1], and Amazon policies are similar. I have used the policy to contribute to more than one "upstream" open source software package, for example the Xen hypervisor [2].
Though I wish I had more recent commits, this should demonstrate that even in 2012 patches were flowing to Xen. More work on Xen by others can be found by searching for "amazon.co" in the commits [3].
I am not sure what the problem there is; that is a patch that carries a non-open-source license, which is allowed by the original ISC license of pgbouncer.
They contributed a tiny fraction of development resources and they reap a disproportionate amount of the profit.
Letting the status quo continue would just ensure that Amazon gets even richer off the hard work done by Elastic while development dries up. We all lose while they suck profit out of this product.
It's not like they require elastic to survive. They're too big to fail. By being a dominant cloud provider they have a massive inbuilt advantage
Yes they are different to other cloud providers. They're big enough to throw their weight around and they do.
How does that work? I don't know the details of any implementation, but 9p the protocol appears not to have any concept of mmap: https://9fans.github.io/plan9port/man/man9/intro.html