
How is that conceptually different from IPC? The graphics system appears to somehow pass mouse and keyboard events to the client programs over a channel. At least that part seems similar to a Unix X11 setup, where this would be done over a socket.

I guess I just don't see what the conceptual difference is here versus something like doing basic HTTP over a TCP socket; it seems like the same kind of multiplexing. Either way, you still have to deal with the same issues: you can't pass pointers directly, you need to implement byte swapping, and you need another serialization library if you want the format to be JSON/XML or if you want a schema, etc. So in cases where that stuff isn't important, channels would come in handy, but of course that is now getting closer to local Unix IPC. Am I getting this right?



> How is that conceptually different from IPC? The graphics system appears to somehow pass mouse and keyboard events to the client programs over a channel.

A thread reads them from a file descriptor and writes them to a channel. You can look at the code which gets linked into the binary:

    /sys/src/libdraw/mouse.c:61
Essentially, the loop in _ioproc is:

    while(read(fd, event)){      /* block reading the mouse device */
        parse(event);            /* decode the textual event message */
        send(mousechan, event);  /* hand it to the client over a channel */
    }
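Unpacked a little, the parse step is just decoding a fixed-width text message. A sketch, assuming the format documented in mouse(3) -- a type byte ('m'), then four 12-byte decimal fields -- plus libdraw's Mouse struct and the thread library's send():

    char buf[1+4*12];
    Mouse m;

    if(read(fd, buf, sizeof buf) > 0 && buf[0] == 'm'){
        m.xy.x = atoi(buf+1+0*12);     /* position */
        m.xy.y = atoi(buf+1+1*12);
        m.buttons = atoi(buf+1+2*12);  /* button bitmask */
        m.msec = atoi(buf+1+3*12);     /* timestamp */
        send(mousechan, &m);           /* wake whoever is recv()ing */
    }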
And yes, once you have an open FD, read() and write() act similarly to how they would elsewhere. The difference is that there are no OTHER cases. All the code works that way, not just draw events.

And getting the FD is also done via 9p, which means that it naturally respects namespaces and can be interposed. For example, sshnet just mounts itself over /net and transparently replaces all network calls for every program in its namespace, because there's no special-case API for opening sockets: it's all 9p.
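
To make that concrete, here's roughly what "opening a socket" looks like when it's all files -- a sketch of the clone/ctl/data dance described in ip(3), with error paths trimmed and a made-up function name:

    #include <u.h>
    #include <libc.h>

    /* Open a TCP connection purely through file operations on /net.
     * If sshnet is mounted over /net, these same opens and writes go
     * to sshnet's 9p server instead of the kernel IP stack -- the
     * program can't tell the difference. */
    int
    nettalk(char *addr)  /* e.g. "192.168.0.1!80" */
    {
        char num[16], data[64];
        int cfd, n;

        if((cfd = open("/net/tcp/clone", ORDWR)) < 0)
            return -1;
        if((n = read(cfd, num, sizeof num - 1)) <= 0)  /* conn number */
            return -1;
        num[n] = 0;
        if(fprint(cfd, "connect %s", addr) < 0)  /* dial via a ctl write */
            return -1;
        snprint(data, sizeof data, "/net/tcp/%d/data", atoi(num));
        return open(data, ORDWR);  /* the byte stream itself */
    }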


Ok, I see, that helps, thank you. That seems mostly similar to evdev on Linux after all, except it requires you to use coroutines instead of offering a poll/select-type interface.

To me, the problem with "no special cases" is that it seems to leave the kernel side quite limited and to close off optimization opportunities. For example, if you look at the file node vtables on Linux [0] and FreeBSD [1], there are quite a lot of other functions there that don't fit in 9p. So you lose out on all of that if you try to fit everything into a 9p server or a FUSE filesystem or something else of that nature.

[0]: https://elixir.bootlin.com/linux/v5.11.8/source/include/linu...

[1]: https://github.com/freebsd/freebsd-src/blob/master/sys/kern/...


Yes, that's the meaning of "no special cases"*: you don't add special cases. But this is why plan 9 has 40-odd syscalls instead of 500, and why tools can be interposed, redirected, and distributed between machines. I don't have to use the mouse device from the server I logged into remotely; I can grab it from the machine I'm sitting in front of and inject it into the program. VNC gets replaced with mount.

I don't have to use the network stack from my machine, I can grab it from the network gateway. NAT gets replaced with mount.

I don't have to use the debug APIs from my machine, I can grab them from the machine where the process is crashing. GDB remote stubs get replaced with mount.

You see the theme here. Resources don't have to be in front of you, and special-case protocols get replaced with mount; 9p lets you interpose and redirect, without your programs needing to know about the replacement, because there's a uniform interface.
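
The pattern behind all of these is the same few lines. A hedged sketch (tcp!crashbox is a made-up address, 564 is the standard 9p port, and authentication is skipped for brevity):

    #include <u.h>
    #include <libc.h>

    /* Graft a remote machine's /proc into the local namespace,
     * so local debuggers operate on its processes. */
    void
    grabproc(void)
    {
        int fd;

        if((fd = dial("tcp!crashbox!564", nil, nil, nil)) < 0)
            sysfatal("dial: %r");
        if(mount(fd, -1, "/proc", MREPL, "") < 0)
            sysfatal("mount: %r");
        /* from here on, ps, acid, etc. in this namespace see
         * crashbox's processes */
    }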

You could theoretically do syscall forwarding for many parts of Unix, but the interface is so fat that it's actually simpler to do it on a case-by-case basis. This sucks.

* In-kernel devices can add some hacks and special magic, so long as they still mostly look as if they're speaking 9p. This is frowned upon, since it makes the system more complex -- but it's useful in some cases, like the '#s' device for fd passing. This is one of the abstraction breaks that I mentioned earlier.


That's what I mean, though: I see the theme, but it seems to me about the same as trying to fit everything into an HTTP REST API; it all falls apart when something comes along that breaks the abstraction. For example, if you have something that wants to pass a structure of pointers into the kernel, you can't reasonably do that with 9p, so now you've got a special case.

The debug APIs can still only return a direct memory mapped pointer to the process memory as a special case; the normal case is doing copies of memory regions over the socket, no matter how large they are.

If you want to add compression to your VNC thing, or add some more complex routing to your network setup, you have to start adding special daemons and proxies and translation layers into another socket, which is not really different from what you would be doing on a more traditional Unix. Or is there another way plan9 handles these?


These things have already been done with 9p.

> The debug APIs can still only return a direct memory mapped pointer to the process memory as a special case

Can you point to the special case here?

http://man.cat-v.org/plan_9/3/proc

Because it replaces ptrace, and seems to work perfectly fine when I mount it over 9p. It's used by acid, which needs no additional utilities: http://man.cat-v.org/plan_9/1/acid
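
As a rough sketch of what that looks like in practice (pid and addr are placeholders; the same code works unchanged whether /proc is local or mounted from another machine):

    #include <u.h>
    #include <libc.h>

    /* Read one word of a target process's memory via proc(3).
     * No ptrace, no special case: just open and pread. */
    uvlong
    peek(int pid, uvlong addr)
    {
        char path[64];
        uvlong word;
        int fd;

        snprint(path, sizeof path, "/proc/%d/mem", pid);
        if((fd = open(path, OREAD)) < 0)
            sysfatal("open: %r");
        if(pread(fd, &word, sizeof word, addr) != sizeof word)
            sysfatal("pread: %r");
        close(fd);
        return word;
    }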

> If you want to add compression to your VNC thing

Images may be sent compressed. More -- or at least better -- formats would be good, but this is done.

http://man.cat-v.org/plan_9/3/draw

For a full implementation of remote login using these interfaces, here's the code:

http://shithub.us/ori/plan9front/fd1db35c4d429096b9aff1763f2...

It's a bit complex because it needs to do more than just forward mouse, keyboard, and drawing -- signals need to be interposed and forwarded, there are a few other subtle things that need to happen in the namespace, and it contains both the client and server code. Even so, it's still small compared to VNC.

And yes, shithub is hosted on plan 9.

> or add some more complex routing to your network setup, you have to start adding special daemons and proxies and translation layers into another socket

Here are the network APIs.

http://man.cat-v.org/plan_9/3/ip
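
For a taste: route manipulation there is just text written to a control file. A sketch, with invented addresses, assuming the iproute interface described in that page:

    #include <u.h>
    #include <libc.h>

    /* Add a static route by writing a textual command to
     * /net/iproute, per ip(3). Addresses are made up. */
    void
    addroute(void)
    {
        int fd;

        if((fd = open("/net/iproute", OWRITE)) < 0)
            sysfatal("open: %r");
        if(fprint(fd, "add 10.0.0.0 255.0.0.0 192.168.1.1") < 0)
            sysfatal("write: %r");
        close(fd);
    }

And because it's a file, a gateway can export it and anything can interpose it, same as everything else.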

What kind of complex routing are you talking about, and why would it be impossible to implement using those interfaces?



