Hacker News

I was looking a bit at the code for the shared memory implementation in https://github.com/3tilley/rust-experiments/tree/master/ipc and the dependency https://github.com/elast0ny/raw_sync-rs.

My last systems programming class was a few years ago and I'm a bit rusty, so I have some questions:

1. Looking at the code in https://github.com/elast0ny/raw_sync-rs/blob/master/src/even... it looks like we are using a userspace spinlock. Aren't these really bad because they mess with the process scheduler and might unnecessarily trigger the scaling governor to increase the CPU frequency? I think at least on Linux one could use a semaphore to inform the consumer that new data has been produced.

2. What kind of memory guarantees do we have on modern computer architectures such as x86-64 and ARM? If the producer does two writes (I imagine first the data, then the release of the lock), is it guaranteed that when the consumer reads the second value, the first value has also been synchronized?



I'm not sure I fully understand what you mean. Do you assume we implemented the same approach to shared memory communication as described in the blog post?

If that’s the case, I want to reassure you that we don’t use locks. Quite the contrary: we use a lock-free[1] algorithm to implement the queues. We cannot use locks for the reason you mentioned, and also because of cases where an application dies while holding the lock. That would result in a deadlock and cannot be tolerated in a safety-critical environment. Btw, there are already cars on the road using a predecessor of iceoryx to distribute camera data within an ECU.

For hard real-time systems we have a wait-free queue, which gives even stronger guarantees. Lock-free algorithms often have a CAS (compare-and-swap) loop, which in theory can lead to starvation, but in practice that is unlikely as long as your system does not run at 100% CPU utilization all the time. As a young company, we cannot open-source everything immediately, so the wait-free queue will be part of a commercial support package, together with more sophisticated tooling, as teased in the blog post.

Regarding memory guarantees: these are essentially the same guarantees you have when sharing an Arc&lt;T&gt; via a Rust channel. After publishing, the producer releases ownership to the subscribers, who have read-only access for as long as they hold the sample. When the sample has been dropped by all subscribers, it is released back to the shared-memory allocator.

Btw, we also have an event-signalling mechanism so you don't have to poll the queue but can instead wait until the producer signals that new data is available. This requires a context switch, though, and it is up to the user to decide whether that trade-off is worthwhile.

[1]: https://www.baeldung.com/lock-free-programming



