
This is also why function coloring is not a problem, and is in fact desirable a lot of the time.


The problem with function coloring is that it makes libraries difficult to implement in a way that's compatible with both sync and async code.

In Python, I needed to write both sync and async API clients for an HTTP service where each logical operation was composed of several sequential HTTP requests. That meant implementing the core business logic as a generator that yields requests and accepts responses before ultimately returning the final result, and then writing sync and async drivers that each run the generator in a loop: pull a request off, transact it with their HTTP implementation, and feed the response back to the generator.
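
A minimal sketch of that shape (using httpx for concreteness; the names and endpoints here are made up, not the actual client):

    import httpx  # assumed HTTP library offering both sync and async clients

    # Core business logic: a generator that yields requests and receives
    # responses. It performs no IO itself.
    def login_then_fetch(base_url, token):
        auth = yield httpx.Request("POST", base_url + "/login",
                                   json={"token": token})
        session = auth.json()["session"]
        data = yield httpx.Request("GET", base_url + "/data",
                                   headers={"X-Session": session})
        return data.json()

    # Sync driver: run the generator in a loop with a blocking client.
    def run_sync(operation):
        with httpx.Client() as client:
            response = None
            while True:
                try:
                    request = operation.send(response)
                except StopIteration as done:
                    return done.value
                response = client.send(request)

    # Async driver: the same loop, awaiting an async client instead.
    async def run_async(operation):
        async with httpx.AsyncClient() as client:
            response = None
            while True:
                try:
                    request = operation.send(response)
                except StopIteration as done:
                    return done.value
                response = await client.send(request)

    # Usage: run_sync(login_then_fetch("https://api.example.com", token))
    #        await run_async(login_then_fetch("https://api.example.com", token))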

This sans-IO approach, where the library separates business logic from IO and then either provides a simple driver loop or asks the caller to implement their own, performing IO in their chosen style and feeding the results back into the business-logic state machine, has started to appear as a solution to function coloring in Rust as well, but it's a somewhat obtuse way to support multiple IO concurrency strategies.

On the other hand, I do find it an extremely useful pattern for testability: the business logic becomes very fuzz-friendly, the side-effect code is isolated, and the core IO loop is simple enough that there's little room in it for bugs. So despite being somewhat of a pain to write, I still find it desirable at times even when I only need to support one of the two function colors.


My opinion is that if your library or function is doing IO, it should be async - there is no reason to support "sync I/O".

Also, this "sans IO" trend is interesting, but the code boils down to a less ergonomic, more verbose, and less efficient version of async (in Rust). It's async/await with more steps, and I would argue those steps are not great.


> there is no reason to support "sync I/O"

I disagree strongly.

From a performance perspective, asynchronous IO makes a lot of sense when you're dealing concurrently with a large number of tasks, each of which spends most of its time waiting for IO operations to complete. In that case, running those tasks on a single-threaded event loop is far more efficient than launching thousands of individual threads.

However, if your application falls into any other category, you're actually paying a performance penalty, since you carry the overhead of an event loop any time you just want to perform some IO.

Also, from a correctness perspective, non-concurrent code is simply a lot less complex and a lot harder to get wrong than concurrent code. So applications that don't need async end up paying a maintainability penalty as well, and in some cases a memory-safety / thread-safety one.


The beautiful thing about the “async” abstraction is that it doesn’t actually tie you to an event loop at all. Nothing about it implies that somebody is calling `epoll_wait` or similar anywhere in the stack.

It’s just a compiler feature that turns functions into state machines. It’s totally valid to have an async runtime that moves a task to a thread and blocks whenever it does I/O.
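
For instance (sketching in Python to match the thread, with made-up names), you can drive an async function to completion with plain blocking IO and no event loop anywhere:

    import urllib.request

    class Fetch:
        # An awaitable that suspends the coroutine and hands a URL to the driver.
        def __init__(self, url):
            self.url = url
        def __await__(self):
            return (yield self)  # resumed with whatever the driver sends back

    async def page_size(url):
        body = await Fetch(url)
        return len(body)

    def run_blocking(coro):
        # A tiny "runtime": step the state machine, doing blocking IO per request.
        reply = None
        while True:
            try:
                request = coro.send(reply)
            except StopIteration as done:
                return done.value
            with urllib.request.urlopen(request.url) as resp:
                reply = resp.read()

    print(run_blocking(page_size("https://example.com")))  # length of the body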

I do agree that async without memory safety and thread safety is a nightmare (just like all state machines are under those circumstances). Thankfully, we have languages now that all but completely solve those issues.


You surely must be referring to Rust, the only multithreaded language with async-await in which data races aren't possible.

Rust is lovely and all, but it's a bad example for the performance side of the argument: in practice, libraries usually have to commit to a particular async runtime, so library users end up having to launch that runtime (usually Tokio) to execute the library's Futures.


Sure, but that's an ecosystem limitation (there's no widespread common runtime interface that runtimes such as Tokio implement), not a fundamental limitation of async.

Thread safety is also a lot easier to achieve in languages like C#, and then of course you have single-threaded environments like JS and Python.


Exactly, there is nothing wrong with function coloring. It's a design choice.

Colored functions are easier to reason about, because potential asynchronicity is loudly marked.

Colorless functions are more flexible because changing a function to be async doesn't virally break its interface and the interface of all its callers.

Zig has colored functions, and that's just fine. The problem is the (unintentional) gaslighting where we are told that Zig is colorless when the functions clearly have colors.


As mentioned, the problem with coloring is not that you can see the color; it's that you can't abstract over the colors.

Effectful languages basically add user-definable "colors", but they let you write, for example, a `map` function that itself takes on the color of its parameter (becoming async if an async function is passed).
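
(Python can't express that polymorphism; the closest workaround, with made-up names, is a `map` that accepts either kind of callable but is itself permanently async:)

    import asyncio, inspect

    async def map_any(fn, items):
        # Await only when the callable actually returns an awaitable.
        results = []
        for item in items:
            value = fn(item)
            if inspect.isawaitable(value):
                value = await value
            results.append(value)
        return results

    async def slow_double(x):
        await asyncio.sleep(0.01)
        return 2 * x

    print(asyncio.run(map_any(lambda x: x + 1, [1, 2, 3])))  # [2, 3, 4]
    print(asyncio.run(map_any(slow_double, [1, 2, 3])))      # [2, 4, 6]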


I think talking about colouring often misses the point. Sync & async code are fundamentally different; languages without coloured functions make everything async. Everything in Go (for instance) is running in an async runtime, and it's all preemptible.


