Hacker News

Are you saying BEAM introduces too much latency and/or jitter for telco workloads? This surprises me. I would have thought that if anything BEAM should be pretty efficient for network I/O. Also, what would be the causes of unacceptable jitter? The runtime itself is designed to avoid things like long GC pauses - is it that you end up having to migrate Elixir processes between BEAM scheduler cores?


I don't think latency and jitter are big issues. If you're imagining VoIP packets flowing through the Erlang app, it's more likely that they are routed by hardware at a lower level. The Erlang app controls the lower-level switches. I don't know about Ericsson switches in particular, but I've worked on telecom stuff (not programmed in Erlang, unfortunately) and that's how it worked. The low-level routing was done by FPGAs controlled by a C program. The C program could have been written in Erlang instead, and that would have made life a lot nicer for the programmers.


You'd be surprised exactly how much functionality is written in vanilla C and pushed down the stack to raw packet handling. There is hardware acceleration in NICs, but voice is actually a tiny, almost insignificant share of the workload in some parts of the world.

Think hyperscaler-style SDN, but largely on-premises.


I said no such thing. Merely that there is a higher reliance on hardware features even if the workloads are virtualized - more so than on your vanilla K8s cluster in any other industry (except perhaps trading).



