With my mindset, you have a gigantic chunk of data, especially if you're recording multiple streams per machine. The immediate thought is that you want to avoid copying as much as possible. If you really, really have to, you can copy it once. Maybe even twice, though before moving from one copy to two you should spend some time thinking about whether you could move from one to zero instead, i.e., never materialize the full data at all by keeping it compressed. That could apply here, but only as an optimization for certain video applications, so it's irrelevant to the bootstrapping phase.
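To make the 0-versus-1-copy distinction concrete with browser typed arrays (the sizes here are invented for illustration):

```typescript
// A 1 GiB stand-in for the recorded data; the sizes are made up.
const capture = new Uint8Array(1 << 30);

const view = capture.subarray(0, 1 << 20); // 0 copies: a view over the same backing bytes
const copy = capture.slice(0, 1 << 20);    // 1 copy: a fresh 1 MiB allocation
```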
WebSockets take your giant chunk of data and squeeze it through a straw. How many times does each byte get copied in the process? I don't know, but probably more than twice. Even worse, it's going to process it in chunks, so you're going to have per-chunk overhead (maybe including a context switch?) that is O(number of chunks in a giant data set).
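To see where the per-chunk cost comes from, here's a sketch of the obvious way to push a large buffer through a browser WebSocket; the 64 KiB chunk size is an assumption, not anything the API mandates:

```typescript
const CHUNK = 64 * 1024; // assumed chunk size, not mandated by the API

function sendInChunks(ws: WebSocket, data: ArrayBuffer): void {
  for (let offset = 0; offset < data.byteLength; offset += CHUNK) {
    // slice() already copies every byte once, before the browser's internal
    // socket buffering (and the kernel) copies it again on the way out.
    ws.send(data.slice(offset, offset + CHUNK));
  }
}
```

Even a single `ws.send(data)` copies the whole buffer into the browser's internal queue (you can watch it drain via `ws.bufferedAmount`), so the copies happen whether or not you chunk by hand.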
But the application fundamentally requires squishing that giant data back down again, which immediately implies moving the computation to the data. I would want to experiment with a wasm-compiled video compressor (remember, we already have the no-GPU constraint, so it's ok to light the CPU on fire), and then get compressed video out of the sandbox. WebSockets don't seem unreasonable for that -- they probably cost a factor of 2-4 over the raw data size, but once you've gained an order of magnitude from the compression, that's in the land of engineering tradeoffs. The bigger concern is dropping frames if frame generation and readback are coupled to the compression, though I think you could use a Web Worker and a SharedArrayBuffer to put them on different cores, as sketched below.
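A minimal sketch of that split, assuming a SharedArrayBuffer ring buffer between the capture thread and a compression worker. The frame size, ring depth, and the `encodeFrame` call into the wasm encoder are all invented for illustration, and a real page needs cross-origin isolation before SharedArrayBuffer is available at all:

```typescript
// main thread: write frames into shared memory, never copying them to the worker
const FRAME_BYTES = 640 * 480 * 4;     // assumed RGBA frame size
const SLOTS = 8;                       // ring depth before we start dropping frames
const ring = new SharedArrayBuffer(FRAME_BYTES * SLOTS);
const ctrl = new SharedArrayBuffer(8); // [0] = frames written, [1] = frames read
const idx = new Int32Array(ctrl);

const worker = new Worker("compress.js");
worker.postMessage({ ring, ctrl });    // SharedArrayBuffers are shared, not copied

function pushFrame(frame: Uint8Array): boolean {
  const w = Atomics.load(idx, 0);
  if (w - Atomics.load(idx, 1) >= SLOTS) return false; // ring full: the dropped-frame case
  new Uint8Array(ring, (w % SLOTS) * FRAME_BYTES, FRAME_BYTES).set(frame);
  Atomics.store(idx, 0, w + 1);
  Atomics.notify(idx, 0);              // wake the compressor if it's blocked
  return true;
}

// compress.js (worker): spins on the other core, feeding the encoder.
// In a real split file, FRAME_BYTES and SLOTS would be redeclared here.
declare function encodeFrame(frame: Uint8Array): void; // assumed wasm encoder export

onmessage = (e) => {
  const widx = new Int32Array(e.data.ctrl);
  for (;;) {
    const r = Atomics.load(widx, 1);
    if (Atomics.load(widx, 0) === r) {
      Atomics.wait(widx, 0, r);        // sleep until a new frame lands
      continue;
    }
    const frame = new Uint8Array(e.data.ring, (r % SLOTS) * FRAME_BYTES, FRAME_BYTES);
    encodeFrame(frame);
    Atomics.store(widx, 1, r + 1);
  }
};
```

The `false` return from `pushFrame` is exactly the dropped-frame case: if the compressor can't keep up, the capture side finds the ring full and has to drop.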
But I'm wrong. The data isn't so large that the brute-force approach fails outright. My version would have taken longer to get up and running, which means they couldn't have moved on to the rest of the system as quickly.