> TCP without congestion control isn't particularly useful.
That's why it has a congestion control mechanism; it's just opinionated.
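To illustrate the "opinionated but swappable" point: on Linux you can pick a different congestion control algorithm per socket with the TCP_CONGESTION socket option. A minimal sketch (assumes the chosen algorithm, e.g. bbr, is built in or loaded as a module):

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Ask the kernel to use BBR instead of the default (usually cubic).
     * Fails with ENOENT if the bbr module isn't available. */
    const char *algo = "bbr";
    if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, algo, strlen(algo)) < 0)
        perror("setsockopt(TCP_CONGESTION)");

    /* Read it back to confirm what's actually in effect. */
    char buf[16];
    socklen_t len = sizeof(buf);
    if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, buf, &len) == 0)
        printf("congestion control: %.*s\n", (int)len, buf);
    return 0;
}
```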
> until packets are dropped, leading to undesirable buffer bloat issues.
Because people have bigger buffers than needed, which means sawtooth throughput.
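To make the sawtooth concrete, here's a rough Linux-only sketch (the function name watch_cwnd is mine, and it assumes you already have a connected TCP socket) that polls TCP_INFO. With a loss-based algorithm and deep buffers, the congestion window grows until a drop and then collapses, over and over:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Poll the congestion window of an already-connected TCP socket `fd`
 * once per second; on a loss-based algorithm filling an oversized
 * buffer you should see the classic grow-then-halve sawtooth. */
void watch_cwnd(int fd) {
    for (;;) {
        struct tcp_info ti;
        socklen_t len = sizeof(ti);
        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) < 0) {
            perror("getsockopt(TCP_INFO)");
            return;
        }
        printf("cwnd=%u segments  rtt=%u us  retrans=%u\n",
               ti.tcpi_snd_cwnd, ti.tcpi_rtt, ti.tcpi_total_retrans);
        sleep(1);
    }
}
```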
> TCP wasn't designed for hardware offload,
Neither is QUIC.
> TCP's three-way handshake is costly for one-shot RPCs,
Agreed
> A connection breaks when your IP address changes, and there is no easy way to migrate it.
Also agreed, but then TCP wasn't designed for this.
> TCP is poorly suited for optical/WDM networks,
I don't think that's the case. Loads of 10 gig networks are optical, and 40 gig fibre used to be DWDM (i.e. 4 × 10 gig channels in one fibre; 100 gig might be the same, I've not checked). [Again, this depends on the connection type and the SFP pluggable module. It could be twinax.]
That's what RDMA over Converged Ethernet is for, because supposedly it's cheaper than InfiniBand.