Ha! Been running these for years on both Linux and Windows (on Lenovo X1 laptops), using cheap Chinese Thunderbolt-to-NVMe adapters + NVMe-to-PCIe boards + Mellanox CX4 cards (recently got one CX5 and a Solarflare X2).
If you don't mind me asking, what are you using these for? Saturating these seems like it would fit relatively few workloads outside of CDN or multi-tenant scenarios. Curious what my lack of imagination is hiding here.
Officially: to access a NAS and pull raw market-data files (tens to hundreds of gigabytes a day). They aren't needed on the laptop every day, only once in a while to fix or analyze something.
Really: because I can, and it is fun. I upgraded my home LAN to 10G because used 10G hardware is cheap (and now 25G is entering the same price range).
"because I can, and it is fun." The best answer! I am most of the way done with upgrading most of my homelab to 100G from 10G, but there really isn't a practical reason for it. 100G has dropped in price so much as datacenters are all about 400/800G now.
I'm using ConnectX-5s for most of them, and some ConnectX-4s in the older servers. Both of those cards have really come down in price on the used/eBay market. I have been playing around with some different optics - I have a bunch of CWDM4s, which are very inexpensive and use a single single-mode pair - but of course they run hot, so if you have them in servers without good airflow you might have problems.
I'm using mostly fiber, just because the servers are connected to a Cisco 9305 with 72 100G ports.
Oh, they are both loud and eat a ton of power. I think the 9305 is at least 800 watts at idle. That is the biggest downside of retired datacenter gear... you really need a dedicated room with power, cooling, and sound isolation.
I wouldn't really call 100Gbit overkill; compared to modern disk drives, it is about where we should be relative to the shared storage/NAS infrastructure people used to run. So yes, being able to share my /home directory across a few dozen machines at my house without a huge perf impact vs. using a local drive seems a pretty reasonable use case. Sure, it's faster than my WAN access, but who cares?
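As a quick sanity check on that "about where we should be" claim (napkin math with assumed round numbers, nothing more):

    # Napkin math: how does a link speed compare to one local NVMe drive?
    # Assumption: a fast PCIe 4.0 NVMe SSD sustains ~7 GB/s sequential reads.
    nvme_gbits = 7.0 * 8  # ~56 Gbit/s

    for link_gbits in (10, 25, 100):
        print(f"{link_gbits:>3} GbE is {link_gbits / nvme_gbits:.2f}x one NVMe drive")

    #  10 GbE -> 0.18x (the network is the bottleneck)
    #  25 GbE -> 0.45x
    # 100 GbE -> 1.79x (one link carries more than a full drive's worth)

So 100Gbit is roughly the first speed where a shared /home stops feeling slower than a local NVMe disk.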
Frankly, 10Gbit is fully 25 years old, with 10GbaseT being 20 years old this year.
That's ridiculously ancient technology. There is/was a 25/40GbaseT spec too (now 10 years old), which basically no one implemented, because - as with ECC RAM (and tape drives, and they seem to be trying it with hard drives and GPUs) - the MBAs have taken over parts of the computer industry and decided they can milk huge profit margins from technologies that are incrementally more difficult, since smaller users just don't matter to their bottom lines. The only reason those MBAs are allowing us to have it now is that a pretty decent percentage of us can get 5Gbit+ internet access and our wifi routers can do 1Gbit+ wireless, and the weak link is attaching the two.
I did a bit of back-of-the-napkin math/simulation on a possible variable-rate Ethernet (e.g. like NBaseT, where the link supports multiple speeds and selects the fastest one based on line conditions), and concluded that 80+ Gbit on fairly short cable distances (e.g. 30-50 m) over Cat8 is entirely possible using modern PHYs/DSPs with high symbol rates and multiple bands - technology that is dirt cheap thanks to wifi/BT/etc. And this isn't even fantasy: short Cat7 runs are an entirely different ballpark from a phone pair, and these days G.mgfast and friends have shown 10Gbit+ over that junk.
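For the curious, here's roughly how that napkin math goes (a Shannon-limit sketch; the 2 GHz Cat8 bandwidth is from the cabling spec, but the SNR figures for a short ~30 m run are my guesses):

    import math

    # Shannon capacity per pair: C = B * log2(1 + SNR).
    # Cat8 is characterized to 2 GHz; twisted-pair Ethernet uses 4 pairs.
    bandwidth_hz = 2.0e9
    n_pairs = 4

    for snr_db in (20, 25, 30):  # assumed SNR on a short, clean run
        snr = 10 ** (snr_db / 10)
        total_gbit = n_pairs * bandwidth_hz * math.log2(1 + snr) / 1e9
        print(f"SNR {snr_db} dB -> ~{total_gbit:.0f} Gbit/s over 4 pairs")

    # SNR 20 dB -> ~53 Gbit/s
    # SNR 25 dB -> ~66 Gbit/s
    # SNR 30 dB -> ~80 Gbit/s

Real PHYs land well under the Shannon limit, but even at half of these numbers you're past 40GbaseT territory.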
Agreed - the big thing is that 100G is much, much cheaper now, as so much 100G gear is coming out of datacenters - so many of those older ConnectX-4s and -5s, plus lots of switches and optics. 100G really is the new 10G for homelabs.
I do media production, and sometimes move giant files (like GGUFs) around my network, so 25 Gbps is more useful than 10 Gbps, if it's not too expensive.
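The napkin math on why the jump matters (assuming a ~70 GB file, roughly a large quantized GGUF, and idealized wire-speed transfers):

    # Idealized transfer time for a 70 GB file at various line rates.
    # 70 GB is an assumed size; real transfers also pay disk/protocol overhead.
    file_gbits = 70 * 8

    for link_gbits in (1, 10, 25):
        print(f"{link_gbits:>2} Gbps: ~{file_gbits / link_gbits / 60:.1f} min")

    #  1 Gbps: ~9.3 min
    # 10 Gbps: ~0.9 min
    # 25 Gbps: ~0.4 min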
Pic of a previous CX3 (10 gig over TB3) setup: https://habrastorage.org/r/w780/getpro/habr/upload_files/d3c...
10 gig can saturate at full line rate; 25G, in my experience, rarely gets past the same ~20G the author observed.
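My guess at the ceiling (assuming the Thunderbolt setups from upthread; the ~22 Gbit/s figure for usable PCIe payload over TB3 is the commonly cited one, not something I've measured):

    # Why a 25G NIC behind Thunderbolt 3 plausibly tops out near 20G.
    # TB3 is 40 Gbit/s total, but that budget is shared with DisplayPort
    # and overhead; usable PCIe payload is commonly cited at ~22 Gbit/s.
    pcie3_x4_raw = 4 * 8 * 128 / 130  # PCIe 3.0 x4 after 128b/130b encoding

    print(f"PCIe 3.0 x4 raw:   ~{pcie3_x4_raw:.1f} Gbit/s")  # ~31.5
    print("TB3 PCIe tunnel:   ~22 Gbit/s usable")
    print("25G NIC over TB3:  bottlenecked around ~20 Gbit/s")

So the ~20G ceiling looks like the Thunderbolt tunnel, not the NIC.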