You said “legal” risk, not “security” risk. You’ll need to get more information on what risks they are trying to mitigate and talk to a “legal” expert rather than engaging on a technical or security basis.
“in practice FDIC usually will bail out the full balances even over the nominal limit”
That’s not true. It takes the systemic risk exception, which requires agreement between the FDIC board, the Federal Reserve Board, and the Treasury Secretary in consultation with the President. I think it’s happened like 4 times out of the thousands of bank failures that have happened.
There are other cases where the acquiring bank took on uninsured deposits (like JPMC did for First Republic), but in that case your gamble is that the other depositors on the bank’s balance sheet are desirable to the acquirer. Which presumably isn’t the case for your hypothetical max-risk run bank.
But didn't it technically not even apply at the end of the day for SVB? They sold the bank to another bank, which is what usually happens, and that other bank assumed all its deposits and liabilities. The FDIC didn't have to pay out any deposits and thus the limit didn't come into play.
Everyone I know who is using AI effectively has solved for the context window problem in their process. You use design, planning and task documents to bootstrap fresh contexts as the agents move through the task. Using these approaches you can have the agents address bigger and bigger problems. And you can get them to split the work into easily reviewable chunks, which is where the bottleneck is these days.
Plus the highest end models now don’t go so brain dead at compaction. I suspect that passing context well through compaction will be part of the next wave of model improvements.
If you don’t mind me asking, what are you using these for? Saturating these seems like it would have reasonably few workloads outside of CDN or multi-tenant scenarios. Curious what my lack of imagination is hiding here.
Officially: to access the NAS and pull raw market-data files (tens to hundreds of gigabytes a day); not needed on the laptop every day, only once in a while to fix or analyze something.
Really: because I can, and it is fun. I upgraded my home LAN to 10G because used 10G hardware is cheap (and now 25G is entering the same price range).
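For scale, that “once in a while” transfer is where the upgrade pays off. A rough sketch of the time saved — the 100 GB file size and the ~80% usable line rate (protocol overhead, TCP, etc.) are illustrative assumptions, not measurements:

```python
# Rough transfer-time comparison for pulling a big market-data file
# over 1 GbE vs 10 GbE vs 25 GbE. File size and efficiency are
# illustrative assumptions.

FILE_GB = 100     # assumed size of one raw market-data file
EFFICIENCY = 0.8  # assumed fraction of line rate actually achieved

def transfer_minutes(link_gbit: float) -> float:
    """Minutes to move FILE_GB gigabytes at the given link speed."""
    usable_gbit = link_gbit * EFFICIENCY
    seconds = FILE_GB * 8 / usable_gbit
    return seconds / 60

for link in (1, 10, 25):
    print(f"{link:>2} GbE: {transfer_minutes(link):5.1f} min")
```

At these assumed numbers the same pull drops from roughly a quarter hour on 1 GbE to a couple of minutes on 10 GbE, which is the difference between “go get coffee” and “just wait”.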
"because I can, and it is fun." The best answer! I am most of the way done with upgrading most of my homelab to 100G from 10G, but there really isn't a practical reason for it. 100G has dropped in price so much as datacenters are all about 400/800G now.
I'm using ConnectX-5s for most, and some ConnectX-4s in the older servers. Both of those cards have really come down in price on the used/eBay market. I've been playing around with some different optics: I have a bunch of CWDM4s, which are very inexpensive and use a single single-mode pair... but of course they run hot, so if you have them in servers without good airflow you might have problems.
I'm using mostly fiber just because the servers are connected to a Cisco 9305 with 72 100G ports.
Oh, they are both loud and eat a ton of power. I think the 9305 is at least 800 watts at idle. That is the biggest downside of retired datacenter gear... you really need a dedicated room with power, cooling, and sound isolation.
I wouldn't really call 100Gbit overkill; if you compare it to modern disk drives, it's about where we should be relative to the shared storage/NAS/etc. infrastructure people used to run. So yes, being able to share my /home directory across a few dozen machines at my house without a huge perf impact vs. a local drive seems a pretty reasonable use case. Sure, it's faster than my WAN access, but who cares?
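The disk-drive comparison is easy to sanity-check. A quick sketch — the drive throughput figures below are ballpark round numbers, not benchmarks:

```python
# Compare ballpark sequential throughput of common drives (assumed
# round numbers) against a 100 Gbit/s network link.

drives_gbyte_s = {
    "SATA SSD": 0.55,       # ~550 MB/s, capped by SATA 6 Gb/s
    "PCIe 4.0 NVMe": 7.0,   # ~7 GB/s sequential (typical spec)
    "PCIe 5.0 NVMe": 14.0,  # ~14 GB/s sequential (typical spec)
}

LINK_GBIT = 100

for name, gbyte_s in drives_gbyte_s.items():
    gbit_s = gbyte_s * 8  # bytes/s -> bits/s
    verdict = "fits in" if gbit_s <= LINK_GBIT else "exceeds"
    print(f"{name}: ~{gbit_s:.0f} Gbit/s ({verdict} a {LINK_GBIT}G link)")
```

Under these assumptions a single PCIe 4.0 NVMe drive already wants more than half of a 100G link, and a PCIe 5.0 drive can outrun it, so a 100G network is roughly the point where remote storage stops being the obvious bottleneck vs. a local drive.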
Frankly, 10Gbit is a full 25 years old, with 10GBASE-T turning 20 this year.
That's ridiculously ancient technology. There is/was a 25/40GBASE-T spec too (now 10 years old) which basically no one implemented, because, as with ECC RAM (and tape drives, and what they seem to be trying with hard drives and GPUs), the MBAs have taken over parts of the computer industry and decided they can milk huge profit margins from technologies that are only incrementally more difficult, since smaller users just don't matter to their bottom lines. The only reason those MBAs are allowing us to have it now is that a pretty decent percentage of us can get 5 Gbit+ internet access and our wifi routers can do 1 Gbit+ wireless, and the weak link is being able to attach the two.
I did a bit of back-of-the-napkin math/simulation on a possible variable-rate Ethernet (like NBASE-T, which supports multiple speeds and selects the faster one based on line conditions), and concluded that 80+ Gbit on fairly short cable distances (e.g. 30-50 m) of Cat8 is entirely possible using modern PHY/DSP, high-symbol-rate, multi-band technology that is dirt cheap thanks to wifi/BT/etc. And this isn't even fantasy: short Cat7 runs are an entirely different ballpark from a phone pair, and these days MGfast and friends have shown 10 Gbit+ over that junk.
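The napkin math above can be reproduced with a plain Shannon-capacity estimate. All the inputs here are assumptions rather than measurements — Cat8's 2 GHz spec bandwidth, four pairs, and ~30 dB of usable SNR on a short run — but they land right around the 80 Gbit figure:

```python
# Back-of-napkin Shannon capacity for a short Cat8 run.
# C = B * log2(1 + SNR) per pair, times four pairs.
# Bandwidth and SNR are rough assumptions, not channel measurements.
import math

BANDWIDTH_HZ = 2e9  # Cat8 is specced to 2 GHz per pair (assumption)
PAIRS = 4
SNR_DB = 30         # assumed achievable on a short (30-50 m) run

snr_linear = 10 ** (SNR_DB / 10)
per_pair_gbit = BANDWIDTH_HZ * math.log2(1 + snr_linear) / 1e9
total_gbit = per_pair_gbit * PAIRS

print(f"~{per_pair_gbit:.1f} Gbit/s per pair, ~{total_gbit:.0f} Gbit/s total")
```

This is an upper bound, not a PHY design — real coding and DSP overhead eat into it — but it shows why 80+ Gbit over short copper isn't fantasy.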
Agreed - the big thing is that 100G is much, much cheaper now, as so much 100G gear is coming out of datacenters. So many of those older ConnectX-4s and 5s, plus lots of switches and optics. 100G really is the new 10G for homelabs.
I do media production, and sometimes move giant files (like GGUFs) around my network, so 25 Gbps is more useful than 10 Gbps if it's not too expensive.
We didn’t do it because money was cheap; we did it because there are tons of benefits to not having to inventory your own compute. Everything from elastic scaling to financial engineering was improved by the hyperscaler options, and it’s ridiculous to act like those options aren’t valuable post hoc because Europe doesn’t have a native one.
I think Hetzner and its ilk are coming along nicely and can probably support a lot of Europe’s cloud computing needs, but they aren’t currently in the same league as the hyperscalers when it comes to capabilities. It would be great for everyone if they got there, though.
Left-leaning in the US has not meant friendly to international trade; historically it’s been the opposite. The Clinton/Obama branch of the Democrats, who were pro free trade, are really the exception.
That the Republicans sold out their business wing for cronyism and populism with MAGA may end up being the negative outcome of that movement with the longest-lasting ramifications (my thinking being that administrations can change immigration policy easily, and that Trump is more the final nail in the rules-based international order than the initiator of its demise).