Then I'm mistaken. I thought the law only demanded that piracy sites were blocked, and then ISPs made life easier for themselves by blocking all of Cloudflare.
At any rate, this behavior isn't befitting a serious country like Spain.
You're not wrong but we've never really tried the combination of modern CSS with no JS. It could produce elegant designs that load really fast... or ad-filled slop, but declarative.
Yes to the modern CSS. To go as far back as suggested would mean using frames again and table-based layouts with invisible 1x1 GIFs for spacing. Never again!
I could be wrong but it seems like in the case of a crash no one will be buying new GPUs and thus the existing ones could hold their value longer. Of course that value will no longer be massively inflated by bubble FOMO.
>in the case of a crash no one will be buying new GPUs and thus the existing ones could hold their value longer.
No, because no one has any use for those monstrous GPUs outside of ML and some research projects. They can't even be dropped onto the consumer market because a SOHO is not equipped to house devices like that. The best case scenario is that the boards get dismantled and the VRAM gets salvaged for refurbishing. They've built these machines so specialized that they're essentially disposable.
What are you basing that on? Some of the demand that currently exists, exists because of all the money sloshing around the AI ecosystem (i.e. people using AI to sell AI solutions to other people), so how are you so sure demand can fully utilize all existing compute even after a crash?
It isn’t about holding value, the cards are going to burn up. If they don’t, in 5 years one could run a rack of 4 cards at home at an affordable rate. Either the cards become affordable again and the datacenter is useless, or they don’t, and nobody can fucking afford to rent them.
GPUs definitely have higher failure rates than CPUs but I'm not sure what the absolute rates will turn out to be. If 10% of GPUs die within 5 years that's very high but also probably economically fine. If 50% die that's a disaster.
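To make the "10% is fine, 50% is a disaster" intuition concrete, here's a back-of-the-envelope sketch. Every number in it (card cost, revenue per card) is an invented assumption for illustration, not real data:

```python
# Hypothetical back-of-the-envelope: how much failure rates eat into
# a GPU fleet's economics. All dollar figures are made-up assumptions.

def replacement_cost_fraction(failure_rate_5yr, card_cost, revenue_per_card_5yr):
    """Fraction of 5-year revenue consumed by replacing failed cards."""
    return (failure_rate_5yr * card_cost) / revenue_per_card_5yr

# Assume a $30k card that earns $60k over 5 years (both invented figures).
low = replacement_cost_fraction(0.10, 30_000, 60_000)   # 10% die within 5 years
high = replacement_cost_fraction(0.50, 30_000, 60_000)  # 50% die within 5 years

print(f"10% failures: {low:.0%} of revenue")   # 5% of revenue
print(f"50% failures: {high:.0%} of revenue")  # 25% of revenue
```

Under these assumed margins, a 10% loss rate is an annoyance while a 50% rate consumes a quarter of gross revenue, before even counting downtime.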
Sorry, I meant at some point the current cards in the data centers will be financially obsolete. They’ll be sold on the secondary market. Buying 8 H200s at $150 a pop will either be a real thing, or they all burn up and capex explodes again, which would be a death knell.
They mostly relied on OS/Toolbox implementation quirks though, not hardware implementation quirks, because applications that relied on the latter wouldn’t run on the Macintosh XL and that mattered to certain market segments. (Like some people using spreadsheets, who were willing to trade CPU speed for screen size.) Similarly anything that tried to use floppy copy protection tricks wouldn’t work due to the different system design, so that wasn’t common among applications.
So even things that wrote directly to the framebuffer would ask the OS for the address and bounds rather than hardcode them, copy protection would be implemented using license keys (crypto/hashes, not dongles) rather than weird track layouts on floppies, etc. It led to good enough forward compatibility that the substantial architectural changes in the Macintosh II were possible, and things just improved from there.
Eh, there were plenty of games that were coded for a particular clock speed, and then once the SE came out, got an update that included a software version of a turbo button, letting you select which of two speeds to run at. They run FAST on an SE/30 or Mac II and unusably fast on anything newer.
I didn’t encounter too many of those back in the day, I think because there was the VBL task mechanism for synchronizing with screen refresh that made it easy to avoid using instruction loops for timing.
Much more common in my experience was the assumption that the framebuffer was 1-bit, but such games would still run on my IIci if I switched to black & white—they’d just use the upper left 3/4 of the screen since they still paid proper attention to the bytes-per-row in its GrafPort.
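The rowBytes behavior described above can be sketched in a few lines. This is illustrative pseudo-QuickDraw in Python, not real Toolbox code; the resolutions are the stock compact-Mac (512x342) and a IIci-class 640x480 display:

```python
# Sketch of 1-bit framebuffer addressing in the QuickDraw style:
# each row is rowBytes bytes long, one bit per pixel, MSB-first.
# (Illustrative only; not actual Toolbox code.)

def set_pixel(framebuffer, row_bytes, x, y):
    """Set pixel (x, y) in a 1-bit framebuffer."""
    offset = y * row_bytes + (x // 8)
    framebuffer[offset] |= 0x80 >> (x % 8)

# A 640x480 screen in B&W mode: 640 bits per row = 80 bytes.
ROW_BYTES = 640 // 8
fb = bytearray(ROW_BYTES * 480)

# A game that hardcodes the compact Mac's 512x342 resolution but honors
# the GrafPort's real rowBytes still lands every pixel correctly --
# just confined to the upper-left 512x342 region of the larger screen.
for y in range(342):
    for x in range(512):
        set_pixel(fb, ROW_BYTES, x, y)

touched = sum(1 for b in fb if b)  # bytes with at least one pixel set
print(touched)  # 342 rows * 64 bytes per row = 21888
```

A game that hardcoded 64 bytes per row instead would smear diagonally on the wider screen, which is presumably what separated the titles that ran on a II from the ones that didn't.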
Could be that by the time I was using a Mac II though that all the games that didn’t meet that minimum bar had already been weeded out.
Indeed, even ones from companies as big as Microsoft!
There is a story in Writing Solid Code by Steve Maguire [1] where Apple asked Microsoft to fix hacks in its Mac apps that didn't conform to the developer docs in Inside Macintosh. Such workarounds had been required when the apps were first developed alongside the original Macintoshes, but they would be broken by a major System Software update under development at Apple, which naturally wanted to avoid permanently adding back the implementation bugs and quirks that the workarounds either relied on or were meant to avoid.
As Maguire told it, removing one such workaround in Microsoft Excel was hotly debated by the Excel team because it was in a hot-path 68k assembly function and rewriting the code to remove it would add 12 CPU cycles to the function runtime. The debate was eventually resolved by one developer who ran Excel's "3-hour torture test" and counted how many times the function in question was called. The total: about 76,000 times, so 12 more cycles each time would be about 910,000 cycles total... which on the Macintosh 128k's ~7 MHz 68000 CPU would be about 0.15 seconds added to a 3-hour test run. With the slowdown from removing the workaround thus proven to be utterly trivial, it was indeed removed.
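The arithmetic in that anecdote checks out; here is the back-of-the-envelope version (using the story's rounded ~7 MHz clock; the actual 128k 68000 ran at about 7.83 MHz):

```python
# Back-of-the-envelope check of the Excel anecdote's arithmetic.
CALLS = 76_000          # times the function ran in the 3-hour torture test
EXTRA_CYCLES = 12       # added cost of removing the workaround, per call
CLOCK_HZ = 7_000_000    # ~7 MHz 68000, as rounded in the story

total_cycles = CALLS * EXTRA_CYCLES
seconds = total_cycles / CLOCK_HZ
print(total_cycles)        # 912000 cycles
print(round(seconds, 2))   # ~0.13 s over a 3-hour run, i.e. pure noise
```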
Out of curiosity, what app are you thinking of? Of all the types of software used with classic Mac OS (INITs, CDEVs, FKEYs, Desk Accessories, Drivers, etc.), apps would be the least likely to rely on implementation quirks.
Macintosh Common Lisp - at least the versions floating around Mac Garden and such - seems to refuse to run on anything besides accurate emulators and real hardware.
I'm not "refusing to add TLS support"; I insist that the certificate be safely isolated in a separate process for security reasons. There are many ways to skin that cat.
The whole point of Varnish Software keeping a public version of "vinyl cache" as "varnish cache" with TLS is to give people a way to access a FOSS version with native TLS.
I think TLS is table-stakes now, and has been for the last 10 years, at least.
haproxy supports both the offload (client) and onload (backend) use case. This is the main reason why I personally prefer it. I cannot comment on how well hitch compares, because I have not used it in years.
FWIW, Varnish Software still maintains and supports hitch, but we can't say we see a bright future for it. Both the ergonomics and the performance of not being integrated into Varnish are pretty bad. It was the crutch we leaned on, as it was the best thing we could make available.
I would recommend migrating off within a year or two.
To claim "the ergonomics and the performance of not being integrated into Varnish are pretty bad" you would need to show some numbers.
In my view, https://vinyl-cache.org/tutorials/tls_haproxy.html debunks the "ergonomics are bad" argument, because using TLS backends is literally no different than using non-TLS.
On performance, the fundamentals have already been laid out in https://vinyl-cache.org/docs/trunk/phk/ssl.html - crypto being so expensive that the additional I/O to copy in and out of another process makes no difference.
We've been pushing 1.5 Tbps with TLS in lab settings. I've yet to see any other HTTP product able to saturate that kind of network. There is lots to be said about threading, but it can push a lot of bandwidth.
And yes, I think the ergonomics are bad. Having Varnish lose visibility into the transport means ACLs are gone, JA3 and similar fingerprinting are gone, and the opportunities to defend against DoS are much more limited.
Crypto used to be expensive in 2010. It is no longer that expensive. All the serialization, on the other hand, is expensive, and the latency adds up.
Every single HTTP server in use out there has TLS support. The user's expectation is that the HTTP server can deal with TLS.
Thanks for the info, but I'm a bit confused, sorry.
The reason for hitch was that TLS and caching are separate concerns, and the current recommendation is to use haproxy, which also isn't integrated into varnish/vinyl.
But you say that the reason to migrate off hitch is that it's not integrated?
So what happened to separation of concerns, then? Is the plan to integrate TLS termination into vinyl? Is this a change of policy/outlook?
So because perbu was clearly talking with his Varnish Software hat on, here's the perspective from someone working on Vinyl Cache FOSS only:
I already commented on the separation of concerns in the tutorial, and the unpublished project which one person from uplex is working on full time will have the key store in a separate process. You might want to read the intro of the tutorial if you have not done so.
But the main reason for why the new project will be integrating TLS more deeply has not been mentioned: It is HTTP/3, or rather QUIC. More on that later this year.
Varnish Software released hitch to facilitate TLS for varnish-cache.
Now that Varnish has been renamed, Varnish Software will keep what has been referred to as a downstream version or a fork, which has TLS built in, basically taking the TLS support from Varnish Enterprise.
This makes Hitch a moot point. So, I assume it'll receive security updates, but not much more.
Wrt. separation of concerns. Varnish with in-core TLS can push terabits per second (synthetic load, but still). Sure, for my blog, that isn't gonna matter, but having a single component to run/update is still valuable.
In particular using hitch/haproxy/nginx for backend is cumbersome.
Totally agree. But, if I may, the docs on varnish and TLS are hella confusing. I just re-read the varnish v9 docs, and it's not clear at all whether it supports TLS termination.
Literally every doc, from the install guide to the "beef in the sandwich", talks about it NOT supporting TLS termination... then one teeny para in "extra features in v9.0" mentions 'use -A flag'...
This is cool! But also worth mentioning: sure, I know it's an open source project so you don't owe anyone anything, but it's also one with a huge company behind it. This is a huge change of stance, and it sounds cool.
Not the original poster but I do have some ideas. Official Bluesky clients could randomly/round-robin access 3-4 different appview servers run by different organizations instead of one centralized server. Likewise there could be 3-4 relays instead of one. Upgrades could roll across the servers so they don't all get hit by bugs immediately.
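The rotation idea above can be sketched in a few lines. The hostnames here are invented for illustration, and this is just one way a client might distribute requests across independently operated appviews:

```python
import itertools
import random

# Hypothetical, independently operated appview servers (invented hostnames).
APPVIEWS = [
    "appview-a.example.org",
    "appview-b.example.net",
    "appview-c.example.com",
]

def appview_rotation(servers):
    """Yield servers round-robin, starting at a random offset so all
    clients don't hammer the same host first."""
    start = random.randrange(len(servers))
    return itertools.cycle(servers[start:] + servers[:start])

rotation = appview_rotation(APPVIEWS)
# Each request (or retry after a failure) takes the next server:
picks = [next(rotation) for _ in range(6)]
print(picks)  # cycles through all three hosts twice, from a random start
```

A real client would also want health checks and to skip a server that starts misbehaving after a bad upgrade, which is exactly the staggered-rollout benefit described above.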
This is why I'm hoping fiatjaf has a recommendation here. I have a feeling he might have a proposal that solves this. But doesn't solve all of it, just some of it.