In the same vein as ad blocking, the fundamental question here is: does a service have the right to control how you DON'T use their service? Are you legally obligated to be mentally influenced by adverts, with no right to close your eyes or look away?
I'd love to see the EFF or similar take on Big (Ad)tech and settle this in court.
They've gone after youtube-dl and lost; Invidious is still there; etc.
It might not be illegal (criminal) to use a tool like Dull or an ad blocker, but it is almost certainly a violation of the platform's ToS. That means the platform (Instagram/YouTube) can legally ban your account or block your IP address for using such tools, even if it can't successfully sue the tool's creator in court.
Given how broad the CFAA is, Instagram/YouTube could just try framing it as accessing their systems without proper permission, as the ToS disallow such usage.
How can you be sure that they “will not even consider” doing that? (That’s disinformation on your part!)
If this app were to gain traction and start to be seen as a real problem by IG/YT, they would have ample legal grounds to act. They could absolutely sue the app creator, and they would very likely win the case under the CFAA.
How exactly is this disinformation?
It is speculative, but calling it disinformation is dishonest, especially since you then presented your own completely unsupported claim that they somehow won’t even consider it. It is totally within the realm of possibility, and hence IMO something to keep in mind when considering selling this sort of app/service.
The problem (or not, depending on your POV) is that ToS are subject to legal constraints. As the dominant platform in a critical service area, YT needs to maneuver carefully.
> My DNS "server" is a router which can "add" static entries...won't work with dynamic addresses.
Sounds like a pretty poor setup; systems that can auto-add DHCP'd or discovered entries have been around for literally decades. You're choosing to live with that limitation.
> What redundancy, multiple servers?
Multicast name resolution is a thing. Hosts can send out queries and other devices can respond back. You don't need a centralized DNS server to have functional DNS.
Oh yes, that's really convenient for home users. "Install this thing on several computers and keep it in sync or you're not qualified to have a network"
Home users would ideally be served by things like mDNS and LLMNR, which should just work in the background. If I want to connect to the thermostat I should be able to just go to http://honeywell-thermostat and have it work. If I want to connect to the printer it should just be ipp://brother and I shouldn't even need to have a DNS server.
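For the curious, mDNS is just ordinary DNS carried over multicast UDP (sent to 224.0.0.251 on port 5353): a host broadcasts a query and whichever device owns the name answers directly, no central server involved. A minimal sketch of building such a query with only the Python standard library (the hostname is hypothetical; this builds the packet but doesn't send it):

```python
import struct

def mdns_query(hostname: str) -> bytes:
    """Build a one-question mDNS query for an A record of `hostname`.

    mDNS reuses the standard DNS wire format (RFC 1035 / RFC 6762);
    the resulting packet would be sent via UDP to 224.0.0.251:5353.
    """
    # Header: ID=0 (mDNS convention), flags=0 (standard query),
    # 1 question, 0 answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # QNAME: each dot-separated label is length-prefixed,
    # and the whole name is terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN).
    question = qname + struct.pack("!HH", 1, 1)
    return header + question

packet = mdns_query("brother.local")
```

Any mDNS responder on the segment (the printer itself, typically) that owns `brother.local` would reply with its address, which is exactly why no centralized nameserver is needed.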
Your interface has a default gateway configured for it, doesn't it? Isn't that default gateway the router? NDP shows the local routers through router advertisements, and there's also LLDP to help find such devices. LLMNR/mDNS provide DNS service even without a centralized nameserver (hence the whole "I shouldn't even need to have a DNS server"). There's so much out there beyond memorizing numbers. I've been working with IPv6 for nearly 20 years and I've never had a "what was the IP address of the local router" problem, because there are so many ways to find devices.
Even then, nobody is stopping you from giving them memorable IP addresses. Giving your local router a link-local address of fe80::1 is perfectly valid. Or, if you need more than link-local scope and still want memorable addresses, use ULAs and make the router on network one fd00:1::1, the router on network two fd00:2::1, the router on network three fd00:3::1, etc. Is fe80::1 or fd00:1::1 really that much harder to memorize than 192.168.0.1 or 192.168.1.1 or 10.0.0.1, if you're really super gung-ho about memorizing numbers?
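The stdlib `ipaddress` module can sanity-check that these memorable choices are valid (addresses taken from the comment above; purely illustrative):

```python
import ipaddress

# fe80::/10 is the link-local range, so fe80::1 is a legal,
# easy-to-remember choice for a router's link-local address.
router_ll = ipaddress.ip_address("fe80::1")
assert router_ll.is_link_local

# fd00::/8 falls in the ULA range (RFC 4193); fd00:1::1, fd00:2::1, ...
# give you one memorable router address per internal network.
for net in range(1, 4):
    router = ipaddress.ip_address(f"fd00:{net}::1")
    assert router.is_private and not router.is_link_local

# IPv6 text notation compresses runs of zero groups, so the full
# 128-bit address still prints back as the short, memorable string.
print(ipaddress.ip_address("fd00:0001:0000:0000:0000:0000:0000:0001"))
# -> fd00:1::1
```

The zero-compression rule (`::`) is what keeps these addresses short in practice; you only ever type the memorable form.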
I expect the Windows 2000 build may work on NT 4.0, because NT 4.0 has OpenGL, but Windows 3.11 with Win32s will never work, because it does not have OpenGL (I think...).
This suggests the checksum is used to identify whether the binary is known to BOT, and thus whether BOT can optimize the binary.
I do wonder what this "optimize" step actually entails; does it just replace the binary with one that Intel themselves carefully decompiled and then hand-optimised? If it's a general "decompile-analyse-optimise-recompile" pass (perhaps something similar to what the Transmeta Crusoe, https://en.wikipedia.org/wiki/Transmeta_Crusoe, does), why restrict it?
Speaking of recent x86 processors without an FPU, it's notable that some variants of the Intel Quark (https://en.wikipedia.org/wiki/Intel_Quark) were basically a 486SX core with some Pentium instructions "backported", and hence lack the FPU.
Lakemont doesn't actually share much history with the 486SX.
Hard to tell the exact ancestry of Lakemont. It might be a from-scratch scalar design that just so happened to target the i586 instruction set with a 486-style five-stage scalar pipeline, or it might be forked and hacked down from something more modern, like Atom.
It's a very configurable core. There are versions with just the FPU and NX bit turned off; the MCU variant goes further, turning off segmentation, the BCD/string instructions, xchg, and the AF/PF flags.
Intel did a talk on the MCU core at hot chips [1], which gives you a better idea of how they made/configured it (but doesn't give much insight into where the original Lakemont core came from).
But modern hardware design works a lot like software these days. Teams write Verilog code, fork designs off, and share code or entire modules with each other. The code is often configurable, and the code + configuration then gets thrown at automated tooling that spits out reasonably good gate layouts.
They reused the diagrams (and much of the text) from the 486 era for the Quark documentation, which I think wouldn't happen if they weren't reusing the core design. I remember an early datasheet had "QuarkDX" show up a few times.
The docs are written by completely different Technical Writing teams, who don't always fully understand the design they are documenting. They are usually correct enough for someone using them normally, but I've learned that relying on manuals to actually understand the underlying implementation (or its history) is problematic.
So reuse of 486 documentation doesn't prove anything. IMO it's more likely that the Technical Writing team saw how close the design was to the 486 and decided to re-use that documentation as the starting point.
Interesting to look back on an era when a company could create an entire CPU and bring it to market with reasonably competitive performance.
CPUs today are so powerful and complex that it takes highly capitalized companies many years and multiple generations to get something that approaches modern performance expectations. Arguably we’re still waiting for a RISC-V CPU with decent performance to reach the market.