There's a distinction between side channels that exploit intended behavior and flat-out bugs like the above, though the line can often be muddied by perspective.
An example I've seen is an anonymous app that allows blocking users: you can programmatically block users, query all posts, and diff the result sets to identify stable identities. The ability to block users is something the app developers want; they just didn't intend this consequence, and there's no immediate fix for it. That's different from a 'user_id' simply being returned by the API for no reason, which is a vulnerability. Then there's the middle case, where the user_id is returned for a reason that MIGHT be important but could be implemented more sensibly another way; that leans more toward vulnerability.
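To make the side channel concrete, here's a minimal Python sketch of the block-and-diff technique against a toy service. `FakeAPI` and all of its method names are invented stand-ins for illustration, not the real app's endpoints:

```python
# Toy 'anonymous' posting service: posts carry no author field in the API,
# but a blocked user's posts are hidden from the feed -- and that hiding
# is the side channel.

class FakeAPI:
    def __init__(self, posts):
        self._posts = posts        # {post_id: author_id}, server-side only
        self._blocked = set()

    def block_user(self, uid):
        self._blocked.add(uid)

    def unblock_user(self, uid):
        self._blocked.discard(uid)

    def fetch_visible_post_ids(self):
        return {pid for pid, author in self._posts.items()
                if author not in self._blocked}

def identify_author(api, post_id, candidate_uids):
    """Block each candidate in turn; the post vanishing fingers its author."""
    for uid in candidate_uids:
        api.block_user(uid)
        visible = api.fetch_visible_post_ids()
        api.unblock_user(uid)
        if post_id not in visible:
            return uid
    return None

api = FakeAPI({"p1": "alice", "p2": "bob"})
print(identify_author(api, "p2", ["alice", "bob"]))  # prints: bob
```

Each probe costs one block/unblock round trip, so rate limiting slows the attack down but doesn't eliminate it, which is part of why there's no immediate fix while blocking remains a desired feature.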
Ultimately most fingerprinting technologies use features that are intended behavior; Canvas/font rendering is useful for some web features (and targeting the web means you have to support a LOT of use cases), IP address/cookies/useragent obviously are useful, etc. (though there's a case to be made about Google, as an advertising company, pushing for these features!).
> Ultimately most fingerprinting technologies use features that are intended behavior
Strong disagree.
> IP address/cookies/useragent obviously are useful
Cookies are an intended tracking behavior. IP Address, as a routing address, is debatable.
> Canvas/font rendering is useful for some web features
These two are actually wonderful examples of taking web features and using them as a _side channel_ in an unintended way to derive information that can be used to track people. A better argument would be things like Language and Timezone which you could argue "The browser clearly makes these available and intends to provide this information without restriction." Using side channels to determine what fonts a user has installed... well there's an API for doing just that[0] and we (Firefox) haven't implemented it for a reason.
n.b. I am Firefox's tech lead on anti-fingerprinting so I'm kind of biased =)
The thing is, a technology either enables something or it doesn't. The exploration space might be huge, but once an exploit is found, the exploitation code/strategy/plan can trivially proceed and be shared worldwide. So you have to deal with this when you design and patch systems.
Example: moving tracking data into URL paths. Safari ITP aggressively removes “utm_” and other well-known querystring parameters, even in links clicked from email. Well, it is trivial to embed the same data in a path segment instead, so that first-party websites can still track attribution, e.g. for campaign performance or email verification links. In theory, Apple and Mozilla could play a cat-and-mouse game with links across all their users and remove high-entropy path segments, or confuse websites so much that they give up on attribution entirely. Browser makers, email client makers, or messenger makers could argue that users don’t want their link clicks attributed silently without permission. They could then say that if users really wanted it, they could manually enter a code (assisted by the OS or browser) into a website, or interactively grant permission to be tracked after clicking a link; otherwise the website receives dummy results and breaks. Where is the line after all?
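A quick Python sketch of why this is a cat-and-mouse game: an ITP-style query cleaner (the function below is an illustrative stand-in, not Safari's actual logic) strips `utm_` parameters, but the same token moved into a path segment sails straight through:

```python
# Illustrative sketch: strip utm_* query parameters the way a tracking-
# prevention feature might, then observe that a path-embedded token survives.

from urllib.parse import urlsplit, parse_qs

def strip_tracking_params(url):
    """Crude stand-in for an ITP-style cleaner: drop utm_* query params."""
    parts = urlsplit(url)
    kept = [f"{k}={v}"
            for k, vs in parse_qs(parts.query).items()
            for v in vs
            if not k.startswith("utm_")]
    return parts._replace(query="&".join(kept)).geturl()

query_url = "https://example.com/article?utm_campaign=spring"
path_url  = "https://example.com/c/spring/article"  # token moved into the path

print(strip_tracking_params(query_url))  # prints: https://example.com/article
print(strip_tracking_params(path_url))   # path untouched; attribution survives
```

To strip the path version, the browser would have to guess which path segments are content and which are tracking tokens, which is exactly the "remove high-entropy path segments" arms race described above.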
I'm not sure I'd use "compromise" at all. These (or at least the ones I have) are purposefully designed with zero authentication or pairing; the ones that use apps are already "compromised" in the sense that I can walk past any windowsill with one on it, open the app, and it will immediately connect. I really don't mind if someone walking by were to change the LED color patterns.
This is a very common pattern; my university pushed through a ZeroEyes AI camera/open carry weapon detection contract within 2 weeks of a shooting at a nearby school, even though it’s trivial to bypass by concealing the weapon. It’s most probably just (gruesome as it is to think about) bad-press insurance, so that if anything happened, they can say they had “state of the art AI detection” and did all they could. No one wants to be the one caught not doing “all they could” against the media cacophony in the immediate aftermath.
Yep, here they admitted there were local revolutionary war re-enactors who were falsely flagged (although thankfully they didn't let it get past the first flag).
This also smells of an autoregressive model trying to make the point that TiinyAI simply forked another repo and claimed it as their own invention, before realizing mid-paragraph it's by the same people:
>So no, TiinyAI did not “launch” PowerInfer. SJTU researchers did.
>TiinyAI’s GitHub repo is a fork of the original PowerInfer repository. At least one of the original academic authors appears tied to the code history. So there is clearly some real overlap between the research world and the product world.
No conspiracy necessary. The CIA bought the rights to the 1954 film Animal Farm, modified the ending to fit propagandist ends, and it went undiscovered for four decades. The original Top Gun was intended to recover the image of the US Navy after the Vietnam War. Etc etc etc.
> No conspiracy necessary. The CIA bought the rights to the 1954 film Animal Farm, modified the ending to fit propagandist ends,
Yeah, I remember reading the book and then watching the movie, and IIRC there were differences; it's available on YouTube for free and I remember some comments talking about the different ending.
IIRC, in the movie the animals finally kick the pigs out and everything. It was a good ending.
But in the book there was no good ending; the humans and the pigs were celebrating together and then ended up fighting among themselves.
> Twelve voices were shouting in anger, and they were all alike. No question, now, what had happened to the faces of the pigs. The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.
This is the last paragraph of the book (I had to download it via archive.org to find it).
Is there any evidence that going outside the scope of the agreement would amount to anything more than a contract violation? Are we really to expect that Anthropic general counsel sits at the API gates allowing or blocking requests?
More generally, are there any comparable contract requirements in the field of defense, for a company in the same position as Anthropic? I'm curious.
Yeah, the optimization is going to make or break it. I've heard people say that 8GB on their M-chip Airs is sufficient, but I do wonder if that will still be true with the new macOS; maybe we'll get a cleanup/performance release cycle?... With regards to AI, I hope it's not a Gemini/Pixel situation where there's a lot of RAM but 3.5GB is permanently reserved to keep the on-device model always available.
I expect the customer of this product is not worried about repairability: to them, it's just an iPad with a keyboard. You're also citing 3x higher costs, so they're really not comparable.
The lack of upgradability is directly what provides a lot of benefits that I expect the average consumer vastly prefers: better performance with soldered memory and better battery life. It's not just to shaft you on prices (though that's definitely a big factor).