
> Any power users who prefer their own key management should follow the steps to enable Bitlocker without uploading keys to a connected Microsoft account.

Once the feature exists, it's much easier to use it by accident. A finger slip, a bug in a Windows update, or even a cosmic ray flipping the "do not upload" bit in memory, could all lead to the key being accidentally uploaded. And it's a silent failure: the security properties of the system have changed without any visible indication that it happened.





There's a lot of sibling comments to mine here that are reading this literally, but instead, I would suggest the following reading: "I never selected that option!" "Huh, must have been a cosmic ray that uploaded your keys ;) Modern OS updates never obliterate user-chosen configurations"

They just entirely ignore them instead.

This is correct. I also discovered, while preparing several ThinkPads for a customer based on a Windows 11 image I made, that even if you have BitLocker disabled you may also need to check that hardware disk encryption is disabled (it was enabled by default in my case). Although this differs from BitLocker in that the encryption key is stored in the TPM, it is something to be aware of, as it may be unexpected.

If users are so paranoid that they worry about a cosmic ray bit flipping their computer into betraying them, they're probably not using a Microsoft account at all with their Windows PC.

If your security requirements are such that you need to worry about legally-issued search warrants, you should not connect your computer to the internet. Especially if it's running Windows.

In the modern political environment, everyone should be worried about that.

In all political environments everyone should be worried about that. The social temperature can change rapidly and you generally can't force a third party to destroy copies of your things in a reliable manner.

Right, this is just a variation on "If you have nothing to hide..."

ETA: You're not wrong; folk who have specific, legitimate opsec concerns shouldn't be using certain tools. I just initially read your post a certain way. Apologies if it feels like I put words in your mouth.


Because all cops are honest, all warrants are lawful and nothing worrying happens in the land of freedom right now.

And what's more, that perfect situation could never change in the future.

Me-30-years-ago would have called today's government crimes and corruption an implausible fever dream.


Appeal to the law fallacy.

and use ECC memory

>even a cosmic ray flipping the "do not upload" bit in memory

Stats on this very likely scenario?


> IBM estimated in 1996 that one error per month per 256 MiB of RAM was expected for a desktop computer.

From the wikipedia article on "Soft error", if anyone wants to extrapolate.


That makes it vanishingly unlikely for any one specific bit. On a 16GB RAM computer at that rate, you can expect 64 random bit flips per month, spread across roughly 137 billion bits.

So you could expect a flip of that one specific bit roughly once every two hundred million years.

Assuming there are about 2 billion Windows computers in use, that’s about 10 computers a year that experience this bit flip.
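The estimate above can be sketched as a quick back-of-envelope calculation (using the 1996 IBM figure quoted upthread; every number here is rough):

```python
# Back-of-envelope: how often does one *specific* bit flip, fleet-wide?
# Based on IBM's 1996 estimate of ~1 soft error/month per 256 MiB.
ram_gib = 16
flips_per_month = ram_gib * 1024 // 256          # 64 flips/month on 16 GiB
flips_per_year = flips_per_month * 12            # 768 flips/year
total_bits = ram_gib * 2**30 * 8                 # ~1.37e11 bits of RAM
years_per_hit = total_bits / flips_per_year      # ~179 million years per bit
fleet = 2_000_000_000                            # rough Windows install base
machines_hit_per_year = fleet / years_per_hit    # ~11 machines/year
print(round(machines_hit_per_year))              # -> 11
```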


> 10 computers a year experience this bit flip

That's wildly more than I would have naively expected to experience a specific bit-flip. Wow!


Scale makes the uncommon common. Remember kids, if she's one in a million that means there are 11 of her in Ohio alone.

~800 bit flips per year per computer. 2 billion computers with 800 bit flips each is 1,600,000,000,000 (one point six trillion) bit flips.

Big numbers are crazy.


I personally saw a computer with 'system33' and 'system34' folders. Also, you would never actually know it happened because... it's not ECC. And with ECC memory, we replace a RAM stick every two to three months explicitly because the ECC error count is too high.

Got any old microwaves with doors that don't quite shut all the way nearby? Or radiation sources?

Nah, office building. And memtest confirmed that it was a faulty RAM stick.

But it was quite amusing to see with my own eyes: the computer mostly worked fine but occasionally would complain that it "Can't load library at C:\WINDOWS\system33\somecorewindowslibrary.dll".

I didn't even notice at first and just thought it was a virus, or the aftermath of a virus infection, until I caught that '33' thing. I went to check and there they were: system32, system33, system34...

So when the computer booted up cold in the morning, everything was fine, but at some point, at some temperature, the unstable cell in the RAM module started to fluctuate and mutate several bits of the original value. And it looks like it was at a fairly low address, which is why the system used it often and repeatedly for the same purpose: either the storage of SystemDirectory for GetSystemDirectory, or the filesystem MFT.

But again, it's the only time I had factual confirmation of a memory cell failure, and only because it happened in the right place (or the wrong one, in the eyes of the user of that machine). How many of these errors silently go unnoticed, cause some bit rot, or just don't affect anything of value (your computer just froze, restarted, or you restarted it yourself because it started to behave erratically) is literally unknown, because it's not ECC memory.


Rounding that to 1 error per 30 days per 256 MiB, for 16 GiB of RAM that would translate to 1 error roughly every half a day. I do not believe that at all, having done memory testing runs for much longer on much larger amounts of RAM. I've seen the error counters on servers with ECC RAM, which remain at 0 for many months; and when they start increasing, it's because something is failing and needs replacing. In my experience RAM failures are much rarer than HDD and SSD failures.
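For anyone who wants to watch those counters themselves: on Linux, the EDAC subsystem exposes per-memory-controller error counts under sysfs. A minimal sketch (the paths exist only on machines with ECC RAM and an EDAC driver loaded; elsewhere it just reports nothing):

```python
# Read correctable (CE) and uncorrectable (UE) error counters from the
# Linux EDAC sysfs interface. Returns an empty dict on machines without
# ECC/EDAC support, since the directory won't exist there.
from pathlib import Path

def edac_counts():
    counts = {}
    for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
        ce = int((mc / "ce_count").read_text())
        ue = int((mc / "ue_count").read_text())
        counts[mc.name] = (ce, ue)
    return counts

print(edac_counts() or "no EDAC memory controllers found")
```

A steadily climbing CE count on one DIMM is the usual "replace this stick" signal mentioned upthread.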

At google "more than 8% of DIMM memory modules were affected by errors per year" [0]

More on the topic: Single-event upset[1]

[0] https://en.wikipedia.org/wiki/ECC_memory

[1] https://en.wikipedia.org/wiki/Single-event_upset


At the time, Google was taking RAM chips that had failed manufacturer QA, which they had gotten for cheap, sticking them on DIMMs themselves, and trying to self-certify them.

> At google "more than 8% of DIMM memory modules were affected by errors per year"

That's all errors including permanent hardware failure, not just transient bit flips or from cosmic rays.


You are right. Apologies for spreading false information.

"We provide strong evidence that memory errors are dominated by hard errors, rather than soft errors, which previous work suspects to be the dominant error mode." [0]

"Memory errors can be caused by electrical or magnetic interference (e.g. due to cosmic rays), can be due to problems with the hardware (e.g. a bit being permanently damaged), or can be the result of corruption along the data path between the memories and the processing elements. Memory errors can be classified into soft errors, which randomly corrupt bits but do not leave physical damage; and hard errors, which corrupt bits in a repeatable manner because of a physical defect."

"Conclusion 7: Error rates are unlikely to be dominated by soft errors.

We observe that CE [correctable errors] rates are highly correlated with system utilization, even when isolating utilization effects from the effects of temperature. In systems that do not use memory scrubbers this observation might simply reflect a higher detection rate of errors. In systems with memory scrubbers, this observation leads us to the conclusion that a significant fraction of errors is likely due to mechanisms other than soft errors, such as hard errors or errors induced on the datapath. The reason is that in systems with memory scrubbers the reported rate of soft errors should not depend on utilization levels in the system. Each soft error will eventually be detected (either when the bit is accessed by an application or by the scrubber), corrected and reported. Another observation that supports Conclusion 7 is the strong correlation between errors in the same DIMM. Events that cause soft errors, such as cosmic radiation, are expected to happen randomly over time and not in correlation.

Conclusion 7 is an interesting observation, since much previous work has assumed that soft errors are the dominating error mode in DRAM. Some earlier work estimates hard errors to be orders of magnitude less common than soft errors and to make up about 2% of all errors."

[0] https://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf


Given enough computers, anything will happen. Apparently enough bit flips happen in domains (or their DNS resolution) that registering domains one bit away from the most popular ones (e.g. something like gnogle.com for google.com) might be worth it for bad actors. There was a story a few years ago, but I can't find it right now; perhaps someone will link it.
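The trick is easy to enumerate: for a given name, flip each bit of each character and keep the results that are still valid hostname characters. A quick sketch (the function name is mine, and this only handles the bare label, not the TLD):

```python
import string

def bitsquats(name):
    """All names one single-bit flip away from `name` that are still
    valid lowercase hostname labels (a-z, 0-9, hyphen)."""
    valid = set(string.ascii_lowercase + string.digits + "-")
    out = set()
    for i, ch in enumerate(name):
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit))  # flip one bit of this char
            if flipped in valid:
                out.add(name[:i] + flipped + name[i + 1:])
    return out

print(sorted(bitsquats("google")))  # includes 'gnogle', among others
```

Registering a handful of these and logging the traffic is essentially what the bitsquatting research mentioned above did.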


Great, thanks. Here's a discussion on this site:

https://news.ycombinator.com/item?id=4800489


A speedrun of a very old game, from an era when speedruns weren't really a "thing" like they are today, apparently benefited greatly from a hardware bit flip, and it was only recently discovered.

Can't find an explanatory video though :(


The Tick Tock Clock upwarp in Super Mario 64. All evidence that exists of it happening is a video recording. The most similar recording was generated by flipping a single bit in Mario's Y position, compared to other possibilities that were tested, such as warping Mario up to the closest ceiling directly above him.

I'm pretty sure that while no one knows the cause definitively, many people agreed that the far more likely explanation for the bit change was a hardware fault (memory error, bad cartridge connection or something similar) or other, more powerful sources of interference. The player that recorded the upwarp had stated that they often needed to tilt the cartridge to get the game to run, showing that the connection had already degraded. The odds of it being caused by a cosmic ray single-event upset seem to be vanishingly low, especially since similar (but not identical) errors have already been recorded on the N64.

It's "HN-likely" which translates to "almost never" in reality.

Happens all the time, in reality (even on the dark side of the Earth). When the atmosphere fails to stop a particle (again, happening all the time), error correction usually handles the errant bits.

Especially since HN readers are more likely to be using ECC memory

If cosmic ray bit flips were so rare, then ECC RAM wouldn't be a thing.

ECC protects against more events than cosmic rays. Those events are much more likely, for instance magnetic/electric interferences or chip issues.

In the 2010 era of RAM density, random bit flips were really uncommon. I worked with over a thousand systems which would report ECC errors when they happen and the only memorable events at all were actual DIMM failures.

Also, around 1999-2000, Sun blamed cosmic rays for bit flips for random crashes with their UltraSPARC II CPU modules.


> actual DIMM failures.

Yep, hardware failures, electrical glitches, EM interference... All things that actually happen to actual people every single day in truly enormous numbers.

It ain't cosmic rays, but the consequences are still flipped bits.


Those random unexplainable events are also referred to casually as "cosmic rays"

>A finger slip, a bug in a Windows update, or even a cosmic ray flipping the "do not upload" bit in memory, could all lead to the key being accidentally uploaded.

This is absurd, because it's basically a generic argument about any sort of feature that vaguely reduces privacy. Sorry guys, we can't have automated backups in windows (even opt in!), because if the feature exists, a random bitflip can cause everything to be uploaded to microsoft against the user's will.


Uploading your encryption keys is not just "any sort of feature".

You're right, it's less intrusive than uploading your files directly, like a backup does.

I’m still pissed about the third+ time one drive ‘helpfully’ backed up all my files after I disabled it.

So that may not be a great example if you're trying to make people like Microsoft.


On the contrary: a backup can be fully encrypted by a key under the user's control that isn't available to the storage provider.

What part of "We can't have nice things" do you not understand?

The part where you're asking me about the phrase when it's not been used anywhere in this thread prior to your comment.

>This is absurd, because it's basically a generic argument about any sort of feature that vaguely reduces privacy. Sorry guys, we can't have automated backups in windows (even opt in!), because if the feature exists, a random bitflip can cause everything to be uploaded to microsoft against the user's will.

This is a dismissal of an objection to a software system implemented such that it behaves in a discreet manner by default (no info leaves until I explicitly tell it to; this would be a nice thing, if you hadn't noticed). You repudiate the challenge on the basis of "we want to implement $system that escrows keys by default" (a bad thing, but great for the company and the host government where said thing is widely adopted).

You may not have used the exact words; but the constellation of factors is still there. We can't have nice things (machines that don't narc, do what we tell them, etc.) because there are other forces at work in our society making these things an impossibility.

It is regrettable you do not see the pattern, but then again, that may be for the better for you. I wouldn't wish the experience of seeing things the way I do on anyone else. Definitely not a fun time. But it is certainly there.


[flagged]


I can't believe it took this long.

We have mandatory identification for all kinds of things that are illegal to purchase or engage in under a certain age. Nobody wants to prosecute 12 year old kids for lying when they clicked the "I am at least 13 years old" checkbox when registering an account. The only alternative is to do what we do with R-rated movies, alcohol, tobacco, firearms, risky physical activities (i.e. bungee jumping liability waivers), etc.: we put the onus of verifying identification on the suppliers.

I've always imagined this was inevitable.


I don't think that's quite right. The age-gating of the internet is part of a brand new push; it's not just patching up a hole in an existing framework. At least in my Western country, all age-verified activities were things that could've put someone in direct, obvious danger: drugs, guns, licensing for something that could be dangerous, and so on.

In the past, the 'control' of things that were just information was illusory. Movie theaters have policies not to let kids see high-rated movies, but they're not strictly legally required to do so. Video game stores may be bound by agreements or policy not to sell certain games to children, but these barriers were self-imposed, not driven by law. Pornography has really been the only exception I can think of.

So, demanding age verification to be able to access large swaths of the internet (in some cases including things as broad as social media, and similar) is a huge expansion on what existed in the past, not just a closing up of loopholes.

The problem is the implementation is hasty.

When I go buy a beer at the gas station, all I do is show my ID to the cashier. They look at it to verify DOB and then that's it. No information is stored permanently in some database that's going to get hacked and leaked.

We can't trust every private company that now has to verify age to not store that information with whatever questionable security.

If we aren't going to do a national registry that services can query to get back only a "yes or no" on whether a user is of age or not, then we need regulation to prevent the storage of ID information.

We should still be able to verify age while remaining pseudo-anonymous.


> If we aren't going to do a national registry that services can query to get back only a "yes or no" on whether a user is of age or not, then we need regulation to prevent the storage of ID information.

Querying a national registry is not good because the timing of the queries could be matched up with the timing of site logins to possibly figure out the identities of anonymous site users.

A way to address this, at the cost of requiring the user to have secure hardware (such as a smartphone, a smart card, or a hardware security token), is for your government to issue you signed identity documents that you store and that are cryptographically bound to your secure hardware.

A zero knowledge protocol can later be used between your secure hardware and the site you are trying to use that proves to the site you have ID that says you are old enough and it is bound to your hardware without revealing anything else from your ID to the site.
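The shape of the idea is that the credential only ever discloses the one predicate the site needs. A toy sketch of that shape (emphatically not zero-knowledge, and using an HMAC as a stand-in for the issuer's signature; all names here are illustrative, not from any real wallet spec):

```python
import hashlib, hmac, json, os

ISSUER_KEY = os.urandom(32)  # stand-in for the government's signing key

def issue_credential(holder_key: bytes, over_18: bool) -> dict:
    # The issuer attests only the predicate, bound to the holder's key,
    # instead of shipping the full identity document to every site.
    claim = json.dumps({"holder": holder_key.hex(), "over_18": over_18})
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def site_verify(cred: dict) -> bool:
    # The site checks the issuer's tag and learns only "over 18: yes/no";
    # name, birthdate, etc. were never in the credential at all.
    expected = hmac.new(ISSUER_KEY, cred["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["tag"]) and \
        json.loads(cred["claim"])["over_18"]

cred = issue_credential(os.urandom(16), over_18=True)
assert site_verify(cred)
```

A real deployment uses asymmetric signatures (so sites can verify but not mint credentials) and a zero-knowledge presentation so the credential itself isn't a linkable identifier across sites; this sketch only shows the "disclose one predicate" idea.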

This is what the EU has been developing for a few years. It is currently undergoing a series of large-scale field trials, with release to the public later this year, with smartphones as the initial secure hardware. Member states will be required to support it, and any mandatory age-verification laws they pass will require sites to support it (they can also support other methods).

All the specs are open and the reference implementations are also open source, so other jurisdictions could adopt this.

Google has released an open source library for a similar system. I don't know if it is compatible with the EU system or not.

I think Apple's new Digital ID feature in Wallet is also similar.

We really need to get advocacy groups that are lobbying on age verification bills to try to make it so when the bills are passed (and they will be) they at least allow sites to support some method like those described above, and ideally require sites to do so.


> If we aren't going to do a national registry that services can query to get back only a "yes or no" on whether a user is of age or not

And note that if we are, the records of the request to that database are an even bigger privacy timebomb than those of any given provider, just waiting for malicious actors with access to government records.


> When I go buy a beer at the gas station, all I do is show my ID to the cashier. They look at it to verify DOB and then that's it. No information is stored permanently in some database that's going to get hacked and leaked.

Beer, sure. But if you buy certain decongestants, they do log your ID. At least that's the case in Texas.


In PA they scan your ID if you buy beer. There could be a full digital record of all my beer purchases for the past 15+ years, although I'm not aware of any aggregation of this data happening. Not that I'd expect anyone doing it to talk about it.

> But if you buy certain decongestants, they do log your ID.

Yeah, but many people don't actually think War on Drugs policies are a model for civil liberties that should be extended beyond that domain (or, in many cases, even tolerated in that domain.) That policy has been effective, I guess, in promoting the sales of alternative “decongestants” (that don't actually work), though it did little to curb use and harms from the drugs it was supposed to control by attacking supply.


My beard is more gray than not and they still not only ID me for beer, but scan my ID too.

Depending on the gas station... I've been to at least a dozen in Texas where the clerk scanned the back of my DL for proof of age. I'm assuming that something is getting stored somewhere.

> When I go buy a beer at the gas station, all I do is show my ID to the cashier. They look at it to verify DOB and then that's it. No information is stored permanently in some database that's going to get hacked and leaked.

That's how it should be, but it's not how it is. Many places now scan your ID into their computer (the computer which, btw, tracks everything you buy). It may not go to a government database (yet) but it's most certainly being stored.


> We should still be able to verify age while remaining psuedo-anonymous.

That would completely defeat the purpose. The goal is to identify online users, not protect children.


I definitely don't disagree that the implementation is problematic, I'm just surprised it took this long for it to happen.

We should easily be able to, but the problem of tech illiteracy is probably our main barrier. To build such a system you’d need to issue those credentials to the end users. Those users in turn would eagerly believe conspiracy theories that the digital ID system was actually stealing their data or making it available to MORE parties instead of fewer (compared to using those ID verification services we have today).

The problem is that there is nothing done to protect privacy.

There are already plenty of entities that not only have a reliable way of proving it's you who has access to the account, but also enough info to return a user's age without disclosing anything else, like banks or government sites. They could (or better, be forced to) provide an interface to that data.

Basically: "pick your identity provider" -> "authenticate on their site" -> "a step showing that only your age will be shared" -> a response with the user's age and a unique query ID that isn't related to the user's account ID.


I don't disagree that the implementation is all kinds of wrong. I'm just surprised it took them this long to compel it.

> a cosmic ray flipping the "do not upload" bit in memory, could all lead to the key being accidentally uploaded.

Nah, no shot.



