Why would they use it at all? Women have been US military soldiers for a long time. Every one of them could have had their grip strength, body strength, etc. measured - if those additional details were predictive of anything useful.
But why? Do drone controllers require massive amounts of grip? Do the keys on a transport coordinator's keyboard require 20 pounds of pressure?
Few things in the military require brute strength. And those women who have that strength shouldn't be rejected simply because they are women.
Pragmatically, the main reason that has been true throughout all of history is that women are more valuable reproductively. A country can lose half its men in a war and still recover. The same is not true if it loses half its women.
Pragmatically, the main weapons in most wars were arrows and swords.
Pragmatically, most of the military is far from the battlefield - or the battlefield is on home territory, in which case everyone is involved anyway, so train 'em all and let the Night Witches fly, as the Soviets did when they needed more fighting forces against the Germans. "Some 400,000 women fought for the Red Army on the front lines"[1], and were not saved for later potential reproductive use.
Pragmatically, women are much more than baby gestation machines.
Since you have no problems with sterile women (tubes tied, no uterus, etc.) in the military, there's really no need to jump into a thread about rejecting ALL women from the military based on hand grip strength.
I’m vehemently against the draft in general. I saw this war coming over a decade ago and live as an expat in part to avoid being press ganged into drone target duty.
Grip strength is a proxy for general strength, and I think it’s safe to assume strength is important in combat.
Yes, calling one's self "expat" instead of "immigrant" sounds exactly like what someone who goes elsewhere to avoid taxes and draft service, while driving up the local housing market and enjoying cheap labor, would do.
Again, if strength is important, then use strength as the draft criteria, not gender.
And, you do realize that the vast majority of the military aren't combat troops, right? Drone operator duty doesn't require high grip strength. Logistics managers don't require high general strength.
Is your sexism blinding you to the female soldiers who served in the Gulf War, Iraq, and Afghanistan? What do you think they were doing if not being soldiers?
A small set of counterexamples does not invalidate broad generalizations. And if my state wants to commit economic suicide and there is no way for me to stop it, I feel no need to join it.
Over 300,000 women were deployed to Iraq and Afghanistan.
You have no idea what their grip strength was. You have no idea what their overall strength was. You have no idea if their duties required that strength, or if endurance, focus for long periods of time, and the ability to work in a group were more important.
Did you learn your grip strength factoid on some men's rights podcast?
Not sure Iraq and Afghanistan are the best examples of success.
I do have a good idea what their grip strength was; the US armed forces do such studies all the time, and sometimes they publish them. The statistics around this are well known. Grip strength is used because it's a good proxy and easy to measure in an informal setting.
I'm very interested in health and resistance training is a part of that. I'm also interested in the social phenomena of certain ideological groupings of thought, such as 'healthy at any size' and 'women are exactly equal to men'.
You're again giving some strong manosphere podcast vibes here.
The US failures in Iraq and Afghanistan are no more due to women than the US failure in Vietnam was due to men.
If grip strength is so important, then test for that. The military can easily do that at the recruitment center.
Otherwise it's the social phenomenon known as sexism. That means rejecting a professional lumberjack simply because she's a woman, while accepting a less capable man because you've got a recruitment quota to meet.
The first thing I looked for was insulin. I was surprised that it's not there, even though the bottom points out "Insulin administered to a human. First peptide drug."
The entry for BPC-157 at https://www.whatthepeptide.org/peptide/bpc-157 should really include the FDA information, like "Why Not FDA-Approved: No IND filed, no human clinical trials completed, synthetic origin (not endogenous)" and "banned from compounding".
Yet the page says "Unverified Claims: No controlled human trials"?? How is this meant to be serious if literally the first peptide I looked at says what's listed at the FDA web site is "unverified"?
It might sound outrageous but I guard against this sort of thing. When I write utility code in C++ I generally include various static asserts about basic platform assumptions.
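As a sketch of what such a header might look like - the specific assertions below are my own illustrative choices, not a canonical list from any particular codebase:

```cpp
// platform_checks.hpp - illustrative static asserts about basic
// platform assumptions. Compilation fails if any assumption breaks.
#include <climits>
#include <limits>

static_assert(CHAR_BIT == 8, "code assumes 8-bit bytes");
static_assert(sizeof(double) == 8, "code assumes a 64-bit double");
static_assert(std::numeric_limits<double>::is_iec559,
              "code assumes IEEE 754 binary64 doubles");
static_assert(std::numeric_limits<unsigned>::digits >= 32,
              "code assumes unsigned int is at least 32 bits");
```

The benefit is that a violated assumption becomes a compile error with a readable message, rather than a silent miscomputation on some exotic target.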
> running on a real Sun3, compiled with a non-ANSI compiler (Sun cc 1.22)
> this is fatal in HP-UX 10 with the bundled compiler
> OpenWatcom 1.9 compiler
> OS/2 builds
> making sure that all functions are declared in both ANSI format and K&R format (so C-Kermit can be built on both new and old computers)
Oooooh! A clang complaint: 'Clang also complains about perfectly legal compound IF statements and/or complex IF conditions, and wants to have parens and/or brackets galore added for clarity. These statements were written by programmers who understood the rules of precedence of arithmetic and logical operators, and the code has been working correctly for decades.'
There's platform and there's platform. I assume a POSIX platform, so I don't need to check for CHAR_BIT. My code won't work on some DSP with 64-bit chars, and I don't care enough to write that check.
Many of the tests I did back in the 1990s seem pointless now. Do you have checks for non-IEEE 754 math?
Using C++ under Clang 17 and later (possibly earlier as well, I haven't checked) std::numeric_limits<T>::is_iec559 comes back as true for me for x86_64 on Debian as well as when compiling for Emscripten. Might it be due to your compiler flags? Or is this somehow related to a C/C++ divergence?
The standard warns that the macro and the trait can report true for this, even when full support isn't actually present. It includes that warning because that's what compilers currently do.
It's one of the caveats of the C family that developers are supposed to be aware of, but often aren't: it doesn't support IEEE 754 fully. There is a standard for doing so, but no one has actually implemented it.
Of course in my case what I'm actually concerned with is the behavior surrounding inf and NaN. Thankfully I've never been forced to write code that relied on subtle precision or rounding differences. If it ever comes up I'd hope to keep it to a platform independent fixed point library.
CPPReference is not the C++ standard. It's a wiki. It gets things wrong. It doesn't always give you the full information. Probably best not to rely on it for things that matter.
But, for example, LLVM does not fully support IEEE 754 [0].
And nor does GCC - who list it as unsupported, despite defining the macro and having partial support. [1]
The biggest caveat is in Annex F of the C standard:
> The C functions in the following table correspond to mathematical operations recommended by IEC 60559. However, correct rounding, which IEC 60559 specifies for its operations, is not required for the C functions in the table.
The C++ standard [2] barely covers support, but if a type supports any of the properties of IEC 60559, then it gets is_iec559 - even if that support is _incomplete_.
This paper [3] is a much deeper dive - but the current state for C++ is worse than C. It's underspecified.
> When built with version 18.1.0 of the clang C++ compiler, without specifying any compiler options, the output is:
> distance: 0.0999999
> proj_vector_y: -0.0799999
> Worse, if -march=skylake is passed to the clang C++ compiler, the output is:
If I am not mistaken, is_iec559 concerns numerical representation, while __STDC_IEC_559__ is broader, and includes the behavior of numerical operations like 1.0/-0.0 and various functions.
I do, yes. I check that the compiler reports the desired properties and in cases where my code fails to compile because it does not I special case and manually test each property my code depends on. In my case that's primarily mantissa bit width for the sake of various utility functions that juggle raw FP bits.
Even for "regular" architectures this turns out to be important for FP data types. Long double is an f128 on Emscripten but an f80 on x86_64 Clang, where f128 is provided as __float128. The last time I updated my code (admittedly quite a while ago) Clang version 17 did not (yet?) implement std::numeric_limits support for f128.
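A hedged sketch of the kind of mantissa-width check described above, using std::numeric_limits<T>::digits (which counts significand bits including the implicit leading bit):

```cpp
#include <limits>

// digits counts significand bits including the implicit leading bit:
// 24 for binary32, 53 for binary64, 64 for x87 extended (f80),
// 113 for binary128.
static_assert(std::numeric_limits<float>::digits == 24,
              "bit-juggling utilities expect a binary32 float");
static_assert(std::numeric_limits<double>::digits == 53,
              "bit-juggling utilities expect a binary64 double");

// long double is where platforms diverge: 53 (MSVC), 64 (x86_64
// Linux), or 113 (binary128, e.g. under Emscripten).
[[maybe_unused]] constexpr int long_double_mantissa_bits =
    std::numeric_limits<long double>::digits;
```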
Honestly there's no good reason not to test these sorts of assumptions when implementing low level utility functions because it's the sort of stuff you write once and then reuse everywhere forever.
And as I wrote, "There's platform and there's platform."
I don't support the full range of platforms that C supports. I assume 8 bit chars. I assume good hardware support for 754. I assume the compiler's documentation is correct when it says it map "double" to "binary64" and uses native operations. I assume if someone else compiles my code with non-754 flags, like fused multiply and add, then it's not a problem I need to worry about.
For that matter, my code doesn't deal with NaNs or inf (other than input rejection tests) so I don't even need fully conformant 754.
I say nothing about code which can support when char is 64-bit because my entire point was that my definition of "platform" is far more restrictive than C's, and apparently yours.
You wrote "I generally include various static asserts about basic platform assumptions."
I pointed out "There's platform and there's platform.", and mentioned that I assume POSIX.
So of course I don't test for CHAR_BIT as something other than 8.
If you want to support non-POSIX platforms, go for it! But adding tests for every single one of the places where the C spec allows implementation-defined behavior, and where all the compilers I use have the same implementation-defined behavior and have had it for years or even decades, seems quixotic to me, so I'm not going to do it.
And I doubt you have tests for every single one of those implementation-defined platform assumptions, because there are so many of them, and maintaining those tests when you don't have access to a platform with, say, 18-bit integers to test those tests, seems like it will end up with flawed tests.
> maintaining those tests when you don't have access to a platform with, say, 18-bit integers to test those tests, seems like it will end up with flawed tests.
No? I don't over generalize for features I don't use. I test to confirm the presence of the assumptions that I depend on. I want my code to fail to compile if my assumptions don't hold.
I don't recall if I verify CHAR_BIT or not but it wouldn't surprise me if I did.
1 gigahertz (GHz) or faster with 2 or more cores, 4 gigabytes of RAM, 64 GB or larger storage device, UEFI, Secure Boot capable, Trusted Platform Module (TPM) version 2.0; Windows 11 Pro for personal use and Windows 11 Home require internet connectivity and a Microsoft account during initial device setup.
2 GHz dual-core processor or better, a minimum of 6 GB RAM and 25 GB of free hard drive space. For ISO-based installs, you will need a USB port or DVD drive for the installation media. An internet connection is ... not required for the initial installation.
No requirement listed for UEFI, TPM, or account with a company that the Human Rights Council and others describe as aiding and abetting atrocity crimes.
I can't imagine that Windows 11 would be usable with a 2-core 1 GHz processor and 4 GB of RAM. It might install, but opening the Start menu alone will fill up your memory (slight hyperbole).
I don't see where the linked-to page discusses "rights".
The headline sounds like editorializing to get off-the-cuff remarks about treating synthetic text extruding machines, as Bender correctly describes them, as people.
Safety interlocks have long existed to say "no" to the owner of the device. Most smartphones have lots of systems to say "no" to the owner of the smartphone.
One of the linked-to documents says "Every physical device has a creator." Who is the creator of the iPhone?
Similarly, "When a device is sold or transferred, ownership changes. From that moment, the device is no longer under the creator’s control." I'm really surprised to hear that the creator of the iPhone no longer has control of the device.
So when it gets to "AI must not infer what it does not own" - does that prohibit Google from pushing AI onto Android phones during an OS update?
I think you're reading it more strongly than I intended.
The point about "ownership" in that document is more about where authority over execution sits, not about restricting what AI is allowed to reason about.
So it's not saying "AI shouldn't reason about things it doesn't own," but rather asking who has the authority to define and enforce the conditions under which actions are allowed to execute.
I agree that in current systems (like smartphones), a lot of this is already handled through predefined constraints.
What I'm trying to explore is whether that idea needs to be extended or structured differently when the system has more autonomy and operates in less predictable environments.
Who is the creator of an iPhone device? I'm pretty sure there are many creators, not "a creator".
Does the creator of an iPhone device no longer control the device after someone has bought it?
I'll add a few more questions:
Can Apple have your device say "no" to something you want to do?
Can a government enforce Apple's ability to control what you do to your device?
Can a government force Apple to install software onto your device that you do not want?
Who owns an AI? Is it the copyright holder? Multiple copyright holders? Once the copyright expires, is there any ownership at all?
I like Charlie Stross' description of a company as an "old, slow, procedural AI". So when you ask a question about an "AI", think about the same question concerning a company.
Should a company have the right to say "no" to the owner of a hardware device running the company's software? The answer currently seems to be a resounding "yes". In which case, does it matter what an AI can or cannot do? It's someone else's programming limiting what you can do on your device, and we've established that that's already acceptable.
And the HN title is still clickbait - AI doesn't have "rights" in any meaningful sense, not even in the way that a company has rights, or animals have rights, or the Whanganui River has legal personhood.
Painkillers are a highly regulated market. Even the Sackler family had to work the system to make their ill-gotten profits.
Vitamins are not. Indeed, the highly criticized Dietary Supplement Health and Education Act of 1994, co-sponsored by Senator Orrin Hatch (in turn receiving financial support from supplement manufacturers), made vitamins and other dietary supplements far less scrutinized by the FDA than pain killers.
The vitamin advertisements are "so much more creative than the painkiller ones" because the painkiller ones get to say "we kill pain" while being restricted from broader claims, while the vitamin advertisements have to work to imply unproven health benefits.
There is no painkiller equivalent of GNC because the more effective pain killers require a prescription for legal sale, and a trained pharmacist on staff to oversee sales.
If you can make and sell OxyContin, then as the Sackler family shows, you can make bank.
It's time for the metaphor to die because it encourages making software which is equally as addictive as OxyContin. And we see that's happened.
Also, magnesium is not a vitamin, so if "vitamins" is extended to include other food supplements, then I'll point out that alcohol and marijuana are also used as painkillers and antidepressants, and there's a big underground market for opioids.
I'm pretty sure the lesson is that at the end of the day, it’s worth being aware of the risks of using git, as security issues intrinsic to git can extend to other tools which use git as a component.
I think we can agree that Git is at least partly responsible for this issue, if not more.
That said, even being aware of that doesn’t necessarily help much in practice. When you’re using Emacs or Vim, you’re not really thinking about Git at all. You’re just opening and editing files. So it’s not obvious to most users why Git would be relevant in that context.
This is why I think editor maintainers should do more to protect their users. Even if the root cause sits elsewhere, users experience the risk at the point where they open files. From their perspective, the editor is the last line of defense, so it makes sense to add safeguards there.
Please read the LLM output critically instead of doubling down on it.
Your defense-in-depth framing makes no sense. If .git/config or similar mechanisms are the attack vector, then adding more editor safeguards would be treating a symptom, as the real problem is git's trust model. The "users don't think about git when using editors" argument also proves too much. Many users also do not think about PATH, shell configs, dynamic linker, or their font renderer either, but you cannot make editors bulletproof against all transitive dependencies...
Seriously, it is actually backwards. Git is where the defense belongs, not every downstream tool that happens to invoke git. Asking editors to sandbox git's behavior is exactly as absurd as it sounds.
And BTW, "technically AV:L but feels like RCE" is your usual blog-post hype. It either is, or is not.
Sure, but you said that was the end of the day analysis, and I didn't think you went far enough in your analysis.
FWIW, I'm not thinking about git at all since I use Mercurial, and never enabled vc hooks in my emacs, which is based on 25.3.50.1, so wasn't affected by this exploit - I tested. I use git and hg only from the command-line.
My end-of-day analysis is to avoid git entirely if you can't trust its security model. ;)
Should the emacs developers also do more to secure emacs against ImageMagick exploits?
I have no doubt that systemd will implement a place to store political party membership, religion, LGBT status, veteran or draft status, or ethnic group membership if a handful of governments start to require that information.
"Spectrum quickly learned that far more had gone wrong than just a units conversion error. A critical flaw was a program management grown too confident and too careless, even to the point of missing opportunities to avoid the disaster.
"As reconstructed by Spectrum, ground controllers ignored a string of indications that something was seriously wrong with the craft's trajectory, over a period of weeks if not months. But managers demanded that worriers and doubters "prove something was wrong," even though classic and fundamental principles of mission safety should have demanded that they themselves, in the presence of significant doubts, properly "prove all is right" with the flight."
Plus, navigators had concerns about the trajectory, which were dismissed because they "did not follow the rules about filling out [the incident surprise and analysis procedure] form to document their concerns" - from a trajectory team which was understaffed and overworked.
The fourth power law is only an approximation. If the road is designed for a higher weight, then the impact of larger loads is less. From https://en.wikipedia.org/wiki/Fourth_power_law:
> A 2017 report commissioned by the New Zealand Transport Agency found a wide variation in the best-fitting exponents for a power law on 4T axle loads vs 6T axle loads, depending on the current condition and type of the roading. As a very rough summary of its highly detailed findings: A 9th-power law is most predictive when the road is barely able to withstand the 6T load; and the per-crossing damage is roughly linear to axle-weight when the pavement is able to withstand much higher loads than 6T per axle.
Highways (which this link focuses on) are designed for a heavier load than, say, residential streets.
A mid-size SUV is, what, 1 ton per axle? And a semi is max about 10 tons per axle (I don't know the average). And there are more SUVs on the highway than commercial trucks.
And in any case, there's already a Heavy Vehicle Use Tax which is meant to fund the additional maintenance demands caused by vehicles over 55,000 pounds.