Meanwhile, I recently proposed doing the work of updating our runtime versions from .NET Framework to .NET 5+ to save $millions/quarter.
I ran benchmarks, showing 2-10x+ improvements.
Got told, “lol no, this service is unlike Bing”. For context, Bing has amazing blogs on this*.
This, btw is inside networking for Azure…
Not sure at what point I should stop caring. Could easily improve operating expenses to the tune of hundreds of millions, if not billions, per year if this were applied across Azure.
By saying "EU imposes something on a non-EU company", are you in fact referring to enforcing laws that are publicly known about? That seems like a totally different scenario to someone in the U.S. deciding that they need access to data in the EU due to some nebulous concern about national security and the company involved not even being allowed to openly discuss it.
Companies must adhere to the law where they are headquartered and where they are physically doing business. In particular, court orders from either or both jurisdictions apply to them.
Oh definitely. Some of this goes back to my 6502 assembly days, when there was no hardware multiply instruction. To multiply by 40, for example, I would shift left 3 bits, store the result, shift left 2 more bits, and add the stored result.
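The shift-and-add decomposition above (40x = 8x + 32x) can be sketched in C, with `mul40` as a hypothetical name for illustration:

```c
#include <stdint.h>

/* Multiply by 40 using only shifts and one add: 40*x = 8*x + 32*x.
 * Mirrors the 6502-era process: shift left 3, store, shift the
 * stored value left 2 more, add. */
uint32_t mul40(uint32_t x) {
    uint32_t by8 = x << 3;    /* x * 8  */
    return by8 + (by8 << 2);  /* x * 8 + x * 32 = x * 40 */
}
```

Modern compilers perform exactly this kind of strength reduction automatically for multiplies by constants.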
Similarly, a fast divisibility test (we’ll assume we’re dividing n by some odd prime p):
1. Shift the bits of n right so that there is a 1 in the last position (since p is odd, stripping trailing zeros from n can't remove a factor of p).
2. If n = p then p∣n, if n < p then p∤n, otherwise continue.
3. Subtract p from n and go back to step 1.
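The three steps can be sketched in C (function name `divisible_by_odd` is mine, for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* Divisibility test for an odd divisor p using only shifts, compares,
 * and subtraction. Because p is odd, trailing zero bits of n never
 * contain a factor of p, so they can be stripped. */
bool divisible_by_odd(uint64_t n, uint64_t p) {
    if (n == 0) return true;            /* p divides 0 */
    for (;;) {
        while ((n & 1) == 0) n >>= 1;   /* step 1: strip trailing zeros  */
        if (n == p) return true;        /* step 2: n == p means p | n    */
        if (n < p)  return false;       /*         n <  p means p does not divide n */
        n -= p;                         /* step 3: subtract and repeat   */
    }
}
```

For example, testing 42 for divisibility by 7: 42 → 21 (strip a zero), 21 − 7 = 14 → 7 (strip a zero), 7 = 7, so 7 ∣ 42.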
(One of my ADD habits during meetings is to find the prime factors of phone numbers or anything else very long. I do something similar with numbers in decimal, but there I subtract multiples of the potential divisor to get 0s at the end of the number. I remember co-workers puzzling over a piece of paper with my notes that I left behind in a conference room, trying to figure out what the numbers represented and how the process worked.)
Depends on the situation. The compiler is smart, but in a way it's also dumb. It's very good at recognizing certain patterns and optimizing them, but not all patterns are recognized, and thus not all optimizations are applied, let alone consistently applied.
One thing to consider is that the compiler can't simply replace a division with a plain right shift for signed variables (the shift rounds toward -inf for negative numbers, while C division truncates toward zero), so even today there's a tiny bit of benefit to explicitly using shifts if you know the number can't be negative (and the compiler can't prove it), or if you don't care about the rounding (https://godbolt.org/z/vTzYYxqz9).
Of course that tiny bit of extra work is usually negligible, but might explain why the idiom has stuck around longer than you might otherwise expect.
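That "tiny bit of extra work" is the small bias fixup compilers typically emit for signed power-of-two division. A sketch in C (function names are mine; this assumes `>>` on a negative signed int is an arithmetic shift, which is implementation-defined but true on mainstream compilers/targets):

```c
#include <stdint.h>

/* Roughly what a compiler emits for signed x/8: a bare arithmetic
 * shift would round toward -inf, so a bias of (divisor - 1) is added
 * for negative inputs before shifting, restoring truncation toward
 * zero as C's / requires. */
int32_t div8_signed(int32_t x) {
    int32_t bias = (x >> 31) & 7;  /* 7 if x < 0, else 0 */
    return (x + bias) >> 3;        /* now matches x / 8   */
}

/* If the value is known non-negative, the shift alone suffices: */
uint32_t div8_unsigned(uint32_t x) {
    return x >> 3;
}
```

For x = -9: (-9 + 7) >> 3 = -1, matching -9/8; without the bias, -9 >> 3 would give -2.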
I only have a little grey in my beard so far, but like all optimizations it heavily depends on context. My broad rule of thumb is that if you only care what the code does, you should let the compiler figure it out. If you care how the compiler accomplishes that goal, you should specify that rather than hoping things don't silently break in the future. This is a fairly common thing in crypto and systems code.
Nowadays you just write 'divout = divin/8192' and assume the compiler is going to do the right thing (and very possibly do something deeper than "divin>>13" at the assembler level).
Makes me wonder who pays attention to this sort of thing these days :)
I do! When optimizing code that must run obscenely fast, I look at the assembly the compiler spits out to make sure that it can't be improved on, speed-wise.
> I’ve found significant code in c/c++ where bitwise operations are done for things like division etc by shifting a certain way.
Oh, yes. I used to do that sort of thing frequently because the time savings was significant enough. As you say, though, compilers have improved a great deal since then, so it's not generally needed anymore.
If stupid bit tricks like that aren't necessary, they shouldn't be used. They do bring a readability/mental load cost with them.
If x is signed and happens to be negative, x/16 will round one way and x>>4 another. x/16 can still be implemented more cheaply than a general division by an unknown (or known but non-power-of-two) denominator, but it will be marginally slower than a plain shift. It depends on which semantics you desire.
This is just the decay of knowledge over time and laziness combined.
From a quick glance internally, the overwhelming majority of repos are using an antiquated build/packaging system that, while it might have been useful a decade ago, is a productivity killer today.
The newer build/package system used publicly is light years ahead and a real boon to productivity.
Not really sure why so many services in MSFT still run .NET Framework when "just" migrating can sometimes lead to an order-of-magnitude increase in runtime performance and decreased resource consumption.
I think one of the real reasons is that, internally, most of the leads aren't aware of it. There should be more evangelizing by the .NET team across the different orgs.
J2EE? The latest J2EE release is from 2003. It was called Java EE for some years and is now known as Jakarta EE. Almost everything has changed since 2003.
.net462 baby!
More like 4.6.2