It's a shame that Intel seemed to really not want people to use it, given that they started disabling the ability to use it in later microcode updates, and fused it off in later parts.
> It's a shame that Intel seemed to really not want people to use it
AVX-512 was never part of the specification for those CPUs. It was never advertised as a feature or selling point. You had to disable the E cores to enable AVX-512, assuming your motherboard even supported it.
Alder Lake AVX-512 has reached mythical status, but I think the number of people angry about it is far higher than the number of people who ever could have taken advantage of it and benefitted from it. For general purpose workloads, having the E cores enabled (and therefore AVX-512 disabled) was faster. You had to have an extremely specific workload that didn't scale well with additional cores and also had hot loops that benefitted from AVX-512, which was not very common.
So you're right: They never wanted people to use it. It wasn't advertised and wasn't usable without sacrificing all of the E cores and doing a lot of manual configuration work. I suspect they didn't want people using it because they never validated it. AVX-512 mode increased the voltages, which would impact things like failure rate and warranty returns. They probably meant to turn it off but forgot in the first versions.
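To make the "manual configuration work" concrete: even after flipping the BIOS switches, software still had to probe for AVX-512 at runtime, which is roughly the following dance. A minimal C sketch (GCC/Clang helpers; CPUID leaf 7 for the feature bit, then XGETBV to confirm the OS actually saves ZMM state):

```c
#include <stdint.h>
#include <stdio.h>
#include <cpuid.h>  /* GCC/Clang: __get_cpuid, __get_cpuid_count */

/* Read XCR0 to check that the OS enabled AVX-512 state saving:
   bits 1,2 (SSE/AVX) plus 5,6,7 (opmask, ZMM0-15 high halves, ZMM16-31). */
static uint64_t xgetbv0(void) {
    uint32_t lo, hi;
    __asm__ volatile("xgetbv" : "=a"(lo), "=d"(hi) : "c"(0));
    return ((uint64_t)hi << 32) | lo;
}

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* CPUID.1:ECX bit 27 = OSXSAVE; required before executing XGETBV. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx) || !(ecx & (1u << 27))) {
        puts("no OSXSAVE");
        return 1;
    }
    /* CPUID.(7,0):EBX bit 16 = AVX512F. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) || !(ebx & (1u << 16))) {
        puts("no AVX512F in CPUID");
        return 1;
    }
    if ((xgetbv0() & 0xE6) != 0xE6) {
        puts("OS does not save AVX-512 state");
        return 1;
    }
    puts("AVX-512F usable");
    return 0;
}
```

On the Alder Lake parts in question, these checks reportedly all passed once the E-cores were off, which is part of why the later fusing-off felt like a rug pull.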
They had to disable AVX-512 only because Microsoft was too lazy to rewrite their thread scheduler to handle heterogeneous CPU cores.
The Intel-AMD x86-64 architecture is full of horrible things, starting with the System Management Mode added in 1990. These things have been added by Intel only because Microsoft has repeatedly refused to update Windows, expecting that the hardware vendors must do the work instead of Microsoft to enable Windows to continue to work on newer hardware, even when that causes various disadvantages for the customers.
Moreover, even if Intel never said that Alder Lake would support AVX-512, they also never said that the P-cores of Alder Lake would not support AVX-512.
Therefore everybody expected that Intel would continue to provide backward compatibility, as always before, so that the P-cores of Alder Lake would continue to support any instruction subset that had been supported by Rocket Lake, Tiger Lake, Ice Lake and Cannon Lake.
The failure to be compatible with their previous products was a surprise for everybody.
Windows can work without SMM, especially NT - the problem is that SMM was created for a world where the majority used DOS, and the idea of using OS services instead of every possible quirk of the IBM PC was anathema to developers.
Thus, SMM, because there was no other way to hook power management on a 386 laptop running "normal" DOS.
> Thus, SMM, because there was no other way to hook power management on a 386 laptop running "normal" DOS
In theory, there was: you could have a separate microcontroller, accessed through some of the I/O ports, doing the power management; it's mostly how it's done nowadays, with the EC (Embedded Controller) on laptops (and nowadays there's also the PSP or ME, which is a separate processor core doing startup and power management for the main CPU cores). But back then, it would also be more expensive (a whole other chip) than simply adding an extra mode to the single CPU core (multiple cores back then usually required multiple CPU chips).
The problem is reliably interrupting the CPU in a way that didn't require extra OS support. SMM provided such a trigger, and in fact it is generally used as part of a scheme in which it cooperates with the EC.
If Windows could work without SMM, is there a historical reason why SMM didn't just die and become disused after Windows became popular and nobody used DOS any more? There are plenty of features in x86 that are disused.
The feature turned out too useful for all sorts of things, including dealing with the fact that before NT loaded itself you still had to emulate being an IBM PC including the fiction of booting from cassette tape or jumping to ROM BASIC.
Also, it's been cheaper to implement various features through a small piece of code instead of adding a separate MCU to handle them, including prosaic things like handling NVRAM storage for variables (instead of interacting with an external MCU or having separate NVRAM, you end up with SMM code being "trusted" to update the homogeneous flash chip that contains both NVRAM and boot code).
I don't know if I'd call Microsoft lazy. Are there any existing operating systems that allow preemptive scheduling across cores with different ISA subsets? I'd sort of assume Microsoft research has a proof of concept for something like that but putting it into a production OS is a different kettle of fish.
> the P-cores of Alder Lake will continue to support any instruction subset that had been supported by Rocket Lake and Tiger Lake and Ice Lake and Cannon Lake
Wait. I thought the article says only Tiger Lake supports the vp2intersect instruction. Is that not true then?
Tiger Lake is the only one with vp2intersect, but before Alder Lake there had already been 3 generations of consumer CPUs with AVX-512 support (Cannon Lake in 2018/2019, Ice Lake in 2019/2020 and Tiger Lake + Rocket Lake in 2020/2021).
So it was expected that any future Intel CPUs would remain compatible. Removing an important instruction subset had never happened before in Intel's history.
Only AMD has removed some instructions, when passing from the 32-bit ISA to the 64-bit ISA, and most of those were obsolete (except that removing INTO, interrupt on overflow, was bad, and it does not greatly simplify a CPU core, since there are many other sources of precise exceptions that must still be supported; the main effect of removing INTO is that many instructions can be retired earlier than otherwise, which reduces the risk of filling up the retirement queue).
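For anyone wondering what replaced INTO: on x86-64 an overflow check just compiles to an ADD followed by a JO/JNO conditional branch. A tiny sketch using the GCC/Clang overflow builtin:

```c
#include <stdio.h>

int main(void) {
    int a = 2000000000, b = 2000000000, sum;

    /* On x86-64 there is no INTO to trap on the overflow flag; this
       compiles to an ADD followed by a JO/JNO conditional branch. */
    if (__builtin_add_overflow(a, b, &sum))
        puts("overflow");   /* the case the old INTO would have trapped on */
    else
        printf("sum = %d\n", sum);
    return 0;
}
```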
The reason you had to disable the E cores was... also an artificial barrier imposed by Intel. Enabling AVX-512 only looks like a problem when inside that false dichotomy. You can have both with a bit of scheduler awareness.
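A sketch of what that scheduler awareness could look like from userspace on Linux: pin AVX-512-heavy threads to the P-cores before entering the hot path. Hedged, of course - the logical CPU IDs for the P-cores below are hypothetical and would need to be discovered per machine (e.g. from /proc/cpuinfo):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Restrict the calling thread to a given set of logical CPUs
   (e.g. the P-cores) before it runs AVX-512 code paths. */
static int pin_to_cores(const int *cores, int ncores) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int i = 0; i < ncores; i++)
        CPU_SET(cores[i], &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void) {
    int pcores[] = {0, 1, 2, 3, 4, 5, 6, 7};  /* hypothetical P-core IDs */

    int err = pin_to_cores(pcores, 8);
    if (err != 0) {
        fprintf(stderr, "setaffinity failed: %d\n", err);
        return 1;
    }
    /* ...dispatch to the AVX-512 implementation here... */
    return 0;
}
```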
The problem with the validation argument is that the P-cores were advertising AVX-512 via CPUID with the E-cores disabled. If the AVX-512 support was unvalidated and not meant to be used, it would not have been a good idea to set that CPUID bit, or even to allow the instructions to execute without faulting. It's strange that it launched with any AVX-512 support at all, and there were rumors that the decision to officially drop AVX-512 support was made at the last minute.
As for the downsides of disabling the E-cores, there were Alder Lake SKUs that were P-core only and had no E-cores.
Not all workloads are easily parallelizable, and AVX-512 has features that are useful even for highly serial workloads such as decompression, even at narrower than 512-bit width. Part of the reason AVX-512 has seen limited usage is that Intel set back widespread adoption by half a decade by dropping it from their consumer SKUs again, with it only returning (as AVX10/256) starting in ~2026.
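To make the "narrower than 512-bit" point concrete, here's a minimal sketch of what AVX-512BW/VL buys you at 256-bit width: compares that write straight into mask registers, and a masked tail load that replaces the scalar cleanup loop. Assumes a compiler with -mavx512bw -mavx512vl:

```c
#include <immintrin.h>
#include <stddef.h>
#include <stdio.h>

/* Count occurrences of byte c in p[0..n).
   Build with: gcc -O2 -mavx512bw -mavx512vl count.c */
static int count_byte(const unsigned char *p, size_t n, unsigned char c) {
    __m256i needle = _mm256_set1_epi8((char)c);
    int total = 0;
    size_t i = 0;

    for (; i + 32 <= n; i += 32) {
        __m256i v = _mm256_loadu_si256((const __m256i *)(p + i));
        /* AVX-512 compare results land directly in a k-mask register */
        __mmask32 m = _mm256_cmpeq_epi8_mask(v, needle);
        total += __builtin_popcount(m);
    }
    if (i < n) {  /* masked tail load instead of a scalar cleanup loop */
        __mmask32 tail = (__mmask32)((1u << (n - i)) - 1);  /* n - i < 32 here */
        __m256i v = _mm256_maskz_loadu_epi8(tail, p + i);
        total += __builtin_popcount(_mm256_mask_cmpeq_epi8_mask(tail, v, needle));
    }
    return total;
}

int main(void) {
    const unsigned char buf[] = "hello avx-512 world";
    printf("%d\n", count_byte(buf, sizeof buf - 1, 'l'));
    return 0;
}
```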
For those not aware, they are most likely referring to an ongoing "tobacco war" among organised crime groups in Australia, with tobacconists becoming the targets of many arson attacks.
Obviously the illicit tobacco trade is a large/primary driver of this, but illicit vapes are also becoming a part of their business.
It's multiple Lebanese families/gangs (the Haddara family and others) vs biker gangs - as far as I can tell from news reports.
Both these parties run tobacco stores in many states - licensing was/is non-existent or minimal, although the states that were lax are now increasing the oversight and requirements.
These stores were mostly branded as vape/convenience stores until the recent regulation changes; now they often sell American candy and various random things along with government-sanctioned cigarettes ($50/pack minimum) or the imported packs ($15-25/pack) - you can guess which one they pretty much exclusively sell.
Asian (Vietnamese/Chinese) sellers are also selling the imported cigarettes out of their own branded stores and also some of the big chain tobacco stores (TSG, Cignall).
There have been dozens of arson attacks against the Lebanese and biker stores over the last few years, supposedly over territory and payments.
A Lebanese-run store nearby had its security facade destroyed and was torched black the day after last year's Christmas.
The Asian-run stores don't seem to be involved in the arson attacks; they may be importing the cigarettes, as many of the packs are Korean/Chinese, but I'm unsure, since Eastern European and English brands are also available.
The government engineered this situation with its ridiculous taxation policy; it was effective up to a point, but it's becoming reminiscent of the era of alcohol prohibition in the USA.
$50 a pack? Yeah at those prices you can bet a black market crops up.
IMO this is only causing more problems. It won't put people off smoking, because cheaper illegal alternatives arrive, and it will create serious crime syndicates.
Australian police: "We're not saying all tobacconists are linked with the sale of illicit tobacco, but what we are saying is that people are being targeted, businesses are being targeted because the organisers police allege are linked to the sale of illicit tobacco are simply standing over them."
There have been a few isolated incidents of tobacconist stores having been set ablaze, but that does not equate to, e.g., east coast Australian capital cities being engulfed in putrid smouldering fires emanating from said tobacconist shops.
The Australian government introduced a (rushed) law in July this year that outlawed street sales of vapes and obliged local pharmacies to sell the officially licensed vapes instead, which caused an uproar and a revolt from the Pharmacy Guild, who bluntly refused to stock and distribute the vapes. The pharmacies that are not guild members have made decisions at their own discretion. The drama is still unfolding.
There have been more than 70 arson attacks on tobacco stores and other businesses believed to be involved in the sale of illicit tobacco since March 2023, according to the Victoria police assistant commissioner Martin O’Brien.
Very well. You are correct, and I will admit my own ignorance with respect to the situation in Victoria, as it appears to be wildly (in the literal sense of the word) different; I was using the NSW numbers.
ABC[0] reports the following numbers for arson attacks on tobacconists across Australian states:
NSW: 14
Victoria: 130
Queensland: 30
South Australia: 12
Western Australia: 8
Whilst 14 (NSW) is more than a few, the order-of-magnitude difference compared with 130 (VIC) is a bewildering revelation indeed, especially considering that NSW has a larger population. I do not know what makes NSW and VIC so different with respect to the matter at hand.
The sad thing is, the WRT3200ACM has a more or less unmaintained wifi driver, with 802.11w (and thus WPA3) possibly broken within the radio firmware itself. I believe there are other issues, such as regulatory settings being hardcoded in there too.
In addition to the wifi, I recall the preloader at the start of the boot chain is also a binary blob, which handles some of the chip init and memory calibration for the DDR4.
The insane thing when I tried to update Nextcloud was that it kept timing out the download because it was too slow, and then required me to delete the upgrade-in-progress file in order to try again...
Not too surprising given what I've seen of their vendor SDK driver source code, compared to mt76. (Messy would be a kind assessment.)
Unfortunately, there are also some running aftermarket firmware builds with the vendor driver, due to it having an edge in throughput over mt76.
Mediatek and their WiSoC division luckily have a few engineers that are enthusiastic about engaging with the FOSS community, while also maintaining their own little OpenWrt fork running mt76.[1]
Why is it that so much of this hardware/firmware feels like deploying a PoC to production? Why can't they hire someone who actually knows what they're doing?
The consumer space is brutally competitive - you're working on tight margins and designs become obsolete very quickly. MediaTek's business is built on selling chips with the latest features at the lowest possible price. Everything has to be done at a breakneck pace that is dictated by the silicon. You start writing firmware as soon as the hardware design is finalised; it needs to be ready as soon as the chips are ready to ship. These conditions are not at all suited to good software engineering.
In an ideal world, consumers would be happy to pay a premium for a device that's a generation behind in terms of features but has really good firmware. In the real world, only Apple have the kind of brand and market power to even attempt that.
> you're working on tight margins and designs become obsolete very quickly.
This seems like the exact place where open source is a competitive advantage.
Step 1, open source your existing firmware for the previous generation hardware. The people who have the hardware now fix problems you didn't have the resources to fix.
Step 2, fork the public firmware for the previous generation hardware when developing the next generation. It has those bug fixes in it and 90% of the code is going to be the same anyway. Publish the new source code on the day the hardware ships in volume but not before. By then it doesn't matter if competitors can see it because "designs become obsolete very quickly" and it's too late for them to use it for their hardware/firmware in this generation. They don't get to see your next generation code until that generation is already shipping. Firmware tricks that span generations and have significant value can't be kept secret anyway because any significant firmware-based advantage would be reverse engineered by competitors for the next generation regardless of whether they have the source code.
Now your development costs are lower than competitors' because you didn't have to pay to fix any bugs that one of your customers fixed first, and more people buy your hardware because your firmware is less broken than the competition.
What happens in that case is that competitors copy your hardware and throw the open source firmware on it to undercut you. Consumers don't know how to differentiate your products without marketing/segmentation and OEMs mostly care about the BOM cost. It doesn't matter much that your competitors are 2-6 months behind because they're still killing the long tail sales that sustain a company.
Note that I'm still pro-open source, but I've seen this cycle play out in the real world enough times to understand why manufacturers are paranoid about releasing anything that might help a competitor, even if it benefits their customers.
> What happens in that case is that competitors copy your hardware and throw the open source firmware on it to undercut you.
The entire premise of firmware is that it's specific to the hardware. By the time they "copy your hardware" it's already obsolete. Also, that's the thing you're actually selling. Your firmware sucks. Nobody wants your firmware unless they have your hardware. People are paying you for the hardware, which is the thing cheap competitors can't make as well as you or you're already screwed.
> You start writing firmware as soon as the hardware design is finalised; it needs to be ready as soon as the chips are ready to ship.
On top of that, there are bound to be errors in the hardware design; no modern technology even comes close to being formally proven correct, it's just too damn complex/large. Only after the first tapeout of an ASIC can you actually test it and determine what you need to correct and where to correct it (microcode, EC firmware, OS, or application layer).
Indeed. A friend who is more plugged into such things told me 4-5 years ago that they laid off most of the senior Intel network driver team. Basically the only edge they had. I can't imagine things are any better these days.
Inertia is a hell of a thing, but you are starting to see the cracks form. I just don’t know if there is an alternative.
When? The Intel X710 series of network cards was released in 2014, and it wasn't until ~2018 that it became actually usable (end of 2018? I don't recall really, but when I stumbled upon it it had already been a public problem for more than a year, and it took a few more months for patches to come).
I'm talking about things like full OS crashes while doing absolutely nothing, with no traffic whatsoever, or even better, silently starting to drop all network traffic (relatively silently: just an error message in the logs, but otherwise no indication, and the interface still shows up as fine and up in the OS). It was all a driver issue (although both of Intel's drivers didn't work, so perhaps not only a driver issue) that was later fixed.
After that, it was rock solid. But the fact that there was a high class network card sold for lots of money, on hardware compatibility lists at various vendors, which didn't work at all for pretty much everyone for more than a few years is disgusting.
Back at the start of the century, Intel networking cards were the 'Best reliability for the dollar' and for some reason had a grudge against Linksys even before the Cisco buyout [0]. Same for most of their B/G Wireless stuff.
>Why can't they hire someone that actually knows what they are doing?
Because those employees cost a lot of money and these commodity widgets have razor thin margins that don't enable them to pay high salaries while also making enough profit to stay in business.
You can pay more to hire better people and put the extra cost into the price of the product, but then HP, Lenovo, Dell, et al. aren't going to buy your product anymore; they'll buy instead from your competition, who's maybe worse but provides lower prices, which is what matters most to them. The average end user of the laptop won't notice the difference between network cards, but they do check the sticker price of the machine on Amazon/Walmart and make the purchasing decision on that and on things like the CPU and GPU, not on the network card in the spec sheet.
I feel like there's an opportunity for a joke here, somewhere along the lines of hardware companies being really terrible at writing software, while software companies are just a normal amount of terrible at writing software.
A few attempts with ChatGPT managed it: "Hardware companies writing software is like watching a train wreck in slow motion. Software companies? They just crash at regular speed."
Hardware manufacturers see software as a cost center, so it's often made as cheaply as possible. And hardware engineers aren't necessarily good software developers; it isn't their main expertise.
Quite a few of them actually end up configured to prefer SD boot over internal flash and/or have easily accessible buttons or shortable pads to trigger bootrom recovery modes.
Which, at least, stops them from being automatically consigned to e-waste.
Although customising a LibreELEC image for the dozens of different models of TV box isn't great; it typically involves sorting out the dts for the device and remapping the remote.
> Somehow, someone was intercepting and replaying the web traffic from likely every single device on my home network.
Normally I'd laugh and assume device compromise but...
The largest ISP in Australia (Telstra) got caught doing exactly this over a decade ago. People got extra paranoid when they noticed the originating IP was from Rackspace as opposed to within Telstra. Turned out to be a filter vendor scraping with dubious acceptance from customers. The ToS was quietly and promptly updated.
> Samsung does monthly updates on more premium phones. But a former flagship like the S22 would sometimes only get the update near the end of the month, even before the S24 is out.
This is complicated by their rolling updates per country; it can be a few weeks between the first CSC (a 3-letter identifier for the country and carrier variant) receiving an update and it being rolled out to the final one.
I was towards the end of that update cycle, so the Android security patch level could become quite detached from the actual month.
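For anyone curious which CSC their device is on: it's commonly reported to live in a system property, readable e.g. from an NDK program. A small sketch - note the exact property name varies by firmware and is an assumption here:

```c
#include <stdio.h>
#include <sys/system_properties.h>  /* Android NDK */

int main(void) {
    char csc[PROP_VALUE_MAX] = "";

    /* "ro.csc.sales_code" is the property name Samsung firmwares are
       widely reported to use for the CSC; treat it as an assumption. */
    if (__system_property_get("ro.csc.sales_code", csc) > 0)
        printf("CSC: %s\n", csc);
    else
        puts("CSC property not found");
    return 0;
}
```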