Reviewers online aren’t particularly well informed. They think they are because many of them sort-of-kinda understand the devices they’re talking about at a high level, but they’re not really familiar with how they work or what the constraints are.
M2 has very predictable performance, in line with all the other CPU gains in recent years.
M1 had a lot of low-hanging fruit to reach for that standardised architectures aren’t able to mimic for fear of breaking things (co-located memory, purpose-built buses, function-specific co-processing). And then they got onto a node no one else could get on due to the backlog.
I watched that kinda-cringe LTT mea culpa video about the PS5 storage architecture a while back, and I was a little taken aback by how much of the basics he didn’t really seem to understand, even in his apology video.
These very bespoke systems can take some really innovative steps with custom compression hardware and expanded bus space.
It’s not Intel or AMD that can’t keep up with Apple, it’s the standards they build their hardware on.
This is what we see with M2 vs M1 when you look at the die-to-die interconnects. Even before the M1 Max/Ultra were shipping, things like the AIC being present twice already showed how many significant changes a single cycle (and even a point within a cycle!) can bring when you don't have the ball-and-chain of legacy standards to keep around for over three decades. We are pretty much living in the future here, getting consumer products with features that were bleeding edge and never seen outside servers just a couple of years ago.
Backwards compatibility can be nice, but if implemented backwards it starts hurting your hardware development (and software adoption) pretty badly.
As for reviewers/writers having bad takes, this will keep happening as long as they don't understand the goal that things were designed for. It's easy to assume that big bad hardware corp made it so you get the worst possible product for the highest possible price, but take hardware pairing for a moment: it gives you two things, zero-trust guarantees ("just because the hardware is plugged in doesn't mean we trust it") and actual protection (you have to actually hold the keys to decrypt anything or prove ownership to the system, it's not just some pinky swear).

The downside is that as long as PKI is the least-worst option we have for doing this, Apple fully controls it (same as Intel with ME/BROM and AMD with BROM and AGESA), and nobody really seems to have an alternative solution yet. Luckily for the big hardware companies, most users don't care and aren't really affected by it either. Unlike tractor repairs (those things break frequently enough to warrant end-user expertise), the overlap between the people who need repairs and the people who have the expertise to do them is extremely thin.
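To make the "prove ownership, not pinky swear" part concrete, here's a toy challenge-response sketch in Python. It's just the generic public-key pairing pattern, not Apple's (or anyone's) actual protocol, and the part/host names are made up:

    # Toy sketch: the host only accepts a part if it can sign a fresh challenge
    # with a key the vendor provisioned. Generic Ed25519 challenge-response,
    # NOT a real vendor pairing protocol.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    part_key = Ed25519PrivateKey.generate()        # burned into the part at the factory
    vendor_trusted_pubkey = part_key.public_key()  # what the host is told to trust

    def part_responds(challenge: bytes) -> bytes:
        # Only a part holding the real private key can produce this signature.
        return part_key.sign(challenge)

    def host_accepts(part_sign, trusted_pubkey) -> bool:
        challenge = os.urandom(32)                 # fresh nonce, so replays don't work
        try:
            trusted_pubkey.verify(part_sign(challenge), challenge)
            return True
        except InvalidSignature:
            return False

    print(host_accepts(part_responds, vendor_trusted_pubkey))  # True

The point of the sketch is only that the check rests on possession of a key, which is exactly why whoever holds the signing infrastructure (here, the vendor) ends up in control.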
Right now, Intel and AMD are in trouble. AMD is comparing their new Ryzen processor to last year's M1 Pro. I think this might become the norm unless Intel and AMD abandon x86/amd64, users of 30-year-old software be damned, and release their own ARM chips. Then Apple will sweat.
I'm not sure what's up here, but I think the points you're making in this thread are mostly the opposite of reality. Each of the M1 products has been a big step up from the preceding versions in almost every respect, and especially performance. If there is a respect in which the M1 is derivative and predictable, it's that it is a fairly incremental step from the A12X, which was itself a very strong performer. It's been fairly obvious since the A7 that Apple was lining up to do a chip for Macs; it was just a question of when.

Intel's designs are okay but not great; they have been struggling with execution problems, including on the fab side, for years, going back to around 2016. AMD's chips are pretty good now, and their lack of mobile and desktop market share has nothing to do with technology. The ARM architecture by itself is not a significant performance or competitive advantage at all, and there's no real secret sauce in Apple's chip program other than tight focus, a long time horizon, and massive amounts of cash. And CPU performance growth is slower than it once was, as anyone who remembers the road from the 286 to the Pentium Pro can attest. I'll leave a quantitative comment about yearly perf growth in one of those threads.

Edit: And efficiency and performance are essentially the same thing in laptops, where getting rid of heat is the biggest limit on sustaining throughput for multiple minutes.
I don't know what's up with your comment, but you more or less contradict yourself. First of all, for each specific model, the switch to Apple Silicon brought about a 30% increase in single-core performance and a 40% increase in multi-core performance. That's a few percentage points more than the gain from the second-to-last Apple Intels to the last Apple Intels, which was itself a few points more than the gain from the third-to-last to the second-to-last. Seeing the trend here?

Apple's releases were never twice as fast as the last revision of a particular model. I'm pretty sure that's true of every hardware vendor: Dell, HP, Acer, whatever. New models are always incrementally more performant than the last generation of that particular model. That is the way it's always been, so it shouldn't be such a surprise when the M2 Max is only 30% faster than the M1 Max in whichever model machine got upgraded.
Now here is your contradiction:
> Each of the M1 products has been a big step up from the preceding versions in almost every respect and especially performance.
> ARM architecture by itself is not significant performance advantage or competitive advantage at all, and there's no real secret sauce in Apple's chip program other than tight focus, a long time horizon, and massive amounts of cash. And CPU performance growth is slower than it once was as anyone who remembers the road from the 286 to the Pentium Pro can attest.
These statements conflict. I agree with your second statement. Regarding Apple Silicon, the leap forward was in power efficiency and in the platform switch, which by itself was pretty amazing; the performance increases are NOT 200% across the board on all models. Apple Silicon is roughly a third faster than the previous Apple Intel generation of each model, a slightly bigger jump than the one before it, and if you go back through the models you'll see these generational gains creeping upward, each one incrementally larger than the last. This shouldn't be surprising. What is surprising is you and everyone else raving about massive performance gains that aren't there. There is a performance gain, but it is not earth-shattering, as it would be if each generation performed twice as fast as the previous one. They don't, and they won't, if they ever do, for years if not decades. The refreshes are always incremental; Apple Silicon is no exception.
Your math doesn't line up at all with the numbers you linked for the 13" Pro, and I never said anything about 200% (or 2x, which by the way is different), so let's look at the real numbers. A 50%+ perf bump after a decade that averaged 13% a year doesn't seem very smooth to me. And 13% actually overstates the situation: progress slowed towards 2020, and the mid-2020 i7 update only gained 7% over the mid-2019 i7. It seems pretty fair to call the M1 jump "big."
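To put rough numbers on that compounding (using the percentages from this thread, not any official benchmark):

    # Why a ~50% jump stands out against a ~13%/year trend.
    baseline = 100.0                            # arbitrary score for some 2010-era part

    decade_of_13pct = baseline * (1.13 ** 10)   # ten years of 13% compounding
    print(decade_of_13pct)                      # ~339, i.e. roughly 3.4x over the decade

    last_intel = 100.0                          # rebase to the final Intel 13" Pro
    one_more_13pct = last_intel * 1.13          # what a "normal" year would have given
    m1_jump = last_intel * 1.50                 # the ~50% jump being discussed
    print(m1_jump / one_more_13pct)             # ~1.33: a third above trend in one step

So even granting the 13% trend line, the M1 step is several typical generations' worth of gain at once, which is the whole point of calling it big.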
As for the second point, that's not a contradiction. The ARM ISA has very little to do with Apple's in-house processor performance. If they had stuck with PowerPC they could have achieved roughly the same thing (on a technical basis; licensing is a whole separate issue and likely the biggest driver of the switch to ARM).
AMD announced at CES in the first week of January. The M1 Pro and M1 Max were introduced in October 2021, the M1 Ultra in mid-March 2022, and the M2 over the summer of 2022. Apple announced the M2 Pro and M2 Max on January 17th. What I expect is for AMD to abandon backwards compatibility, which is a trap and has been holding back theirs and Intel's processor designs for, let's see, decades, and design their own ARM chips to actually compete with Apple Silicon, rather than embarrassingly announcing a chip that barely beats an Apple chip nearing EOL.
Regardless of whether a new chip from AMD is an evolution of their previous chips or a radical shift in strategy, when announcing it they have to compare it against the competition that actually exists at that time, even if the competitors are on a different schedule.
Well good for AMD. I certainly didn't intend to rain on their parade. The point I was making was that at this moment, no one competes with Apple. But maybe today they compete with the Apple of October 2021.
I doubt Apple will sweat. Intel and AMD aren't going to come out with an ARM based chip that's twice as fast as an M1, and their partners who manufacture the actual devices will have a harder time creating their own SOC - I guess Samsung would be well placed to do it, but they'd be as likely to just bypass Intel/AMD anyway. It's not like Apple competes on top performance.
I'm puzzled why you think that. Apple doesn't compete with Intel and AMD. If those companies produced a better ARM chip they'd be happy to sell it to Apple. Even a non ARM chip, Apple has shown they'll switch.
What would make Apple sweat at this point? Google must have, back when it seemed like they were going to produce high-end consumer versions of ChromeOS laptops and Android phones. That's the kind of integrated experience that could compete, with enough money behind it to get somewhere. Google blew that one, though.
I'm not like pro Apple, I just don't think they care that much what Intel does.