TBH my Asahi M2 MacBook experience has been the best Linux experience I've ever had. It's night and day compared to the XPS 13 I had before, which was supposedly a well-supported Linux laptop; you could even buy it with Ubuntu preinstalled.
The only real drawbacks are no Thunderbolt, until recently no DisplayPort, and no x86 support. But I don't use any x86-only apps enough for it to matter. No Thunderbolt does sting, though.
Having multiple hardware features broken isn’t anything close to my best Linux experience.
I've got a Framework 13 and literally nothing is broken; device firmware updates happen automatically through Linux. It's more integrated with the hardware than a Windows laptop.
One hardware feature, really. Besides Thunderbolt, there really isn't anything that doesn't work. I'll happily give up Thunderbolt rather than accept the significantly worse performance of the SoC and screen in the Framework 13. The screen especially is terrible. When I purchased my MacBook, the Framework 13 was top of the list of alternatives, but I can't bear a bad screen. Note that I never use macOS; I purchased the MacBook with the goal of running Linux on it. The MacBook was simply one of the best-supported devices.
The problem really isn't the CPU cores themselves. In terms of ISA they're generic Arm cores with just a tiny bit of proprietary extensions. The problem is all the peripherals: GPU, NPU, display, USB, Wi-Fi, HID, sound, etc. These all require custom drivers and reverse engineering.
While it's awesome that it runs, there doesn't seem to be GPU support yet, as the screenshot reports the llvmpipe software renderer. From what I understand there are significant differences between the M2 and M3 GPUs, so this is unlikely to be implemented soon, unless that original analysis turns out to be wrong.
Personally I don't consider it "working" as a laptop on an Apple M3 unless you actually have GPU support. Software rendering just sucks, even with a SoC as powerful as the Apple M3.
Most large C code bases aren’t really written in C. They’re written in an almost-C that includes certain extensions and undefined behavior. In this case, it uses inline assembly (an extension) and manipulating pointers as integers (undefined behavior).
I'm always thankful when people give the broad perspective and context in a discussion, which your comment does. The specifics of this particular project's usage of almost-C are not something I could have quickly figured out, so thanks. For such a large program, and one as old as Qt is at this point, I find it impressive and slightly amazing that it has in some sense self-limited its divergence from standard C. It would be interesting to see what something like SQLite includes in its almost-C.
The more portable a project is, the less weird stuff it's likely to do. The almost-C parts become more of a headache the more OSes and compilers you support. This seems pretty tame, and I'd expect SQLite to be similar. I work on some projects that only support a single OS, compiler, and CPU architecture, and they're full of dependencies on things like the OS's actual address space (few 64-bit archs use all 64 bits).
My first job was on a large-ish software product that ran on several completely incompatible platforms - various Unixes, early Windows, IBM mainframes, etc - and window systems. At first, making all of them happy seemed like annoying busy-work.
But our code was extremely clean and extremely well-factored, because it had to be. And after porting our product to the first two or three new platforms, the later ones were much much easier to do. Lesson learned.
Since (say) the 1990s, it feels like "the world" has slowly converged to pretty much (a) the browser; (b) desktop computers - Windows, Linux, Mac; (c) phones/tablets; (d) everything else (mainframes, embedded, industrial, what have you - stuff that most people will never deal with). And portability across different platforms is no longer all that important. Which is fine, but I do miss how the need for portability forced us to work with discipline and be relentless on quality.
Afaict, there are some patterns that are not supported, like converting pointers to/from integers and manipulating them with bitmasks (which is a huge anti-pattern, but some code bases do it).
Massive projects like Qt also push compilers to their limits and use various compiler-specific and platform-specific techniques which might appear as bugs to Fil-C.
Sure fooled me. I follow his Twitter account, and there isn't much he hasn't got building with it at this point. UX comes later. Amazing that it's the work of one person.
The author wrote WebKit’s allocator and worked on JavaScriptCore for over a decade. I really enjoyed his posts on the WebKit blog over the years like this one on the concurrent garbage collector (2017) https://webkit.org/blog/7122/introducing-riptide-webkits-ret...
I don't think it's so much Fil-C itself; from the looks of the diff, it's essentially a new platform. Any new platform can require porting existing software, as you can see from the posted diff.
The Dutch sold Nexperia to the Chinese, recently decided that moving their tax division to MS software was the best option, and now this. The higher-ups really seem to be asleep at the wheel.
I don't understand how that is weird. For some reason people have adopted this point of view that if you dislike someone, you suddenly need to dislike everything they do.
It's perfectly normal for a party you dislike to do something you like and also perfectly normal for a party you like to do something you dislike.