My team has at least one person on every continent (except Africa and Antarctica, but we do have someone on Réunion), so meetings are, and will always be, video conferences.
That is both really useful and a great example of why they should have stopped writing code in C decades ago. So many kernel bugs have arisen from people adding early returns without thinking about the cleanup functions, a problem that many other language platforms handle automatically on scope exit.
You don't even need an LLM for this stuff. GCC has the `__cleanup__` attribute, and kernel static analyzers like Smatch have been catching missing unlocks for a decade now. People just ignore linter warnings when submitting patches, so the language itself isn't really the issue. The LLM is basically just acting as a talking linter that can explain the error in plain English.
Linux doesn't have any of: sufficient testing, sufficient static analysis, or sufficient pre-commit code review. Under those conditions, which I take as a given because it's their project and we can't just swap out the leaders with more tasteful leaders, adding this type of third-party review feedback strikes me as valuable. Perhaps, to your point, it would also be possible to simply run static analyzers on new proposed commits.
Of course I specifically avoided invoking that language's name within the context of kernel programming in fear of summoning a Linus.
And he's so right. I didn't think like that back then, but new/delete (which have to be overloaded for the kernel) behind allocators behind containers, vtables, `= 0` pure virtuals, uninitialized members, unhandled constructor errors, template magic, "sometimes RVO", compiler hints, "sometimes reinterpret_cast", third-party libraries... it would have been a disaster 20 years ago. Now he's being nice to Rust partly, I suspect, to spite that language I love even more.
No, it's reviewing patches posted on LKML and offering suggestions. The original patch corresponding to your link was this one, which was (presumably!) written by a human:
Not if it's using Confidential Computing. Then you're trusting "only" the CPU vendor (plus probably the government of the country where that vendor is located), but you're trusting the CPU already.
Are there really ISPs that don't support IPv6? I've had IPv6 from various ISPs since around 2010, and even my phone gets an IPv6 address from the cellular network.
Yes, and it's ANNOYING. In Switzerland there is literally not one cellular network that issues IPv6 addresses. My workplace network (a school using some sort of Microslop solution) doesn't issue IPv6 addresses either.
I have an IPv6-only VPN with some personal services. In theory the data can be transported over IPv4, but Android doesn't even query AAAA records if it doesn't have a route for ::/0. So when I'm not home, I can't reach my VPN servers, because as far as Android is concerned the name has no address.
(I fix it by routing all IPv6 traffic through my VPN; just routing connectivitycheck may suffice, though.)
Anything Microsoft lacking IPv6 is a configuration issue: ever since Vista, Windows networking (in corporate environments) has treated IPv4-only as a somewhat "degraded" configuration. (Some time ago there was even a funny news post about how Microsoft was forced to keep IPv4 enabled on its guest WiFi after switching everything else to IPv6-only.)
It varies in different parts of the world. Here in New Zealand, every fixed-line (i.e. fibre/xDSL) provider except one offers IPv6, the only holdout being the ex-government telco. Wireless/mobile (4G/5G or FWA) is a different story, however: all wireless/mobile networks are still IPv4-only to this day (even though two of them are also fixed-line providers offering IPv6 on their fixed-line service!).
Meta likes this stuff because (a) it's a barrier to entry to new social networks and (b) it heads off the under 16 bans which have happened in other countries.
It's also valuable data for advertisers: it verifies that real people are being served your ads, and that they're going to the desired age range and appropriate audience.