It's a long regex, but it's just whitespace followed by an alternation of 5 different types of data: splice-unquote, special characters, strings, comments, and symbols. The string-tokenizing branch is a bit complicated because it has to allow internal escaping of quotes. Early iterations of the guide didn't explain the regex in detail, but the section now describes each of the regex components.
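That alternation can be seen in a Python sketch of a mal-style tokenizer (this mirrors the regex from the make-a-lisp guide; the `tokenize` helper and its comment-filtering are illustrative, not quoted from the guide):

```python
import re

# One alternation, five branches, matching the description above:
# splice-unquote, special characters, strings (with escaping),
# comments, and symbols/atoms. Leading whitespace/commas are skipped.
TOKEN_RE = re.compile(
    r"""[\s,]*(~@                  # splice-unquote
              |[\[\]{}()'`~^@]     # special single characters
              |"(?:\\.|[^\\"])*"?  # strings, allowing \" escapes inside
              |;.*                 # comments, to end of line
              |[^\s\[\]{}('"`,;)]* # symbols, numbers, keywords
              )""",
    re.VERBOSE,
)

def tokenize(src):
    # Drop empty matches and comment tokens.
    return [t for t in TOKEN_RE.findall(src) if t and not t.startswith(";")]

print(tokenize('(+ 1 2)'))  # → ['(', '+', '1', '2', ')']
```

Note how the string branch `"(?:\\.|[^\\"])*"?` consumes either an escape pair or any non-quote character, which is what lets `\"` appear inside a string token.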
Yeah, a little weird, since regexes can’t parse context-free languages. I suppose most so-called regexes aren’t actually regular expressions, but it still feels like driving screws with a hammer.
Mal uses a regex for lexing/tokenizing. I didn't want people to get hung up on the lexing step (my university compilers class spent 1/3rd of the semester just on lexing). It's certainly a worthwhile area to study but not the focus of mal/make-a-lisp.
> We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter. The experiment will not collect any personal data, individual behaviors, or personal opinions. It is limited to studying the patching process OSS communities follow, instead of individuals.
> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research.
I'm not sure how it affects things, but I think it's important to clarify that they did not obtain the IRB-exempt letter in advance of doing the research, but after the ethically questionable actions had already been taken:
The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained). Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning. ... We would like to thank the people who suggested us to talk to IRB after seeing the paper abstract.
I'm a bit shocked that the IRB gave an exemption letter - are they hoping that the kernel maintainers won't take the (very reasonable) step towards legal action?
I'd guess they may not have understood what actually happened, or were leaning heavily on the IEEE reviewers having no issues with the paper, as at that point it'd already been accepted.
> We send the emails to the Linux community and seek their feedback.
That's not really what they did.
They sent the patches, and the patches were either merged or rejected.
And they never let anybody know that they had introduced security vulnerabilities into the kernel on purpose, until they got caught and people started reverting all the patches from their university and banned the whole university.
> (4). Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.
It'd be great if they pointed to those "please don't merge" messages on the mailing list or anywhere.
Seems like there are some patches already on stable trees [1], so they're either lying, or they didn't care if those "don't merge" messages made anybody react to them.
The paper doesn't cite specific commits used. It's possible that any of the commits in stable are actually good commits and not part of the experiment. I support the ban/revert, I'm just pointing out there's a 3rd option you didn't touch on.
There were 4 people involved: the students Qiushi Wu and Aditya Pakki introducing the faulty patches, and 2 others, Prof. Kangjie Lu and Asst. Prof. Wenwen Wang, patching vulnerabilities in the same area. Banning the leader seems OK to me, even if he produced some good fixes and software to detect such bugs. The only question is Wang, who is now in Georgia and was never caught. Maybe he left Lu at UMN because of his questionable ethics.
Also, they talk about three cases. However, the list of patches to be reverted by gregkh is far longer than three: more than a hundred. Most of the first batch look sufficiently similar that I would guess all of them are part of this "research". So the difference in numbers alone suggests they are most probably lying.
I was more ambivalent about their "research" until I read that "clarification." It's weaselly bullshit.
>> The work taints the relationship between academia and industry
> We are very sorry to hear this concern. This is really not what we expected, and we strongly believe it is caused by misunderstandings
Yeah, misunderstandings by the university that anyone, ever, in any line of endeavor would be happy to be purposely fucked with as long as the perpetrator eventually claims it's for a good cause. In this case the cause isn't even good, they're proving the jaw-droppingly obvious.
The first step of an apology is admitting the misdeed. Here they are explicitly not acknowledging that what they did was wrong, they are still asserting that this was a misunderstanding.
Even their choice of wording ("We are very sorry to hear this concern.") is the blend of word fuckery that conveys the idea they care nothing about what they did or why it negatively affected others.
..."Because if we're lucky tomorrow, we won't have to deal with questions like yours ever again." --Firesign Theater, "I Think We're All Bozos on the Bus"
Yet we do nothing about it? I wouldn't call that jaw-droppingly obvious; if anything, without this research I'm pretty sure anyone would have argued that such a patch would be caught well before making its way into stable.
I've literally never come across an open source project that was thought to have a bullet proof review process or had a lack of people making criticisms.
What they do almost universally lack is enough people making positive contributions (in time, money, or both).
This "research" falls squarely into the former category and burns resources that could have been spent on the latter.
This is zero percent different from a bad actor, and hopefully it's treated as criminal. I think a lot of maintainers work for large corporations like Microsoft, Oracle, Ubuntu, Red Hat, etc... I think these guys really stepped in it.
> And they never let anybody know that they had introduced security vulnerabilities into the kernel on purpose...
Yes, that's the whole point! The real malicious actors aren't going to notify anyone that they're injecting vulnerabilities either. They may be plants at reputable companies, and they'll make it look like an "honest mistake".
Had this not been caught, it would've exposed a major flaw in the process.
> ...until they got caught and people started reverting all the patches from their university and banned the whole university.
Either these patches are valid fixes, in which case they should remain, or they are intentional vulnerabilities, in which case they should've already been reviewed and rejected.
Reverting and reviewing them "at a later date" just makes me question the process. If they haven't been reviewed properly yet, it's better to do it now instead of messing around with reverts.
This reminds me of that story about Go Daddy sending everyone "training phishing emails" announcing that they had received a company bonus - with the explanation that this is ok because it is a realistic pretext that real phishing may use.
While true, it's simply not acceptable to abuse trust in this way. It causes real emotional harm to real humans, and while it also may produce some benefits, those do not outweigh the harms. Just because malicious actors don't care about the harms shouldn't mean that ethical people shouldn't either.
This isn't some employer-employee trust relationship. The whole point of the test is that you can't trust a patch just because it's from some university or some major company.
The vast majority of patches are not malicious. Sending a malicious patch (one that is known to introduce a vulnerability) is a malicious action. Sending a buggy patch that creates a vulnerability by accident is not a malicious action.
Given the completely unavoidable limitations of the review and bug-testing process, a maintainer has to react very differently once they have determined that a patch is malicious: all previous patches from that same source (person or even organization) have to be either re-reviewed to a much higher standard or reverted indiscriminately, and any future patches have to be rejected outright.
This puts a heavy burden on a maintainer, so intentionally creating this type of burden is a malicious action regardless of intent. Especially given that the intent was useless in the first place - everyone knows that patches can introduce vulnerabilities, either maliciously or by accident.
The vast majority of drunk drivers never kill anyone.
> Sending a malicious patch (one that is known to introduce a vulnerability) is a malicious action.
I disagree that it's malicious in this context, but that's irrelevant really. If the patch gets through, then that proves one of the most critical pieces of software could relatively easily be infiltrated by a malicious actor, which means the review process is broken. That's what we're trying to figure out here, and there's no better way to do it than replicate the same conditions under which such patches would ordinarily be reviewed.
> Especially given that the intent was useless in the first place - everyone knows that patches can introduce vulnerabilities, either maliciously or by accident.
Yes, everyone knows that patches can introduce vulnerabilities if they are not found. We want to know whether they are found! If they are not found, we need to figure out how they slipped by and how to prevent that from happening in the future.
> If the patch gets through, then that proves one of the most critical pieces of software could relatively easily be infiltrated by a malicious actor, which means the review process is broken.
That is a complete misunderstanding of the Linux dev process. No one expects the first reviewer of a patch (the person that the researchers were experimenting on) to catch any bug. The dev process has many safeguards - several reviewers, testing, static analysis tools, security research, distribution testing, beta testers, early adopters - that are expected to catch bugs in the kernel at various stages.
Trying to deceive early reviewers into accepting malicious patches for research purposes is both useless research and hurtful to the developers.
Open source products rely on trust. There is no way to build a trust-less open source product. Of course, the old mantra of trust, but verify is very important as well.
But the Linux kernel is NOT a security product - it is a kernel. It can be used in entirely disconnected devices that couldn't care less about security, as well as in highly secure infrastructure that powers the world. The ultimate responsibility for delivering a secure product based on Linux lies with the people delivering that product. The kernel is essentially a library, not a product. If someone is assuming they can build a secure product by trusting Linux to be "secure" then they are simply wrong, and no amount of change in the Linux dev process will fix their assumption.
Of course, you want the kernel to be as secure as possible, but you also want many other things from the kernel as well (it should be featureful, it should be backwards compatible with userspace, it should run on as many architectures as needed, it should be fast and efficient, it should be easy to read and maintain etc).
> Yes, that's the whole point! The real malicious actors aren't going to notify anyone that they're injecting vulnerabilities either. They may be plants at reputable companies, and they'll make it look like an "honest mistake".
This just turns the researchers into black hats. They are just making it look like "a research paper."
Not sure why you are so obsessed with this. Yes, this process does involve humans, but the process has aspects that can be examined independently of humans.
This study does not care about the reviewers, it cares about the process. For example, you can certainly improve the process without replacing any reviewers. It is just blatantly false to claim the process is all about humans.
Another example, the review process can even be totally conducted by AIs. See? The process is not all about humans, or human behavior.
To make this even more understandable, consider the process of building a LEGO set: you need humans to build it, but you can examine the process of building the LEGO without examining the humans who build it.
> This study does not care about the reviewers, it cares about the process. For example, you can certainly improve the process without replacing any reviewers. It is just blatantly false to claim the process is all about humans.
This was all about the reaction of humans. They sent in text with a deceptive description and tried to get a positive answer even though the text was not wholly what was described. It was a psych study in an uncontrolled environment with people who did not know they were participating in a study.
No. This is not all about the reaction of humans. This is not a psych study. I have explained this clearly in previous comments. If you believe the process of doing something is all about humans, I have nothing to add.
People are obsessed because you're trying to excuse the researchers' behavior as ethical.
"Process" in this case is just another word for people because ultimately, the process being evaluated here is the human interaction with the malicious code being submitted.
Put another way, let's just take out the human reviewer, pretend the maintainers didn't exist. Does the patch get reviewed? No. Does the patch get merged into a stable branch? No. Does the patch get evaluated at all? No. The whole research paper breaks down and becomes worthless if you remove the human factor. The human reviewer is _necessary_ for this research, so this research should be deemed as having human participants.
You did. You just won't accept it because you don't want to. Every time you try to draw the focus of the conversation to "it's a process study" you're trying to diminish the severity of what the researchers did here.
How was this study conducted? For every patch that the researchers sent, what process did it go through?
The answer is, it was reviewed and accepted by a human. That's it. Full stop. There's your human subject right there in the middle of your research work. It's not possible to conduct this research without that human subject interacting with your research materials. You do not get to discount that human participation because "Oh well we COULD replace them with an AI in the future". Well your study didn't, which means it needs to go through the human subjects review process.
When you claim that this study was about a process, you're literally taking the researchers side. That's what they've been insisting on as the reason why this study is ethical and they did not need to inform or obtain consent from the kernel development team. That's the excuse they used to get out of IRB's review process so they can be considered "not a human subjects research". That's the excuse they needed so they can proceed without having to get a signed consent form. They did all of this so they could conduct a penetration test without the organization they were attacking knowing about it.
You don't seem to be able to comprehend why or how the maintainers feel deceived here, or that their feelings are legitimate. If you did, you wouldn't keep banging on about "oh this is just a process study, the people don't matter, it's all isolated from humans". Funny enough, the people who DID interact with this research DID feel they mattered and DID feel deceived.

The whole point of the IRB was to prevent exactly this: researchers conducting unethical research which would only come to light after the study concluded and the injured parties complained (and deceit IS a form of harm). For research which is supposed to be isolated from humans, and which thus didn't see the need to obtain a signed consent form, that's not really the outcome you expect to see if everything was on the up and up.

Another form of harm from this study: the maintainers now have to go over everything that was submitted again to ensure there's nothing else to be worried about. That's a lot of wasted man-hours and definitely constitutes harm as well. All of the University of Minnesota now has less access to the project after getting banned, even more collateral damage and harm caused to their own institution.
Let's be honest. If the researchers had been able to sneak their code into a stable or distribution version of the kernel, they'd be praising themselves to high heaven. Look at how significant our results were, we fucked up all of Linux! The only reason they didn't is that at least they could recognize that would be going a step too far. They're just looking for excuses to not get punished at this point. Same with the IRB. The IRB is now trying to wiggle out of the situation by insisting everything is OK. The IRB is also made up of professors who have a reputation to maintain! They know they let something through that should never have been approved in its current form. Most human subjects research NEVER gets this kind of blowback, and the fact that this one did means they screwed up and they know it.
No ethics review board considers a multi page, multi forum, lengthy discussion on the ethics of a study they approved as a good sign. Honestly, any study that gets this much attention would be considered a huge success in any other situation.
"The answer is, it was reviewed and accepted by a human. That's it. Full stop. There's your human subject right there in the middle of your research work. "
That's not the correct or relevant criterion. If you were correct, testing airport security and testing anti-money-laundering checks at a bank would amount to human experiments. In fact it's hard to think of any study of the real world that would not become a "human experiment".
"When you claim that this study was about a process, you're literally taking the researchers side."
That's some seriously screwed-up logic right there.
"Weinstein was a Nazi and a serial killer, if you disagree with me you are taking his side"
Um, academics aren't allowed to assemble bombs and then try and sneak them onto planes with the excuse that it's not a human trial. That'd be absurd.
It's easy to think of studies that don't involve humans so that statement is just wilful obfuscation. Physics, chemistry, heck lots of biology, and of course computer science are primarily made of studies on objects rather than people. Of those that are done on people they are almost always done on people who know they are the subject of an experiment. Very few studies are like this one.
I am sorry, but your argument is all over the place. What on earth are you arguing? That a human trial does not excuse what would otherwise be a crime? That airport security is not tested with real bombs? That every study outside of the natural sciences is a human experiment?
Studies of airport security are done all the time; that's how we know it's terribly ineffective. The staff of the airport are not told about them, and they are not human experiments.
Experiments on people have a specific definition that goes beyond "a human is present".
Airport security staff consent to this type of testing at hiring time so testing can be random, and not just anyone can try to sneak a weapon through security to see if it's caught as "a test".
Perhaps a similar approach that allows randomness with some sort of agreement with the maintainers could have prevented this issue while preserving the integrity of the study.
Unfortunately “no” doesn’t constitute a rebuttal, and the responding commenter makes many valid points.
It is self-evident that this study tangibly involved people in the scope, those people did not provide consent prior, and now openly state their grievances. It is nothing short of arguing in bad faith to claim otherwise.
Repeating the same thing over and over does not make it a fact.
Maybe the stated aim of the research was to study the process. But what they actually did was study how the people involved implemented it.
Being publicly manipulated into merging buggy patches, and wasting hours of people's time are two pretty obvious effects this study had that could cause some amount of distress and thus it cannot be dismissed as simply "studying the process".
This is exactly what I would have said: this sort of research isn't 'human subjects research' and therefore is not covered by an IRB (whose job it is to prevent the university from legal risk, not to identify ethically dubious studies).
It is likely the professor involved here will be fired if they are pre-tenure, or sanctioned if post-tenure.
How in the world is conducting behavioral research on kernel maintainers to see how they respond to subtly-malicious patches not "human subject research"?
Of course, there are other ethical and legal requirements that you're bound to, not just this one. I'm not sure which requirements IRBs in the US look into though, it's a pretty murky situation.
It seems to qualify per §46.102(e)(1)(i) ("Human subject means a living individual about whom an investigator [..] conducting research: (i) Obtains information [...] through [...] interaction with the individual, and uses, studies, or analyzes the information [...]")
I don't think it'd qualify for any of the exemptions in 46.104(d): 1 requires an educational setting, 2 requires standard tests, 3 requires pre-consent and interactions must be "benign", 4 is only about the use of PII with no interactions, 5 is only about public programs, 6 is only about food, 7 is about storing PII and not applicable and 8 requires "broad" pre-consent and documentation of a waiver.
rather than arguing about the technical details of the law, let me just clarify: IRBs would actively reject a request to review this. It's not in their (perceived) purview.
It's not worth arguing about this; if you care, you can try to change the law. In the meantime, IRBs will do what IRBs do.
If the law, as written, does actually classify this as human research, it seems like the correct response is to sue the University for damages under that law.
Since IRBs exist to minimize liability, it seems like that would be that fastest route towards change (assuming you have legal standing )
Woah woah woah, no need to whip out the litigation here. You could try that, but I am fairly certain you would be unsuccessful. You would be thrown out with "this does not qualify under the law" before it made it to court and it wouldn't have much bearing except to bolster the university.
It obviously qualifies and the guy just quoted the law at you to prove it.
Frankly universities and academics need to be taken to court far more often. Our society routinely turns a blind eye to all sorts of fraudulent and unethical practices inside academia and it has to stop.
I had a look at section §46.104 https://www.hhs.gov/ohrp/regulations-and-policy/regulations/... since it mentioned exemptions, and at (d) (3) inside that. It still doesn't apply: there's no agreement to participate, it's not benign, it's not anonymous.
If there's some deeply legalistic answer explaining how the IRB correctly interpreted their rules to arrive at the exemption decision, I believe it. It'll just go to show the rules are broken.
IRBs are like the TSA. Imposing annoyance and red tape on the honest vast-majority while failing to actually filter the 0.0001% of things they ostensibly exist to filter.
are you expecting that science and institutions are rational? If I was on the IRB, I wouldn't have considered this since it's not a sociological experiment on kernel maintainers, it's an experiment to inject vulnerabilities in a source code. That's not what IRBs are qualified to evaluate.
> it's an experiment to inject vulnerabilities in a source code
I'm guessing it passed for similar reasoning, along with the reviewers being unfamiliar with how "vulnerabilities are injected." To get the bad code in, the researcher needed to have the code reviewed by a human.
So if you rephrase "inject vulnerability" as "sneak my way past a human checkpoint", you might have a better idea of what they were actually doing, and might be better equipped to judge its ethical merit -- and if it qualifies as research on human subjects.
To my thinking, it is quite clearly human experimentation, even if the subject is the process rather than a human individual. Ultimately, the process must be performed by a human, and it doesn't make sense to me that you would distinguish between the two.
And the maintainers themselves express feeling that they were the subject of the research, so there's that.
Testing airport security by putting dangerous goods in your luggage is not human experimentation. Testing a bank's security is not human experimentation. Testing border security is not.
What makes the people reviewing the Linux kernel more 'human' than any of the above?
It's not an experiment in computer science; these guys aren't typing code into an editor and testing what the code does after they've compiled it. They're contributing their vulnerabilities to a community of developers and testing whether these people accept it. It is absolutely nothing else than a sociological experiment.
This reminds me of a few passages in the SSC post on IRBs[0].
Main point is that IRBs were created in response to some highly unethical and harmful "studies" being carried out by institutions thought of as top-tier. Now they are considered to be a mandatory part of carrying out ethical research. But if you think about it, isn't outsourcing all sense of ethics to an organization external to the actual researchers kind of the opposite of what we want to do?
All institutions tend to be corruptible. Many tend to respond to their actual incentives rather than high-minded statements about what they're supposed to be about. Seems to me that promoting the attitude of "well an IRB approved it, so it must be all right, let's go!" is the exact opposite of what we really want.
All things considered, it's probably better to have something there than nothing. But you still have to be responsible for your own decisions. "I bamboozled our lazy IRB into approving our study, so I'm not responsible for it being obviously a bad idea" just isn't good enough.
If you think about it, it's actually kind of meta to the code review process they were "studying". Just like IRBs, Code review is good, but no code review process will ever be good enough to stop every malicious actor every time. It will always be necessary to track the reputation of contributors and be able to mass-revert contributions from contributors later determined to be actively malicious.
I guess I have a different perspective. I know a fair number of world class scientists; like, the sort of people you end up reading about as having changed the textbook. One of these people, a well-known bacteriologist, brought his intended study to the IRB for his institution (UC Boulder), who said he couldn't do it because of various risks due to studying pathogenic bacteria. The bacteriologist, who knew far more about the science than the IRB, explained everything in extreme detail and batted away each attempt to shut him down.
Eventually, the IRB, unhappy at his behavior, said he couldn't do the experiment. He left for another institution (UC San Diego) immediately, having made a deal with the new dean to go through expedited review. It was a big loss for Boulder and TBH, the IRB's reasoning was not sound.
They weren't studying the community, they were studying the patching process used by that community, which a normal IRB would and should consider to be research on a process and therefore not human research. That's how they presented it to the IRB, so it got passed, even if what they were claiming was clearly bullshit.
This research had the potential to cause harm to people despite not being human research and was therefore ethically questionable at best. Because they presented the research as not posing potential harm to real people, they lied to the IRB, which is grounds for dismissal and potential discreditation of all participants (their post-graduate degrees could be revoked by their original school or simply treated as invalid by the educational community at large). Discreditation is unlikely, but loss of tenure for something like this is not out of the question, which would effectively end the professor's career anyway.
At a minimum, is needlessly increasing the workload of an unwitting third party considered a harm? I ask, because I’d be pretty fucking mad if someone came along and added potentially hundreds of man-hours of work in the form of code review to my life.
Considering that the number of patches submitted was quite limited I don't think the original research paper would qualify as a DoS attack. The workload imposed by the original research appears to have been negligible compared to the kernel effort as a whole, no more than any drive by patch submission might result in. So no, I wouldn't personally view that as harmful.
As to the backdated review now being undertaken, as far as I'm concerned that decision is squarely on the maintainers. (Honestly it comes across as an emotional outburst to me.)
Wasting time is not considered stealing. If it were, there would be a long queue to collect money from: all the ad agencies, telephone menus where you have to go 10 levels deep before you speak to a person, anyone bothering people on the street with questions, anyone making your possessions dirty would be a criminal. Anyone going on a date that doesn't work out would be a criminal.
Sure, but I'm still going to be pretty annoyed with you. And if you've wasted my time by messing with a system or process under my control then I'm probably going to block you from that system or process.
As a really prosaic example, I've blocked dozens - if not hundreds - of recruiter email addresses on my work email account.
In my experience in university research, correctly portraying the ethical impact is unfortunately the burden of the researchers, and the most plausible explanation in my view, given their lack of documentation of the request for IRB exemption, is that they misconstrued the impact of the research.
It seems very possible to me that an IRB wouldn't have accepted their proposed methodology if they hadn't received an exemption.
> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.
Is there anyone on hand who could explain how what looks very much like a social engineering attack is not "human research"?
I would say that the concept and implementation in C is inherently insecure. Switching to something less reviewed because there is a sudo vulnerability is not a guarantee that you are now "safer" especially if those ports are not reviewed.
As far as I can say, never ever use slicer69/doas, I've found 3 critical security vulnerabilities in it, the author does not understand C or how it should work in general.
Here are 3 examples of issues I found, where the author used misleading commit titles to hide the issues and made excuses, calling a clear buffer overflow very similar to the one found in sudo merely "potential":
1. Void Linux doesn't claim its BSD inspired, BSD inspired could mean 100 different things.
2. runit is not a "sysv-style" init; it's the complete opposite.
runit is a supervisor, inspired by daemontools. An "rc init" is more closely related to "sysv-style" than runit is.
3. > I like OpenBSD's init system: super simple and best of all, no bs runlevels!
Runit has no runlevels, can actually restart services automatically when they die, can signal services without relying on PID files (which are prone to race conditions), and can create a pipe between a service and a log service that will never lose any logs.
The size of `sockaddr_storage` is not defined by POSIX, but `sockaddr_un` is defined, and you can't just change `sun_path` to a pointer, so to increase `sun_path` you would have to increase the `sockaddr_storage` struct size.
This comes with other downsides. The first is incompatibility with other OSes; most OSes seem to be around 104-109 bytes.
The second new problem would be that with any larger value, each socket call would have to copy more data, even for non-Unix sockets. If you change it to something that looks sensible like `PATH_MAX`, you end up increasing memory requirements for any application that works with sockets.
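The limit is easy to observe without touching C: CPython checks the bind path against `sizeof(sun_path)` and rejects anything longer with "AF_UNIX path too long". This probe (an illustration written for this comment, not quoted code; the `can_bind` helper is made up) shows both sides of the boundary:

```python
import os
import socket
import tempfile

# sun_path is 108 bytes on Linux and around 104 on most BSDs; a bind
# path longer than that is rejected outright rather than truncated.
def can_bind(total_len):
    d = tempfile.mkdtemp()
    # Pad the filename so the whole path is roughly total_len chars.
    path = os.path.join(d, "s" * max(1, total_len - len(d) - 1))
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        s.bind(path)
        os.unlink(path)   # remove the socket file we just created
        return True
    except OSError:       # e.g. "AF_UNIX path too long"
        return False
    finally:
        s.close()

print(can_bind(60))    # short path: fits in sun_path
print(can_bind(300))   # exceeds sun_path: bind is rejected
```

The 300-byte case fails before the syscall is even made, which is exactly the incompatibility headache described above: portable code has to assume the smallest common `sun_path`.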
> Knoppix' Startvorgang läuft nach wie vor per Sys-V-Init mit wenigen Bash-Skripten, welche die Systemdienste effizient sequenziell oder parallel starten.
> Knoppix' boot sequence still runs via Sys-V init, with a few bash scripts which start the system services efficiently, sequentially or in parallel.
So it still has all the downsides of sysvinit, except that it might start services in a more efficient order, or some in parallel.
If production is servers, then alpine is the better choice as it provides stable releases.
Void Linux is a nice desktop system and you can run it on single servers you personally take care of.
But I would rather not use it at scale for servers as updating rolling release can always lead to issues and not updating will leave security issues unfixed.
The one benefit would probably be providing a glibc version if you require it.
Most websites don't have redundant servers, but it's still there and used by a lot of big sites.
The whole point of it is to connect to multiple A and AAAA records with a very short delay (250ms in chrome).
The browsers use poll to get notified about the first established connection and then use it.
So if one A/AAAA record's IP is unreachable or slow, it will use another one.
There will be no fallout (except maybe for clients that don't use something like Happy Eyeballs, but every browser does) if you have multiple A/AAAA records and one of them is down.
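The staggered-connect behaviour described above is built into Python's asyncio (since 3.8), so a minimal sketch only needs the `happy_eyeballs_delay` argument; the local throwaway server here is just to keep the example self-contained, not part of the technique:

```python
import asyncio

async def main():
    # Local server so the example needs no external network.
    server = await asyncio.start_server(
        lambda reader, writer: writer.close(), "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    # With multiple resolved A/AAAA records, open_connection races
    # attempts, starting the next candidate 250 ms after the previous
    # one (the same delay Chrome uses) instead of waiting for a full
    # connect timeout on the first address.
    reader, writer = await asyncio.open_connection(
        "127.0.0.1", port, happy_eyeballs_delay=0.25)
    writer.close()
    server.close()
    await server.wait_closed()
    return "connected"

result = asyncio.run(main())
print(result)
```

With a single address the delay is a no-op; against a host with several records, the first address to complete the TCP handshake wins and the other attempts are cancelled.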