ApolloFortyNine's comments

That's the one where one of the pilots pulled up the entire time, ignoring an alarm literally blaring the word "stall" for 2 minutes.

The poor captain found out in the last 10 seconds what he had been doing, but it was too late.

A couple of accidents occurred largely because Airbus averages conflicting pilot inputs, with nothing more than a small warning light when it happens. I'm pretty sure they would have gotten the Boeing treatment if social media had been more entrenched at the time.


A bit more complicated, as the aircraft itself was unable to detect the stall conditions due to icing of the pitot tubes, so the warning itself was in and out several times. Clearly the copilots did not understand the situation, so an inconsistent alarm could be seen as spurious or as a secondary effect.

> At the same time he made an abrupt nose-up input on the side-stick, an action that was unnecessary and excessive under the circumstances. The aircraft's stall warning sounded briefly twice due to the angle of attack tolerance being exceeded

...

> The crew's lack of response to the stall warning, whether due to a failure to identify the aural warning, to the transience of the stall warnings that could have been considered spurious, to the absence of any visual information that could confirm that the aircraft was approaching stall after losing the characteristic speeds, to confusing stall-related buffet for overspeed-related buffet, to the indications by the flight director that might have confirmed the crew's mistaken view of their actions, or to difficulty in identifying and understanding the implications of the switch to alternate law, which does not protect the angle of attack.

It's a complicated interplay of systems, where autonomous control systems are changing modes and receiving bad information during a complex, rapidly developing situation.


>A bit more complicated, as the aircraft itself was unable to detect the stall conditions due to icing of the pitot tubes, so the warning itself was in and out several times.

74 times the stall warning blared [1]

Of the 3 pilots in the cockpit, only one thought he had to pull up (see page 31); unfortunately, he was the one in control.

>rapidly developing situation.

It was the same situation from beginning to end: stuck pitot tubes. Though the stall warning only started blaring when the pilot stalled the plane. Bad airspeed indicators don't stall a plane, and they're something pilots are supposed to be able to handle; that's why 2 of the 3 were shocked that one did the exact opposite of what the situation called for.

It was pilot error. Just look at the report: every finding starts with "the Crew". Planes aren't supposed to crash into the ground just because an airspeed sensor failed.

[1] https://bea.aero/uploads/tx_elyextendttnews/annexe.01.en.pdf


I'm just going to post so I can reference this in the future.

The council is going to accomplish nothing. Eventually some company may try to build, but after 2 years another environmental survey will be requested and they'll give up and go somewhere else (likely considered a win by the people who support this bill).

These special government councils rarely accomplish anything; they're the exact kind of thing people reference when explaining why building in the US is so expensive and why we don't have large infrastructure projects. It's red tape on top of red tape.


If the council ends up being a total flop then it doesn't matter. The moratorium goes away after 2017 and everything will return to normal.

The council itself isn't legislating anything. They are simply researching and hiring relevant experts.


This reads like the classic YouTuber who's annoyed their views dropped (this almost always amounts to 'people don't actually like your content as much as you thought').

>We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago.

It's incredibly unlikely someone at X shoved the EFF in a 'low visibility' bucket. It's much more likely they've simply updated their algorithms and the EFF doesn't hit some engagement metric.

They're still getting 13 million impressions by simply posting tweets; I really don't understand 'taking a stand' here. Instead of 13 million they'll simply get 0... The opportunity cost in the worst case is a human being copy-pasting a tweet, and there's plenty of software to schedule posts across platforms, which would make it essentially free even in user time.

Imo, they had a 'personal stance' motivation, and dug deep for any reason to argue for it.


> It's much more likely they've simply updated their algorithms and the EFF doesn't hit some engagement metric.

It's even more likely that Twitter's audience in 2018 was fairly supportive of the EFF's goals, but X's audience in 2026 is either indifferent or hostile.

As they put it:

> X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.


I work as a consultant for a small media outlet, zero politics and very technical, and they report the same trend for X over the last 5 years or so. I was surprised that they told me they still want the "share on Twitter" button and keep the Twitter account even though their activity there is nil, for the following reasons combined:

1) They have thousands of followers and thousands of impressions, but the engagement ratio (likes, comments, shares per follower) is abysmal compared with the other networks.

2) The format is different from the other networks: while you can create something common for LinkedIn and Facebook, the Twitter share requires an image re-crop and a text rewrite (they don't use Instagram; the content doesn't fit).

3) While the main site receives a lot of clicks to read the full content (and see the ads that drive the income) from LinkedIn and Facebook, Twitter doesn't send clicks (people just read the header, at most hit the like-heart, and keep scrolling).

Their conclusion: Twitter doesn't work for them any more and is getting worse (that said, BlueSky is even worse for them). Even the 30 seconds spent there polishing a publication are 30 seconds wasted.

I don't know the numbers for the EFF, but having 400K followers on X and getting between zero and five comments per post if you go back a couple of weeks (to skip today's fire), and between zero and 20 retweets... sounds like a failed platform. They get better numbers from Facebook, a dying platform, with half the followers. They get similar or better numbers from Instagram with less than 10% of the followers they have on Twitter.


>between zero and 20 retweets... sounds like a failed platform.

Or they're tweeting something their followers don't care enough about to engage with, so the platform stops funneling their post to other followers.

Again, YouTubers complain about this same kind of thing regularly. It's almost always just a 'you' problem: your content is simply not engaging.


I don’t feel their stance is “I’m not getting enough attention and it’s all Musk’s fault and I’m leaving”.

More “X is simply not worth our time anymore”. I can’t say with any certainty that X is on a death spiral (personally it does feel that way), but the kind of crowd who have remained in spite of Musk’s many public embarrassments (and the handling of Grok deepfakes of women) probably aren’t the kind who are passionate about the EFF.


If that was really true, they wouldn't make a big post about why they are leaving, they would just turn off the lights and go elsewhere.

The problem for the EFF is that they don't have anywhere else to go with nearly the reach of Twitter. Bluesky has only 15 million monthly active users. They could pin their hopes on Facebook, but it's hard to think of a criticism of Twitter that wouldn't apply to Facebook.

Basically the problem for the EFF and a lot of the progressive activist orgs out there is that they want a mass global audience but a platform with progressive activist moderation. That was possible in the heyday of the Biden Administration, but starting with Musk's purchase of Twitter and his firing of much of the progressive activist staff, together with the loss in the Missouri vs Biden consent decree, it's getting harder to find a truly mass-audience social media platform that is willing to enforce progressive activist social norms.

As this realization sinks in, we are seeing organization after organization rage-quit the mass-market platforms and join more niche platforms that are moderated to their niche taste (e.g. Mastodon, Bluesky, etc.), and this is just one example of that. The EFF of old would never have seen this as a problem, but for the present-day EFF it's a big problem.

Another option is a medium without engagement at all. You post your stuff and that's it; for example, you can quote/amplify but not comment. No zingers, no mocking quote tweets, no clapbacks, etc. I think an organization like the EFF could tolerate that: they want a pure write-only medium where you make a PR announcement that gets a lot of attention but is not subject to any disparagement.

Big orgs would love a system like that, but I'm not convinced it could draw a lot of eyeballs.


However, if you view your content as valuable and the algorithm no longer does, it's probably not the best platform for you to be on.

>If the end result of this is "certain classes of white collar workers are 10-25% more productive" (which is the best results I can extrapolate from what I've seen so far) then it's really hard to imagine how OpenAI can return a profit to their investors.

If we take this at face value, and say the absolute best-case scenario is that there are literally no other uses for AI but helping programmers program faster: given 4.4 million software devs, with an average cost to the company of $200,000 (working off the US here; including benefits/levels/whatever, it should be close), those 4.4 million devs with a 20% productivity gain would save roughly 176 billion dollars a year.
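
The back-of-the-envelope math works out; a rough sketch of the calculation, assuming the headcount, cost, and productivity figures above:

    # rough savings estimate under the assumptions stated above (US devs only)
    devs = 4_400_000          # US software developers
    cost_per_dev = 200_000    # average fully loaded cost (salary + benefits), USD/year
    productivity_gain = 0.20  # assumed 20% productivity improvement

    annual_savings = devs * cost_per_dev * productivity_gain
    print(f"${annual_savings / 1e9:.0f}B per year")  # -> $176B per year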

Some companies will cut jobs, some will expand features, but that's the gist. And it's hard not to see the magnitude of improvement that's come in just 3 years, though whether that leads to a 'moat' is yet to be seen.


> If we take this at face value, and say the absolute best-case scenario is that there are literally no other uses for AI but helping programmers program faster: given 4.4 million software devs, with an average cost to the company of $200,000 (working off the US here; including benefits/levels/whatever, it should be close), those 4.4 million devs with a 20% productivity gain would save roughly 176 billion dollars a year.

I don't think that's necessarily out of line with struggling to return a profit to investors though: an individual company is only ever going to capture a tiny fraction of the productivity improvements it enables its customer base to make[1], its own cost base is unusually high for tech, and investors are seeking a 10x+ return on an $852B valuation for a company that isn't even the market leader in that segment (which isn't the only segment, but it's the optimum B2B one). You can have a great business with a great value proposition and a sustainable moat and still not generate the desired returns on investment at an $852B valuation.

[1] And that's productivity improvements over the best-known free models, not productivity improvements over reading StackOverflow.


Even if you think space travel is worth the money (which I personally do), adding humans to the mix makes projects dramatically more expensive. Even within the realm of space travel and research, sending humans is a questionable use of the money.


Sports would also be much cheaper without humans.


The most important (if not the most entertaining) things you can do in space don't involve humans. Telescopes, communications, earth observation, sending probes to distant bodies, etc.

It's nice that we can send humans to space and it's good to keep that capability going so that the knowledge doesn't die. But the unmanned missions tend to pull the weight of actually accomplishing useful things. Humans just get in the way.


Most people don't find those things interesting unless people are directly involved in them.


Turns out I don't understand the point of sports either.


People are going to have to die in order for us to increase our space knowledge. It sucks but that's just how it be; it requires humans for most of it.


>The threat of a Chinese moon landing keeps the Artemis program alive.

I don't disagree but I also don't really get it. The US performed the feat almost 60 years ago when the technology to do it didn't exist at the beginning of the program, and people didn't even know if it would be possible.

Today it's pretty well understood as a funding challenge more than anything. And sending people with the level of automation we have available today is essentially just a political move.


There's the obvious meme of "the US used to be able to do it, but can they still do it?". That wouldn't be in question if the US had, say, a Mars mission, but if all the US can show are some low-Earth-orbit activities while China has astronauts walking on the moon, that makes for a great propaganda point for the Chinese. Something to the tune of "As the American empire declines, the Chinese empire rises".

But the more impactful point is that the Chinese don't want to stop at what the Apollo program accomplished. They want to build a moon base, turn it into a lunar research station and invite other countries to cooperate. If the Chinese are wildly successful on that front, cooperating with them to get access to their moon base might be very enticing. Both for research about the moon and about low gravity. If the US doesn't answer with their own moon base that might end up in a reversal of the ISS situation (where everyone except China was invited to cooperate on the ISS).

Of course we don't know whether the Chinese will be successful on those points. But so far their space program has a great track record. They did manage to build their own space stations and lunar rovers; everything after that is, as you say, mostly a funding challenge.


> Today it's pretty well understood as a funding challenge more than anything.

I'm not sure this is true. We had very good scientists and engineers at that time.


If something in 'Chat Control' is so fundamental that it should prevent the law from even being brought up for discussion (privacy), then that 'right' should be more clearly defined in the constitution, or a constitution-like structure.

It's when laws can exist but simply have bad implementations that you obviously can't jump to an amendment process.


This just seems ripe for selective enforcement if not codified in law. I agree the algorithm they use can be addicting, but it's because it's simply good at providing content the user wants to consume.

Besides a general 'don't be too good' I'm really not sure what companies should do about it. It just seems like it'll lead to some judges allowing rulings against companies they don't like.

Television's goal was always viewer retention as well, they were just never able to target as well as you can on the internet.


I see it as similar to the public health crisis created when protonated nicotine salts made their way into vapes, along with flavors, allowing 2-10x more nicotine to be delivered: the innovation that made Juul so popular with children.

The subsequent effects - namely being easier to consume and more addictive - eventually resulted in legislation catching up, and restrictions on what Juul could do. It being "too good" of a product parallels what we're seeing in social media seven years later.

Like most [all] public health problems, we see individualization of responsibility touted as a solution. If individualization worked, it would have already succeeded. Nothing prevents individualization except its failure of efficacy.

What does work is systems-level thinking and considering it an epidemiological problem rather than a problem of responsibility. Responsibility didn't work with the AIDS crisis, it didn't work on Juul, and it's not going to work on social media.

It is ripe for public health strategies. The biggest impediment to this is people who mistakenly believe that negative effects represent a personal moral failure.


> it's because it's simply good at providing content the user wants to consume.

Well, a drug addict wants to consume his drug. Because his drug is good at keeping abstinence syndrome at bay, and probably the tolerance hasn't built up to the level where the addict can't feel the "positive" effects of it.

The user feels an impulse to consume the content, but whether they want it we can know only by asking them. They can lie consciously or unconsciously, but there is no better way to measure a desire to consume it. When talking about doomscrolling, I've never met a person who said they want to do it, but there are people who do it nevertheless.

> This just seems ripe for selective enforcement if not codified in law.

I agree. I'm not sure how they define "addiction" and how they measure "addictiveness". It is the most important detail in this story.


Companies that sell products to the public have managed this for a hundred years. Some are good at it, some are not, some completely disregarded their obligations. This is not all that new.


Let's just be honest: if you make enough money, it's legal in America.

Unless you hurt children; then it's mostly legal and a slap on the wrist.


that's the point


Nukes are the same as knives, just different in magnitude. Should one have special rules?


I think in America the second amendment makes it legal to own a nuke.


> I'm really not sure what companies should do about it

disassemble the intentionally addictive properties they built into their platforms to maximise engagement and revenue at the cost of the mental health of their users.


There are around 500 million guns in the US, according to a quick Google.

There's a lot of crime in the US, but I doubt even 1% of the guns have been used in a crime.

Also you can buy a gun and just shoot it at a range.


> I doubt even 1% of the guns have been used in a crime.

Guns are used to inflict harm. Why would the arms producer not be held accountable? He produced the gun. The gun is the tool to cause harm, injury, potentially death. If service providers are held responsible for users, arms producers must also be held accountable. Financially too.


>> Guns are used to inflict harm. Why would the arms producer not be held accountable?

Notably by criminals, who have never and will never abide by the copious federal and state laws that currently regulate how people are able to use guns. If that is the case, how does holding manufacturers responsible for something completely out of their control make sense?

It's like saying car manufacturers should be responsible for drunk drivers who kill others in collisions. Because they should've known their cars would be used by someone to do something dangerous and against the law?


The gun companies have incentive to sell as many guns as they can, to the consumerist base of gun hobbyists.

There are 500M guns in the US because it's a hobby based on buying and collecting.

Due to the amount of guns in circulation, it is common for guns to be stolen.

Therefore, there are more "illegal" guns in circulation due to the consumerist nature of gun owners, and the companies making money on selling these guns.

Without a large amount of guns in circulation, there would not be a similarly large amount of illegal guns in circulation, as they almost all came from a factory somewhere.

I like guns but I am so tired of people acting like the 2nd amendment insists it's their right to treat firearms like goddamn funkopops.

In states with legal marijuana, we set limits on the number of plants one can keep on their property, yet there is no limit to how many firearms one can poorly store for a slightly competent criminal to come collect under their nose. No liability for poorly storing them either unless it's in the immediate vicinity of a toddler.


I don't think the constitution has an amendment that guarantees freedom of marijuana ownership. I think that's the main difference. This is akin to saying that since you need a license to drive, why not be required to have an ID to walk around on the streets? The difference is rather simple: one is protected by the constitution and the other isn't.

Also, I don't think the consumerist gun owners commit a lot of crimes with their guns. Unless they are a demographic known to lose their guns or get them stolen super often, I don't see how they cause any real issue in terms of gun violence. I agree that it is really cringe to see, but they are actually usually responsible in terms of ownership, storage, etc.


You are oversimplifying the situation beyond the entire point of this ruling --

Cox internet is sometimes used to commit copyright infringement, but it is designed and marketed for legal purposes. Guns are also sometimes used for illegal purposes, but they are designed and marketed for legal purposes.


Just curious, do you feel the same way about knife manufacturers? Or automotive makers?


By that logic Toyota should be liable if someone uses a Tacoma to ram a crowd.


Strawman argument. Inflicting harm does not automatically equal a crime. And you're also disregarding the use of guns as a deterrent.


Any age verification should come with an OAuth-style, government-run API. The idea being you verify your ID with the government, and the service that requires age verification gets back a simple true or false for whether the user meets the age requirement. That way the amount of data shared is kept to a minimum.
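
A minimal sketch of what that could look like from the service's side (the endpoint, token flow, and field names here are hypothetical, purely to illustrate that only a boolean ever needs to come back):

    import requests  # hypothetical illustration; no real government API is implied

    def meets_age_requirement(assertion_token: str, minimum_age: int) -> bool:
        # Ask a government-run verification service whether the user behind this
        # token meets the age requirement. The site never sees a birth date or
        # identity details, only a yes/no answer.
        resp = requests.post(
            "https://age-check.example.gov/v1/verify",  # hypothetical endpoint
            json={"assertion_token": assertion_token, "minimum_age": minimum_age},
            timeout=10,
        )
        resp.raise_for_status()
        return bool(resp.json().get("meets_requirement", False))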

The UK, and Brazil, which passed a similar law, 'cheated' by just forcing private companies to figure it out.


No, this is an absolutely terrible idea. You're suggesting a giant, centralized, government-run data silo, with all of your online activity tied to your real-world ID. This is far worse for privacy than any data broker, it's hard to even compare.

Honestly I'd rather have private companies figure it out. Then at least you'll get multiple options, including from privacy-first companies. But that still sucks, and my preference strongly goes towards OS-level Age Indication. Just as effective in practice, 100% private and offline.


>No, this is an absolutely terrible idea. You're suggesting a giant, centralized, government-run data silo, with all of your online activity tied to your real-world ID. This is far worse for privacy than any data broker, it's hard to even compare.

Not all your online activity; even if they kept logs, it would be something like 'this site asked for age verification, we said yes'.

So they would have a list of sites, if they stored them and were allowed to store them, which is something they can get from your ISP regardless.

It could be used for bad, sure; lots of things can. In my perfect world this wouldn't exist at all, like it hasn't for 30+ years. But putting the burden on private companies was always going to create other avenues for issues.


As someone from the UK, do you honestly believe the UK government would be happy with just "true or false" data?


Companies may get multiple options but you and I and Joe average are going to have to submit PII to several vendors chosen by someone else, exactly like the credit bureau system but without the regulations they have to follow.

The fact that the powers-that-be need to understand but choose not to is that what they want is literally impossible, even with mandatory government blood screenings to access computers. Anything short of requiring identification per POST is inadequate. This whole thing is a fool's errand and we must not give any ground.


Doesn't that exist in the U.S. already? DOGE worked to create the "one big, beautiful database" and now the federal government is buying information about citizens from data brokers.


The EU is already implementing this in the best way it's ever going to be implemented:

https://digital-strategy.ec.europa.eu/en/policies/eu-age-ver...

I really don't like this perfect law enforcement future, but this EU initiative is about the best design one can have.


Almost. Their apps will only work on Apple and Google-controlled phones.

There are no plans to allow separate, standard AOSP attestation methods for Android. Google's crooked* Play Integrity will be the only one.

*crooked because it confirms Android 8 devices are safe and have full integrity, even when they're rooted, full of malware, and presenting a spoofed certificate.


Their reference apps only work on those phones, but these aren't required: https://github.com/eu-digital-identity-wallet/eudi-app-andro...

The user of Play Integrity can choose to just block Android 8.


Really? Ugh, that's terrible. Teaches me to hope.


Wrong, because then that government knows exactly which services you have accessed. It's a huge and extremely dangerous privacy violation. The real solution to the age verification problem is not to have one. The Internet has existed for over 30 years without it; it's a solution to a problem that does not exist.


Ironic that the Brazilian government tends to pay lip service to digital sovereignty while forcing their own citizens to hand their data to Zuckerberg and Peter Thiel.


Now your government knows you are a registered user of PornHub.

It will be fun when (not if) the database is leaked.


I don't think they meant literally OAuth, but instead that you can get a verification request from the party that needs your age verified, get it signed by the government, and then send the assertion back to the relying party. It's not necessary for the government to send the signed verification request directly to Pornhub. It's not even necessary for the government to sign the assertion itself: a trusted device (like most consumer phones) could store the identity locally after government verification and then sign assertions itself after biometric or PIN verification, which is what most proposals look like.
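
A rough sketch of that device-signed variant (all names and formats here are hypothetical; real proposals use standardized credential formats and hardware-backed keys):

    # hypothetical sketch: the phone holds a government-issued age credential and
    # signs a per-site assertion locally, so the government never learns which site asked
    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # in reality: hardware-backed, unlocked by PIN/biometric

    def sign_age_assertion(nonce_from_site: bytes, over_18: bool) -> dict:
        payload = json.dumps({
            "nonce": nonce_from_site.hex(),  # binds the assertion to this specific request
            "over_18": over_18,              # the only personal fact disclosed
            "issued_at": int(time.time()),
        }).encode()
        return {"payload": payload, "signature": device_key.sign(payload)}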


I am not holding my breath.


> The UK, and Brazil, which passed a similar law, 'cheated' by just forcing private companies to figure it out.

At least in the Brazilian case, it's outright illegal for a private company to implement the thing you are describing. So if the government doesn't provide the service, there isn't much for them to figure out.


The UK government sometimes likes to do things in very awkward ways, against the grain of how everyone else does them. See the COVID app.

However, my Apple ID verified me based on my account age; I didn't need to provide anything.


In the EU we have eIDAS, at least in some countries. It works. But mostly just for actual citizens.


Some kind of Digital ID?

The UK government proposed that and was met by the usual resistance to it.


Fuck that. California's way is the absolute maximum that should be done: When accounts get created on an operating system, allow the user to provide a completely unproven age. Then that age should be the only age check.

If the goal really is to just help parents prevent their kids from accessing inappropriate material, that's plenty. Anything else, and you're admitting the real goal is Big Brother style surveillance.


If the US had this, Trump would definitely be using it right now to send ICE to arrest people that said mean things about him on social media, didn't drop out of college, didn't bribe him enough, etc.

