I think people here may enjoy John Oliver's report on how bad the situation for air traffic controllers currently is.

Jump to minute 18 for a discussion on floppy disks or, appropriately, to minute 25 for an "honest recruitment ad".

https://m.youtube.com/watch?v=YeABJbvcJ_k&t=1539


Reminds me a lot of his report on nuclear security.

https://www.youtube.com/watch?v=1Y1ya-yF35g

A lot of times, as a citizen, you feel that something is "off" with different government jobs but can't put your finger on what exactly.

And then you watch one of those reports and think, "holy duck, how can it be this bad, and what are they doing to my people and with my taxes?"

Different country, but when dealing with the government I often wondered why the people working there are always grumpy. Then one of them gave me the "tour" of what they have to deal with that is hidden from the public eye.

Working toilets? Nah, they had to go outside and around the building to porta-potties.

He showed me like fifty places in the building with mold. Not the fun white kind you get on cheese; I'm talking about black fungus out of Stranger Things eating half the wall. Some offices had signs saying "working in a different office today" with the printed date reading 1998. Inside, water was dripping from the ceiling.

He said, "that's why we are grumpy." Ever since, I bring a piece of cake and some hot coffee when I have to deal with government employees and thank them for their service. They are allowed to be grumpy, working under conditions I would expect in a third-world country.


10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now Altman is here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".

I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.

[0] https://news.ycombinator.com/item?id=47717587


I think it's good for CEOs of powerful companies to make statements about how they don't want too much personal power and it's important to ensure everyone does well, even and perhaps especially if there's reason to suspect they don't believe it. Saying it doesn't solve the problem, but it helps create a permission structure for the rest of us to get it to actually happen.

The reason he's saying that is that he doesn't want you to create that structure. He wants you not to create laws or checks and balances on him, because you "trust that he doesn't really want the power".

It has worked for him, repeatedly.


No, I don't think that's accurate. Altman has repeatedly and loudly demanded for these to be created, including a new detailed policy proposal just this month (https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440...).

OpenAI has also repeatedly and quietly lobbied against them.

You linked a vague PDF whose promised actions are:

> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.

Welcoming and organizing feedback!

A pilot!

Convening discussions!

This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power.

Please don't fall for this stuff.


Yeah, a company causing mass death or other disasters is maybe the single clearest signal that it should go bankrupt and someone else should take over (if the tech really is that important).

> I know I should be used to people openly lying with no consequence, but it still amazes me a bit.

Well that makes two of us. Character seems to mean nothing today.


[flagged]


> Incendiary and false headline aside

The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.

> Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?

No, but they are liable for selling a car with defective brakes, even if they didn't know the brakes were defective. And if Bayer (the ex-Monsanto) has to pay millions in compensation for causing cancer with a product they tested to hell and back, then I don't see how it's any different when the one causing cancer is an AI, just because the developers pinky swear that it's safe.


The headline is completely false and misleading. The bill does not indemnify AI companies against all mass deaths, as the headline implies. It indemnifies them if they UNKNOWINGLY provide a product that is used by others for mass murder.

If someone asks ChatGPT for places in a city where a lot of people will be around, intending mass murder but not revealing as much, you want them to be liable? Seems absolutely crazy.


All of those are false equivalences. Let me give you a few better analogies.

Selling an axe that's known to be so defective that it breaks upon use and impales anybody nearby. Even worse, it is sold as great for axe murders.

Or a big tech company like Microsoft selling software for planning a mass murder, including indoctrination material and checklists of things to be done.

Or an auto company like Toyota selling a car that is known to accelerate uncontrollably at inopportune moments and advertising it as great for hit and run campaigns.

Now let's consider a few relevant examples.

An AI model sold for planning military attacks, knowing that it sometimes selects completely innocent targets.

Or an AI model sold to families, claiming that it's safe. Meanwhile, it discreetly encourages the teenage son to commit suicide.

Or selling a financial trading AI that's known to make disastrous decisions at times.

Or selling a 'self driving' car, knowing that its autopilot frequently makes fatal mistakes.

I know that I'm supposed to assume good intentions and not make accusations on HN. Therefore let me make this rather obvious observation: some people here are dismal failures at making arguments that are consistent and free of logical fallacies, especially when it comes to questionable practices by big tech.


>Selling an axe that's known to be so defective that it breaks upon use and impales anybody nearby. Even worse, it is sold as great for axe murders.

Please provide ChatGPT/Gemini marketing materials advertising it as good for mass killings.


People championing the absolution of billionaires who create a chatbot that can't spell "strawberry" and then say it should be allowed to choose who lives and dies: that wasn't what I expected at the turn of the decade.

Beautiful.


Half of these people have financial interests in the companies in question, either directly (working for them) or indirectly, or are already part of that class. Realize who's behind the keyboard, and there's nothing surprising about it.

This can only be an intentional misreading of the bill, or you haven't read the underlying bill at all, because the headline is patently false. It indemnifies them ONLY if they unknowingly assist in mass murder.

If someone asks ChatGPT "hey chatgpt, where are spots in my city where a lot of people hang out on the street", then uses his car to mass murder 18 people, you want OpenAI to be on the stand? Sounds like an objectively insane position.

In a world with the broad liability you desire, the person who rented a hostel room to Luigi Mangione while he plotted murder would be held liable for aiding him, despite knowing nothing of his intentions.


By the time a train is delayed enough to be canceled, the mandatory compensation applies anyway, and I'm not sure how much DB cares about bad press.

I can see the cancellations as a means of stopping a cascade of delays, but it's also true that doing so means the train won't count in the delay statistics for the remaining stops. If DB doesn't want people to accuse them of gaming the statistics, perhaps they should calculate said statistics in a way that doesn't directly benefit them when they inconvenience their delayed passengers even more?
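
To make the incentive concrete, here's a toy sketch of a punctuality metric that only counts completed runs. The numbers, the 6-minute on-time threshold, and the metric itself are assumptions for illustration, not DB's actual methodology:

    # Toy punctuality metric: canceled trains drop out of the denominator.
    # All numbers are invented for illustration.
    trains = [
        {"delay_min": 0,  "canceled": False},
        {"delay_min": 3,  "canceled": False},
        {"delay_min": 45, "canceled": False},  # badly delayed
        {"delay_min": 70, "canceled": False},  # badly delayed
    ]

    def punctuality(trains, threshold=6):
        counted = [t for t in trains if not t["canceled"]]
        on_time = [t for t in counted if t["delay_min"] < threshold]
        return len(on_time) / len(counted)

    print(punctuality(trains))  # 0.5 -> "50% punctual"

    # Cancel the two worst trains and the metric jumps to 100%,
    # even though passengers are strictly worse off.
    for t in trains:
        if t["delay_min"] >= 45:
            t["canceled"] = True
    print(punctuality(trains))  # 1.0 -> "100% punctual"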


> One day after this piece went up, Chaotic Good made significant changes to their website — including pulling the “Narrative Campaign” section completely.

I checked the Internet Archive but couldn't access any of the archived versions. Apparently the website uses JS to display its content and the IA can't deal with that. Internet searches show that the page existed, though, so I'll take the content deletion as proof.



> Apparently the website uses JS to display its content and the IA can't deal with it

Sadly, this is going to become more and more common.


I don't remember Age of Empires having an atomic age?

It was probably Rise of Nations or one of the other similar games.

If I had to guess, I'd say they meant Empire Earth instead.

> Isn't this what the free software movement wanted? Code available to all?

But this is not that. The current situation is closer to "what's yours is mine and what's mine is mine".

I have been releasing my writings under a Creative Commons Attribution-ShareAlike license, which requires attribution and that anything built upon the material be distributed "under the same license as the original". And yet I have no access to OpenAI's built-upon material (I know for a fact they scrape my posts) while they get my data for free. This is so far legal, but it's probably not ethical and definitely not what the free software movement wanted.


>not what the free software movement wanted

Sorry, you don't speak for the movement. Plenty of us want this world.


No one speaks for everyone, but when TiVo used the GNU license in a similarly one-sided way, the free software movement reacted by creating a new version of the GPL (GPLv3) that prevents exactly that. And, again, the Creative Commons license I'm using was designed (perhaps ineffectively) to prevent precisely this situation. So I feel confident saying that the past actions of the leading figures of the free software movement support my view.

Sorry, I was being polite. I'm part of the free software world, and you don't speak for me. And I like the new freedom I have to make more free software, and to free existing proprietary software by remaking it as free.

What should be the maximum allowable cyclomatic complexity of license conditions?

You can download Qwen 3.5 under Apache 2.0.

I'm saying more and more: "if you don't have the time to write it, then I don't have the time to read it". Therefore my first impression is: if the process is so formulaic that you can automate it, then the content itself cannot be of any interest, and the whole song and dance should probably be scrapped altogether. Think of a person asking ChatGPT "make this one-liner sound professional" and then sending the result to someone who auto-summarizes it.

You mention that the target audience is "stakeholders who want to validate your existence", which makes me think that your target audience doesn't really care about what you actually did but rather about being heard. If that's the case then replacing the Delivery Manager (who is arguably doing a good job) with a machine that screams "I want to think about you as little as possible" is definitely a risk. It may work well to provide the DM with a first draft, though.

Disclaimer: I don't know your team or your stakeholders, and I'm probably not in your industry.


Some documents are not important today, but they become _critical_ in the future. We lost a guy from my team and he was the only one who happened to know how to do this one process that runs every couple of months. Having that document was crucial. There is a lot of writing like that. I make my LLM write PR descriptions. It's not for me now, and it might be for some of the reviewers now, but it's 100% for me in two years, when I'm trying to understand why I did any of this and what the intention even was. I'm a dev who tends to work on long-lived systems where every once in a while you desperately need to know why something was done this way five years ago.

If you remember the old OkCupid blog, they used to post interesting articles about online dating. I know their article about whether you should smile in your profile picture was eventually debunked [1], but it was nonetheless nice to have objective, data-based, non-PUA advice on how to be successful in online dating.

[1] https://blog.photofeeler.com/okcupid-is-wrong-about-smiling-...


You mistook a marketing effort for science.

There was an actual effort at data science going on there before the marketing team took it over in later years. See the book Dataclysm, published by one of the founders, for more of the good stuff.

I'm surprised it took this long, but two weeks ago I saw my first live streamer at a flea market. He was wearing some type of camera on his head (couldn't tell which one) and had his phone mounted like a wristwatch to read chat notifications. It was like that old Penny Arcade strip about Glassholes come to life [1].

He was definitely filming everyone without our consent.

[1] https://www.penny-arcade.com/comic/2013/06/14/glasshol



“Hey security, I think that guy was filming young girls. Please eject him.”

No. Plagiarism applies to people, not tools.

Everyone who studies linguistics will tell you the rules of language are descriptive, not prescriptive.

This means that when people say an LLM commits "plagiarism", LLMs are necessarily in the set of things that can commit plagiarism, regardless of whether those same people would ever say the same about a spanner.

And you can also think about it a different way: a book is a tool for storing and distributing information, and photocopying it without attribution is still plagiarism. Likewise, taking the output of an LLM (a tool for generating text in response to a prompt) without attribution is as much plagiarism as if it came from a book.

IMO, what matters most is that a lot of people want to know whether some content came from an LLM or from a human. That makes attribution useful, which makes it important to get right. And that's still the case even if you object to the specific word "plagiarism".


I don't think your example works because in the book case there's a clear author whose ideas are being reproduced without permission. The LLM in your example is not the author but rather the printing press, and no one would argue that the printing press' ideas are being stolen because the press doesn't have any.

If one wants to argue that "not citing the LLM would be plagiarism", then we would have to find the human at the end of the chain whose ideas are being reproduced, which would require LLMs to output "this idea was seen in the following training documents".

