Hacker News | pjc50's comments

This has the air of a parody spy caper where the various people who have broken in keep tripping over each other.

The source leak is really interesting, though. We don't often get to see game source code, and it often has surprises in it.


> Will the SECOND GROUP leak the source code? Is the SECOND GROUP telling the truth? Did the SECOND GROUP lie and have access to Ubisoft code this whole time? Was it MongoBleed? Will the FIRST GROUP get pinned for this? Who is this mysterious THIRD GROUP? Is this group related to any of the other groups?

This read to me like the end of a soap opera. Tune in tomorrow to find out!


> it's a shame politicians can't be kept accountable tens of years from now.

Donald Trump is 79. He can only be held accountable in the afterlife, if there is one.


Usually different people. Or, in the case of LLMs, they're not given a "no" option, or it's carefully hidden.

Elon got singled out because the changes he was forcing on Grok were at once conspicuously stupid (Grok ranting about Boers), racist (Boers again), and ultimately ineffective (repeated incidents of him fishing for an answer and getting a different one).

It does actually matter what the values are when trying to do "alignment". Although you are absolutely right that we've not solved for human alignment, which puts a real limit on the whole thing.


I would also add that Elon got singled out because he was very public about the changes. Other players are not, so it's hard to assess the existence of "corrections" and the reasons behind them.

No. If ChatGPT or Claude suddenly started bringing up Boers randomly, they would get "singled out" at least as hard. Probably even more so for ChatGPT.

I think what the other poster was trying to say is that the other AI chatbots would be more subtle and their bias would be harder to detect.

Yeah, they did raise a fuss when the AI generated Black Nazis, etc.

He was public and vocal about it, while the other big boys just quietly made fixes towards their desired political viewpoint. ChatGPT famously corrected the anti-transgender bias it had earlier.

Either way, outsourcing opinion to an LLM is dangerous no matter where you fall in the political spectrum.


> declining service quality, higher complaint volumes, and internal firefighting

LLMs are a great technology for making up plausible-looking text. When correctness matters and you don't have a second system that can reliably check it, the output turns out to be unreliable.

When you're dealing with customer support, everyone involved has already been failed by the regular system. So they're an exception, and they're unhappy. So you really don't want to inflict a second mistake on them.


All true. A counter, and a counter-counter:

The counter: the existing system of checks with (presumably) humans was not good enough. For the last 15 months or so, I have been dealing with E.ON claiming one thing and doing another, and had to escalate it to the Ombudsman. I don't think E.ON were using an AI to make these mistakes; I think they just couldn't get customer support people to cope with the idea that "the address you have been posting letters to isn't simply wrong, it does not exist". An LLM would have done better, except for what I'm going to say in the counter-counter.

The counter-counter is that LLMs are only an extra layer of Swiss cheese: the mistakes they make may be different from human mistakes or may overlap, but they're still definitely present. Specifically, I expect an LLM would have made two mistakes in my case. One is the same mistake the actual humans made: repeatedly saying they'd fixed everything when they had not (see the meme about LLMs playing the role of HAL in 2001, failing to open the pod bay doors). The other would have been a mistake in my favour: the Ombudsman decided on less than I asked for, whereas an LLM would likely have agreed with me more than it should have.


This is (a) wildly beyond reasonable expectations for open source, (b) a massive pain to maintain, and (c) not even the biggest timewaster of python, which is the packaging "system".

> not even the biggest timewaster of python, which is the packaging "system".

For frequent, short-running scripts: start-up time! Every import has to scan a billion different directories for where the module might live, even for standard modules included with the interpreter.
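
A rough way to see that cost for yourself (just a sketch; the module names are arbitrary examples, and `python -X importtime` gives a much more detailed per-module breakdown):

    import sys
    import time

    start = time.perf_counter()
    # Even stdlib modules go through the sys.path search and pay module init cost.
    import argparse
    import json
    import http.client
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"imports took {elapsed_ms:.1f} ms across {len(sys.path)} sys.path entries")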


In the near future we will use lazy imports :) https://peps.python.org/pep-0810/

This can't come soon enough. Python is great for CLIs until you build something complex and a simple --help takes seconds. It's not something easily worked around without making your code very ugly.

It's not that hard to handle --help and --version separately before importing anything.
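
A minimal sketch of that pattern (the program name and usage text are made up): look at sys.argv before any heavy imports, and only fall through to the full argparse setup when real work is needed.

    import sys

    __version__ = "0.1.0"

    def main(argv=None):
        argv = sys.argv[1:] if argv is None else argv

        # Fast path: answer --help/--version from argv alone,
        # before any expensive imports are triggered.
        if "--version" in argv:
            print(__version__)
            return 0
        if not argv or "-h" in argv or "--help" in argv:
            print("usage: mycli [--help] [--version] command [args...]")
            return 0

        # Slow path: only now pay for the heavy imports.
        import argparse
        # import pandas, requests, ...  # hypothetical heavy dependencies
        parser = argparse.ArgumentParser(prog="mycli")
        parser.add_argument("command")
        args = parser.parse_args(argv)
        print(f"running {args.command}")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The obvious downside is that the hand-written usage line can drift out of sync with the real argparse definition, which is presumably the "making your code very ugly" complaint above.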

You could, but it doesn't really seem all that useful? I mean, when are you ever going to run this in a hot loop?

> [...] not even the biggest timewaster of python, which is the packaging "system".

The new `uv` is making good progress there.


The app stores already block porn on their own initiative.

> I also wonder why smut literature (the best selling category of books on Amazon) seems to get a free pass.

It's popular with women and basically invisible to men.


There are plenty of NSFW oriented apps, especially in the AI category.

> It's popular with women and basically invisible to men.

Mostly true, and this might be a reflection of reality, but certainly not a justification.


And being long-form written text, likely invisible to minors as well.

It's extremely visible to teenagers. They're one of the main audiences for booktok.

The Good Soldier Švejk, working at the FBI, decided to follow an illegal order as badly as possible.

This is like Feynman's method for solving hard scientific problems: write down the question, think very hard, write down the answer.

It doesn't necessarily translate to people who are less brilliant.


Yeah, "Step 1: draw 2 circles. Step 2: draw the rest of the fucking owl"

Very good metaphor. I'm going to use that in the future. It even has rows and columns.

Except the spreadsheet is a really accessible technology that's been cloned, while the critical problem with FPGAs is the proprietary tooling. This is the same reason NVIDIA made a gazillion dollars by turning GPUs into general-purpose compute: a proper API, CUDA.

