
It’s a funny comment because those formats only existed to be proprietary. Sony learnt the wrong lessons from CD, which they co-developed with Philips. They saw the success of that format and wished they were getting royalties on the underlying tech.

They then wasted billions of dollars and decades on formats other companies wouldn’t touch because they had fees attached, MiniDisc being a prime example. It sounded worse than CD, cost the same, and its recording feature was something people already had with cassette.


True, but remember the American obsession with acceleration is very different from actual speed. Driving involves moving the car through corners and bends, and many less powerful cars will beat faster ones point to point on handling.

Then there’s stopping. The less time you spend slowing down, the faster you go, and lighter, better-handling vehicles win there too.

That F-150 will get its ass handed to it by an older sports car through a city.

Drag racing is a tiny niche, it’s not the true measure of automotive performance any more than horsepower is.


Two points.

You can’t say someone is being over-cautious without knowing their location. It’s easier to supplement vitamin D than to cure skin cancer. If the local population looks different to you, there is reason to pause.

Secondly, Asian people, East and South, are sun adapted. East Asians may not be dark, but they, like Northern Europeans, were selected for their current appearance by their environment. Just because they are not brown does not mean they aren’t sun adapted to their locale in ways visitors are not.


I have a personal corpus of letters between my grandparents in WW2, my grandfather fighting in Europe and my grandmother in England. The ability of Claude and ChatGPT to transcribe them is extremely impressive, though I haven’t worked on them in months, so this reflects older models. At that time neither system could properly organize pages, and ChatGPT would sometimes skip a paragraph.


I've also been working on half a dozen crates of old family letters. ChatGPT does well with them and is especially good at summarizing the letters. Unfortunately, all the output still has to be verified because it hallucinates words and phrases and drops lines here and there. So at this point, I still transcribe them by hand, because the verification process is actually more tiresome than just typing them up in the first place. Maybe I should just have ChatGPT verify MY transcriptions instead.


It helps when you can see the confidence of each token, which downloadable weights usually give you. Then whenever you (or your software) detect a low-confidence token, run over that section multiple times to generate alternatives, and either go with the highest-confidence one or manually review the suggestions. Easier than having to manually transcribe those parts, at least.
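With locally-run weights the per-token probabilities are exposed by most inference libraries, so the flagging-and-reranking logic described above is only a few lines. A minimal sketch using mock (token, logprob) pairs rather than a real model, since the exact shape of the library output varies:

```python
import math

def flag_low_confidence(tokens, threshold=0.5):
    """Return indices of tokens whose probability falls below threshold.

    `tokens` is a list of (text, logprob) pairs, the format most local
    inference libraries can produce (names here are illustrative).
    """
    return [i for i, (_, lp) in enumerate(tokens) if math.exp(lp) < threshold]

def pick_best(alternatives):
    """Given several re-transcriptions of the same flagged span,
    keep the one whose average token probability is highest."""
    def avg_prob(span):
        return sum(math.exp(lp) for _, lp in span) / len(span)
    return max(alternatives, key=avg_prob)

# Mock model output for a transcribed phrase:
tokens = [("my", -0.05), ("dearest", -1.2), ("Margaret", -0.02)]
print(flag_low_confidence(tokens))  # → [1] ("dearest" is uncertain)
```

A real pipeline would then re-run the model over the page region around each flagged index and feed the candidate spans to `pick_best`, or queue them for manual review.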


Is there any way to do this with the frontier LLMs?


Ask them to mark low confidence words.


Do they actually have access to that info "in-band"? I would guess not. OTOH it should be straightforward for the LLM program to report this -- someone else commented that you can do this when running your own LLM locally, but I guess commercial providers have incentives not to make this info available.


Naturally, their "confidence" is represented as activations in layers close to the output, so they might be able to use it. Research ([0], [1], [2], [3]) shows that the results of prompting LLMs to express their confidence correlate with their accuracy. The models tend to be overconfident, but in my anecdotal experience the latest models are passably good at judging their own confidence.
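One lightweight way to act on self-reported confidence is to ask the model to tag uncertain words inline and strip the tags out afterwards. The `[?]` marker convention below is purely an assumption for illustration, not anything the providers standardize:

```python
import re

# A prompt suffix you might append to a transcription request (illustrative):
PROMPT_HINT = "Mark any word you are unsure of with a trailing [?]."

def split_confident(transcript):
    """Separate a transcript into clean text plus the flagged words,
    assuming the model followed the [?] convention above."""
    flagged = re.findall(r"(\S+)\s*\[\?\]", transcript)
    clean = re.sub(r"\s*\[\?\]", "", transcript)
    return clean, flagged

clean, flagged = split_confident("We marched through Caen [?] on Tuesday")
print(flagged)  # → ['Caen']
```

The flagged words can then be checked against the scanned page by hand, which is far less work than verifying every word.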

[0] https://ieeexplore.ieee.org/abstract/document/10832237

[1] https://arxiv.org/abs/2412.14737

[2] https://arxiv.org/abs/2509.25532

[3] https://arxiv.org/abs/2510.10913


interesting... I'll give that a shot


It used to be that the answer was logprobs, but it seems that is no longer available.


Always seemed strange to me that personal correspondence between two now-dead people is interesting. But I suppose that is just my point of view. You could say the same thing about reading fiction, I guess.


Why on earth wouldn't it be interesting? Do you only care about your own life?


That’s newer than Star Wars and isn’t a huge piece of IP. To the estate, a few book sales make a difference.


As someone who has been involved in dating sites, you have no idea how standard this is.

Until Match bought up the industry, it was pretty much impossible to launch a dating site with real profiles. The industry standard was to post fakes. Real users would then receive stock responses to keep them around until real people showed up. No AI needed, just a bag of standard emails. Guys are incredibly dumb and ego-driven. It’s very easy to get them to think a beautiful stranger is into them.


You are morally obliged to pay taxes where you use services and rely on infrastructure.


This is also the result of the failure of the third most expensive technical project of the war: the Norden bombsight. At the outset of the war it was believed that precision bombing from the air was possible, and a vast sum was spent developing a sight to allow accurate targeting to that end.

It simply didn’t work, and the result was a return to area bombing and the massive civilian casualties that followed.

Even worse, the bombing did almost nothing to lessen enemy output. In fact, over the course of the war Germany and Japan managed to constantly increase their output of weapons, for various reasons. One being that having a large industrial base pre-war meant they could always shift production: at the start you are making bullets in a bullet factory; by the end you are making bullets in what was a sewing machine factory.

We still like to believe accurate weapons make for “clean” conflicts but we never seem able to resist area bombing.


This is correct. I have new-old-stock Rotrings made in Germany. They sell for a pretty penny online. People who love the pens know the modern production isn’t the same.


Good luck printing anything at 870 dpi.

This whole article is a bit confused. Image quality isn’t about the ability to discern detail. Many people cannot see the detail in their 4K TVs or in a photo; it’s about not seeing visible pixelation.

Those aren’t the same thing. Visible pixelation is connected to contrast and color depth. That’s why a perfectly smooth gradient appears as bands of color in poorly encoded images and video: there’s no detail in a gradient at all. The pixelation is due to a lack of color information.

On top of that, printers use different numbers of colors (from 3 to 11 or more) and different ways of sizing and layering dots (unless you are using continuous-tone printers, which are very rare nowadays).

Then you have to add in the ability to plausibly up-res images using modern algorithms. Whereas before we were always stretching the data we had, now, by adding false detail using ML, we can scale up a significant amount without a visible reduction in quality. That can be very effective at removing pixelation while preserving the original image content.

So in reality there are no hard and fast rules; it’s totally image- and output-dependent.
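For what it's worth, the one piece of this that is just arithmetic is the relationship between print size, dpi, and pixel dimensions; everything else (dot shaping, ink count, ML upscaling) is where "it depends" takes over. A quick sketch:

```python
def pixels_needed(width_in, height_in, dpi):
    """Pixel dimensions required to print at a given physical size and dpi."""
    return round(width_in * dpi), round(height_in * dpi)

def upscale_factor(src_px, width_in, dpi):
    """How much an image's width must be enlarged to hit the target dpi."""
    return (width_in * dpi) / src_px

# An 8x10" print at 300 dpi needs a 2400x3000 image...
print(pixels_needed(8, 10, 300))     # → (2400, 3000)
# ...so a 1200-px-wide source would need 2x upscaling.
print(upscale_factor(1200, 8, 300))  # → 2.0
```

Whether that 2x upscale looks acceptable is exactly the part the arithmetic can't tell you: it depends on the image content and the upscaler.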

