Hacker News | slhck's comments

Yeah, it's a weird comparison to be making. It all depends on how they selected the quality (VMAF) target during encoding. You could easily end up with different results had they, say, decided to keep the bandwidth the same but improve quality using AV1.

This VMAF comparison is to be taken with a grain of salt. Netflix's primary goal was to reduce bitrate consumption, as can be seen, while roughly keeping the same nominal quality of the stream. This means that, ignoring all other factors and limitations of H.264 at higher resolutions, VMAF scores across all their streaming sessions should be roughly the same, or in a comparable range, because that's what they're optimizing for. (See the Dynamic Optimizer framework they publicly posted a few years ago.)
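As a rough illustration, this is how such a VMAF score is typically measured with FFmpeg's libvmaf filter — a sketch only, assuming an ffmpeg build with --enable-libvmaf; the file names are placeholders:

```shell
# Compare a distorted encode (first input) against its reference
# (second input); the aggregate VMAF score is printed to the log.
# Requires ffmpeg compiled with --enable-libvmaf.
ffmpeg -i distorted.mp4 -i reference.mp4 -lavfi libvmaf -f null -
```

If the encoder is tuned to hit a fixed VMAF target, these per-title scores will cluster around that target regardless of codec, which is why comparing codecs by VMAF alone says little here.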

Still impressive numbers, of course.


Same experience here – it seems I have to specifically tell it to use the "X skill" to trigger it reliably. I guess with all the different rules set up for Claude to follow, it needs that particular word to draw its attention to the required skill.

Ditto, I also find it'll invariably decide to disregard the CLAUDE.md again and produce a load of crap I didn't really ask it for.

The Austrian consumer protection association has just released results on tests of headphones: https://vki.at/Presse/PA-Kopfhoerer-2025 (German article), and found that 40% contained possibly harmful chemicals, including the parts that touch your body.

It's wild. I have children, and I spent a great deal of time researching foods, bottles, toys, etc., but I never would have thought to doubt the (big brand) consumer electronics that we all use every day.


That article is a classic example of a prevalent error in this line of commentary: indiscriminately taking a "possibly harmful chemical", translating it to a totally different context (say, touching it instead of eating it), and then assuming that any interaction with the chemical is therefore bad.

The article specifically calls out phthalates and bisphenols (both common in plastics), but there's absolutely no reason to believe -- unless you're regularly eating your headphones -- that this is a problem.


Totally agree with you - dermal exposure is a different pathway, and that could be more clearly mentioned. The mere presence of these materials is not automatically a hazard (but they do state that!). I also wouldn't automatically assume that the products marked as red are not safe to use. For me it's just interesting to see that some manufacturers can do without those components, or with less of them.


Well, plastics generally require plasticizers. The Bisphenol A kerfuffle has largely resulted in the use of different plasticizers, which has in turn caused the sort of people who are fearful of chemicals to expand their definitions of “harmful” to include those new chemicals. It’s a never-ending cycle, but the evidence never really gets any better.


> unless you're regularly eating your headphones

Bunnies everywhere put on notice.


Children eat and chew on lots of things you’d never imagine, even up into elementary and middle school years. A smaller number of adults do too.


So don’t give them your headphones to chew on.

Let me just save you the effort of further rounds of responses here: if you chew on plastic, you will be exposed to the chemicals in plastic. If you’re truly worried about this, don’t buy plastic items.


Right, and I agree and I don't. I do my best to explain to my kids that they should never put anything in their mouth that isn't made to be eaten.

But this should be considered when we make blanket claims that it's okay because we're just touching them, not eating them. We have to think about how people actually behave, not ideal usage.

By the way, headphones are required in elementary school here and are used at least an hour a day.


Don't knock Tide Pods 'til you've tried them.


While that would be nicer from an end-user perspective, it would be hard to maintain within FFmpeg itself. Consider the velocity of the whisper-cpp project. I'm sure that – just like with filters such as vmaf, which also require building a dependency and downloading a model – precompiled versions will become available for novice users to download directly. Especially considering whisper-cpp is MIT-licensed.
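For the curious, a hedged sketch of what invoking FFmpeg's whisper audio filter might look like once you have a build with whisper-cpp support — the option names and model filename here are assumptions based on whisper.cpp conventions and may differ in your FFmpeg version:

```shell
# Transcribe the audio track of a video to SRT subtitles.
# Assumes ffmpeg was built with whisper support and that a
# ggml model file was downloaded from the whisper.cpp project.
ffmpeg -i input.mp4 -vn \
  -af "whisper=model=ggml-base.en.bin:destination=subs.srt:format=srt" \
  -f null -
```

The need to fetch a model file separately is exactly the vmaf-style workflow mentioned above.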


Yeah, look at what https://x.com/badlogicgames has done porting an engine with the help of Claude Code. He's set up a TODO loop to perform this: https://github.com/badlogic/claude-commands – background blog article: https://mariozechner.at/posts/2025-06-02-prompts-are-code/


Mario Zechner's post looks very promising.

We may finally be able to deal with devs who create lock-in through ultra-complex-syntax languages much more efficiently, using LLMs.

I already have some ideas for some target C++ code to port to C99+.


The todo and porting "programs" are unrelated. The blog post shows the full porting pipeline.


Note that it's not really "cleaned up" insofar as there is a uv cache folder that will grow bigger over time as you keep using that feature.


True. It's a good idea to periodically run:

  uv cache clean
Or, if you want to specifically clean just jupyter:

  uv cache clean jupyter


If anyone else is curious:

  % cd $(uv cache dir); find . -type f | wc -l; du -hs .
  234495
  16G .


Some thoughts based on my anecdotal experience — but it depends on the price you are willing to pay.

You can get quite good webcams for $100–300 (from Insta360, Obsbot, maybe Logitech …) which work out of the box over USB-C and have mostly okayish software that supports changing things like brightness, white balance, etc. These however still have small sensors and cannot achieve a good shallow depth of field (bokeh). Running them at higher sensitivity (ISO), e.g. in darker environments, inevitably causes noise. But if you just want to participate in meetings, it does not matter. I had a Logitech StreamCam and upgraded to an Insta360 Link 2C, which is definitely much better but still not on par with a proper camera. You should at least get a good key light or ring light.

The next step up would be mirrorless cameras with built-in or interchangeable lenses made for vlogging, which can also be used like a webcam. They have much bigger sensors and better image quality at a price point of $400–1000, e.g. the Sony ZV-E10 II, Fuji X-M5, Canon EOS M50 Mark II, … Most of them claim webcam support via the provided software. Fuji's software is bad though, so I wouldn't recommend it on a Mac; I can't speak to the other ones. A further benefit is that they have a flip screen that you can use for better framing.

If you have a camera with an HDMI output that delivers a clean signal (without any overlays), you can also buy an HDMI-to-USB capture device and feed that into OBS, which lets you set up a virtual webcam. There are cheap no-name USB capture cards that produce mediocre images, and more top-of-the-line ones like the Elgato Cam Link. This is the most device-independent option, and you're also not dependent on any vendor's proprietary software.
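As a sketch, on Linux you could sanity-check such a capture card before pointing OBS at it — the device path is a placeholder, and `v4l2-ctl` comes from the v4l-utils package:

```shell
# List the resolutions and pixel formats the capture card exposes
# as a V4L2 device (the /dev/video0 path may differ on your system).
v4l2-ctl --device /dev/video0 --list-formats-ext

# Preview the incoming HDMI signal directly, bypassing OBS,
# to verify the card delivers a clean 1080p feed.
ffplay -f v4l2 -framerate 30 -video_size 1920x1080 /dev/video0
```

If the preview looks right, adding the same device as a "Video Capture Device" source in OBS should just work.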


Thank you for a comprehensive answer, I appreciate the time you put into it.


Since when is the WordPress ecosystem this … bad? I built WP websites 10–15 years ago, and it was a quite straightforward experience back then. These days, there seems to be no way around themes and plugins that all have very limited free versions and constantly nag you about upgrading to the pro version, in a million different styles of banners and popups. Hosting providers have made it easier to deploy WordPress in a one-click manner, but anything beyond a basic page (sending email, backups, contact forms) already turns into a nightmare. No thanks!


I gave it a bunch of technical papers and standards, and while it makes up stuff that just isn't true, that is to be expected from the underlying system. This can be fixed, e.g., with another internal round of fact-checking or manual annotations.

What really stands out, I think, is how it could allow researchers who have trouble communicating publicly to find new ways to express themselves. I listened to the podcast about a topic I've been researching (and publishing/speaking about) for more than 10 years, and it still gave me some new talking points and illustrative examples that'd be really helpful in conversations with people unfamiliar with the research.

And while that could probably also be done in a purely text-based manner with all of the SOTA LLMs, it's much more engaging to listen to it embedded within a conversation.


The underlying NotebookLM does better at this: each claim in the note cites a block of text in the source, so it's engineered to be more factually grounded.

I would not be surprised if the second pass to generate the podcast style loses some of this fidelity.

