Hacker News | pornel's comments

1. Amdahl's law

2. That's a language feature too. Writing non-trivial multi-core programs in C or C++ takes a lot of effort and diligence. It's risky, and subtle mistakes can make programs chronically unstable, so we've had decades of programmers finding excuses for why a single thread is just fine, and people can find other uses for the remaining cores. OTOH Rust has enough safety guarantees and high-level abstractions that people can slap .par_iter() on their weekend project, and it will work.
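For illustration, a minimal sketch of the kind of safe parallelism this alludes to. With the rayon crate it really is a one-line change to `data.par_iter()`; the stdlib-only version below (all names are illustrative) splits the work by hand, and the borrow checker guarantees the two threads can't race on the shared slice.

```rust
// Sum of squares of 1..=1000, split across two scoped threads.
// Safe Rust won't compile this if the threads could alias mutably.
fn main() {
    let data: Vec<u64> = (1..=1_000).collect();
    let (left, right) = data.split_at(data.len() / 2);

    let sum: u64 = std::thread::scope(|s| {
        let a = s.spawn(|| left.iter().map(|x| x * x).sum::<u64>());
        let b = s.spawn(|| right.iter().map(|x| x * x).sum::<u64>());
        a.join().unwrap() + b.join().unwrap()
    });

    println!("{sum}"); // 333833500
}
```

With rayon, the whole body collapses to `data.par_iter().map(|x| x * x).sum()`.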


If you're a junior now and think your code is worth stealing, it's probably only a matter of time before you gain more experience and instead feel sorry for everyone who copied your earlier code (please don't take it personally, this is not a diss; it's typical for programmers to grow, try more approaches, and see better solutions in hindsight).

The lazy cheaters only cheat themselves out of getting experience and learning by writing the code themselves. It doesn't even matter whether you publish your code or not, because they'll just steal from someone else, or more likely mindlessly copypaste AI slop instead. If someone can't write non-trivial code themselves to begin with, they won't be able to properly extend and maintain it either, so their ripoff project won't be successful.

Additionally, you'll find that most programmers don't want to even look at your code. It feels harder and less fun to understand someone else's code than to write one's own. Everyone thinks their own solution is the best: it's more clever and has more features than the primitive toys other people wrote, while at the same time it's simpler and more focused than the overcomplicated bloat other people wrote.


Fil-C will crash on memory corruption too. In fact, its main advantage is crashing sooner.

All the quick fixes for C that don't require code rewrites boil down to crashing. They don't make your C code less reliable; they just make the unreliability more visible.

To me, Fil-C is most suited to be used during development and testing. In production you can use other sandboxing/hardening solutions that have lower overhead, after hopefully shaking out most of the bugs with Fil-C.


The great thing about such crashes is that, if you have coredumps enabled, you can just load the crashed binary into GDB, type 'where', and most likely figure out the actual problem immediately by inspecting the call stack. This was/is my go-to method for finding really hard-to-reproduce bugs.
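A hypothetical session showing that workflow (binary and file names are illustrative; where the core file actually lands depends on the system's core_pattern setting):

```
$ ulimit -c unlimited      # allow core files in this shell
$ ./myprog                 # crashes and dumps core
$ gdb ./myprog core
(gdb) where                # print the call stack at the point of the crash
```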


I think the issue with this approach is it’s perfectly reasonable in Fil-C to never call `free` because the GC will GC. So if you develop on Fil-C, you may be leaking memory if you run in production with Yolo-C.


Fil-C uses `free()` to mark memory as no longer valid, so it is important to keep using manual memory management to let Fil-C catch UAF bugs (which are likely symptoms of logic bugs, so you'd want to catch them anyway).

The whole point of Fil-C is C compatibility. If you treat it as a deployment target in its own right, it's a waste: you get the overhead of a GC language, but with the clunkiness and tedium of C instead of the nicer language features that ground-up GC languages have.


I agree with you but jitl has a point: implicit reliance on the GC could creep in and you might not notice it until you switch back to regular C.


Fil-C should have an (on-by-default) mode where collecting an allocation that was never freed is a crash, if it doesn't already.


It's not that simple, since some object allocations legitimately go unfreed.

For example, Fil-C lifts all escaping locals to the heap, but doesn't free them.


It depends how much the C software is "done" vs being updated and extended. Some legacy projects need a rewrite/rearchitecting anyway (even well-written battle-tested code may stop meeting requirements simply due to the world changing around it).

It also doesn't have to be a complete all-at-once rewrite. Plain C can easily co-exist with other languages, and you can gradually replace it by only writing new code in another language.


Lack of DC fast charging makes the range even more limiting. It takes 2.7 hours to add another 150 miles. Modern EVs can add 150 miles of range in 10-15 minutes.


It's a recreational vehicle for booting around to and from the country club and out to the fancy places that European gentlemen go on afternoon Sunday drives to impress their mistresses.

Oh that reminds me, I should go check my lottery ticket.


They’re a Dutch company.

You can drive from just about any point in the Netherlands to any other in less than 300km.

For a weekend toy in the densely populated parts of Europe the range is fine.


Take a look at the video of the car driving. I don't think the people who buy this are worried about range anxiety.


Modern EVs also have airbags. This is just a toy for the wealthy, like a golf cart.


Debian's tooling for packaging Cargo crates has probably gotten better, so this isn't as daunting as it used to be.

Another likely factor is understanding that the unit of "package" is different in Rust/Cargo than it traditionally is in C and Debian, so 130 crates aren't as much code as 130 Debian packages would have been.

The same amount of code, from the same number of authors, ends up split into more, smaller packages (crates) in Rust. Where a C project would split itself into components internally, in a way that's invisible from the outside (multiple `.h` files, subdirectories, or sub-makefiles), Rust/Cargo projects split themselves into crates (in a Cargo workspace and/or a monorepo), which happen to be externally visible as individual packages. These typically aren't full-size dependencies, just separate compilation units. It's like cutting a pizza into 4 or 16 slices: you get more slices, but that doesn't make the pizza bigger.
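As a sketch (member names are hypothetical), a single repository's top-level `Cargo.toml` can declare a workspace whose members each show up externally as a separate crate:

```toml
# One codebase, one team — but three externally visible "packages".
[workspace]
members = ["core", "parser", "cli"]
```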

From security perspective, I've found that splitting large projects into smaller packages actually helps review the code. Each sub-package is more focused on one goal, with a smaller public API, so it's easier to see if it's doing what it claims to than if it was a part of a monolith with a larger internal API and more state.


The GCC backend makes Rust for Dreamcast work: https://www.dreamcast.rs/

(not relevant to Debian, but cool that it's already possible)


You don't really need 8K for gaming, but upscaling and frame generation have made game rendering resolution and display resolution almost independent.


And if all else fails, 8K means you can fall back to 4K, 1440p, or 1080p with perfect integer scaling (2x, 3x, and 4x respectively, since each divides 7680x4320 evenly).


Except that the hardware doesn’t necessarily offer perfect integer scaling. Oftentimes, it only provides blurry interpolation that looks less sharp than a corresponding native-resolution display.


The monitor may or may not offer perfect scaling, but at least on Windows the GPU drivers can do it on their side so the monitor receives a native resolution signal that's already pixel doubled correctly.


Most modern games already have built-in scaling options. You can set the game to run at your screen’s native resolution but have it do the rendering at a different scale factor. Good games can even render the HUD at native resolution and the graphics at a scaled resolution.

Modern OSes also scale fine.

It’s really not an issue.


Games are not what I had in mind. Last time I checked, most graphics drivers didn’t support true integer scaling (i.e. nearest-neighbor, no interpolation).


> most graphics drivers didn’t support true integer scaling

https://www.nvidia.com/content/Control-Panel-Help/vLatest/en...

https://www.amd.com/en/resources/support-articles/faqs/DH3-0...

https://www.intel.com/content/www/us/en/support/articles/000...

I don't know what the situation is on Mac and Linux, but all of the Windows drivers offer it.


With very high-PPI displays, gamma-corrected interpolation scaling is far better than nearest-neighbor scaling.

The idea is to make the pixels so small that your eyes can't resolve individual pixels anyway. Interpolation appears correct because you're viewing it through a low-pass filter (the physical limit of your eyes).

Reverting to nearest neighbor at high PPI would introduce new artifacts because the aliasing effects would create unpleasant and unnatural frequencies in the image.

Most modern GPU drivers (Nvidia's in particular) will do fixed-multiple scaling if that's what you want. Nearest neighbor is not good, though.


Extra "wasted" capacity has many benefits for EV battery packs.

It allows distributing the load across more cells, and using cells with a lower C-rating, which typically have better energy density and longer lifespan.

Distributing the load reduces energy loss during charging and makes cooling less demanding, while allowing higher total charging power, which means adding more range per minute. (For example, the same 150 kW works each cell of a 60 kWh pack at 2.5C, but each cell of a 100 kWh pack at only 1.5C.)

The excess capacity also makes it easier to avoid fully charging and discharging cells, which prolongs the life of NMC cells.

(but I agree that cars are an inefficient mode of transport in general)


The code needs to pass the integrity checks of the safe Rust subset, which is a different challenge from writing dangerous code without feedback.

