ameliaquining's comments | Hacker News

I suspect that this would result in a lot of .unwrap() calls or equivalent, and people would treat them as line noise and find them annoying.

An approach that I think would have most of the same correctness benefits as a proper sum type while being more ergonomic: Have two float types, one that can represent any float and one that can represent only finite floats. Floating-point operations return a finite float if all operands are of finite-float type, or an arbitrary float if any operand is of arbitrary-float type. If all operands are of finite-float type but the return value is infinity or NaN, the program panics or equivalent.

(A slightly more out-there extension of this idea: The finite-float type also can't represent negative zero. Any operation on finite-float-typed operands that would return negative zero returns positive zero instead. This means that finite floats obey the substitution property, and (as a minor added bonus) can be compared for equality by a simple bitwise comparison. It's possible that this idea is too weird, though, and there might be footguns in the case where you convert a finite float to an arbitrary one.)
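
Roughly, as a library sketch in Rust (the Finite name and the panic-on-overflow behavior are just for illustration; a real language feature would build the checks into the primitive operations):

    use std::ops::Add;

    /// A float that is guaranteed to be finite (no infinities, no NaN).
    #[derive(Clone, Copy, Debug, PartialEq, PartialOrd)]
    struct Finite(f64);

    impl Finite {
        /// Fallible conversion from an arbitrary float; this is the only seam
        /// where the caller has to handle the non-finite case explicitly.
        fn new(x: f64) -> Option<Finite> {
            // Normalizing -0.0 to 0.0 here would implement the parenthetical
            // extension above, giving Finite the substitution property.
            if x.is_finite() { Some(Finite(x)) } else { None }
        }

        fn get(self) -> f64 { self.0 }
    }

    // finite + finite -> finite, panicking if the result overflows to
    // infinity (NaN can't arise from adding two finite floats).
    impl Add for Finite {
        type Output = Finite;
        fn add(self, rhs: Finite) -> Finite {
            let r = self.0 + rhs.0;
            assert!(r.is_finite(), "finite-float addition overflowed");
            Finite(r)
        }
    }

    // finite + arbitrary -> arbitrary, with no check.
    impl Add<f64> for Finite {
        type Output = f64;
        fn add(self, rhs: f64) -> f64 { self.0 + rhs }
    }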


> Have two float types, one that can represent any float and one that can represent only finite floats. Floating-point operations return a finite float if all operands are of finite-float type, or an arbitrary float if any operand is of arbitrary-float type. If all operands are of finite-float type but the return value is infinity or NaN, the program panics or equivalent.

I suppose there's precedent of sorts in signaling NaNs (and NaNs in general, since FPUs need to account for payloads), but I don't know how much software actually makes use of sNaNs/payloads, nor how those features work in GPUs/super-performance-sensitive code.

I also feel that, as far as Rust goes, the NonZero<T> types point away from the described finite/arbitrary float scheme, since they don't implement "regular" arithmetic operations that could result in 0 (there are unsafe unchecked operations and explicit checked operations, but no +, -, etc.).
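
For example (illustrative, with NonZeroU32 standing in for the general NonZero<T>):

    use std::num::NonZeroU32;

    fn main() {
        let a = NonZeroU32::new(3).unwrap();
        // There is no `impl Add for NonZeroU32`, so `a + a` doesn't compile.
        // The arithmetic that is provided makes the failure case explicit:
        let b: Option<NonZeroU32> = a.checked_add(5); // None on overflow
        assert_eq!(b, NonZeroU32::new(8));
    }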


Rust's NonZero basically exists only to enable layout optimizations (e.g., Option<NonZero<usize>> takes up only one word of memory, because the all-zero bit pattern represents None). It's not particularly aiming to be used pervasively to improve correctness.
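
You can check the layout claim directly:

    use std::mem::size_of;
    use std::num::NonZeroUsize;

    fn main() {
        // The all-zero bit pattern is free to stand for `None`, so the
        // Option costs nothing extra:
        assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());
        // A plain usize has no spare bit pattern, so its Option needs a tag:
        assert!(size_of::<Option<usize>>() > size_of::<usize>());
    }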

The key disanalogy between NonZero and the "finite float" idea is that zero comes up all the time in basically every kind of math, so you can't just use NonZero everywhere in your code; you have to constantly deal with the seam converting between the two types, which is the most unwieldy part of the scheme. By contrast, in many programs infinity and NaN are never expected to come up, and if they do it's a bug, so if you're in that situation you can just use the finite-float type throughout.


> By contrast, in many programs infinity and NaN are never expected to come up, and if they do it's a bug, so if you're in that situation you can just use the finite-float type throughout.

I suppose that's a fair point. I guess a better analogy might be to operations on normal integer types, where overflow is considered an error but that is not reflected in default operator function signatures.
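
Concretely, in Rust (just an illustration of the analogy):

    fn main() {
        let x = i32::MAX;

        // The default operator claims `i32 + i32 -> i32`; overflow panics in
        // debug builds and (by default) wraps in release builds, but neither
        // outcome is visible in the signature.
        // let y = x + 1;

        // The possibility of overflow only shows up in the explicitly
        // checked form:
        let z: Option<i32> = x.checked_add(1);
        assert_eq!(z, None);
    }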

I do want to circle back a bit and say that my mention of signaling NaNs would probably have been better served by a discussion of floating point exceptions more generally. In particular, I feel like existing IEEE floating point technically supports something like what you propose via hardware floating point exceptions and/or sNaNs, but I don't know how well those capabilities are actually supported (e.g., from what I remember the C++ interface for dealing with that kind of thing was clunky at best). I want to say that lifting those semantics into programming languages might interfere with normally desirable optimizations as well (e.g., effectively adding a branch after floating point operations might interfere with vectorization), though I suppose Rust could always pull what it did with integer overflow and turn off checks in release mode, as much as I dislike that decision.


From the article: "However, the ruling does not bar all uses of the io name, only marketing and selling hardware similar to iyO's."

I don't think this is really intended for container runtimes. You might be able to make it work in a square-peg-round-hole sort of way but the core use case is different.

If the application in the container wants to add more restrictive rules then it should be allowed to. But it should not be able to mess with the existing rules imposed by the container manager. This would be the ideal outcome.

There is nothing to do here. Landlock already guarantees that you can't undo rules that were already applied. Your application can further restrict itself, but it can't unrestrict itself.

You just need the container manager to not block the Landlock system calls.

The threat model here is not malware, but code-execution vulnerabilities in legitimate apps. If you're developing an application, you might use this API to deny yourself privileges that you know you won't need, so that if an attacker finds a code-execution vulnerability in your app, they can't use it to take over the user's machine.

It is not a suitable technology for sandboxing a program that wasn't designed to be sandboxed in this way. For that, you need one of the other technologies listed in the article.
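
For illustration, a rough sketch of that self-restriction pattern using the landlock crate; I'm writing the API names from memory, so they may not match the current crate version exactly:

    use landlock::{
        path_beneath_rules, Access, AccessFs, Ruleset, RulesetAttr,
        RulesetCreatedAttr, ABI,
    };

    // Call this early in your own program. Afterwards, the process can only
    // read under the allow-listed directories, even if an attacker later
    // gains code execution inside it.
    fn drop_filesystem_access() -> Result<(), Box<dyn std::error::Error>> {
        let abi = ABI::V1;
        Ruleset::default()
            .handle_access(AccessFs::from_all(abi))? // deny all FS access by default
            .create()?
            .add_rules(path_beneath_rules(
                &["/usr", "/etc"], // example allow-list
                AccessFs::from_read(abi),
            ))?
            .restrict_self()?; // irreversible for this process and its children
        Ok(())
    }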


Stack Overflow users with 10,000 or more reputation can see deleted posts. (Only if they're logged in, of course.)

Semantically the two are kind of opposite; the similarity is that they're the lowest-syntax way in their respective languages to ignore the possibility of a missing value, and so get overused in situations where that possibility should not be ignored, leading to bugs.

This is exactly what I was trying to communicate in the article. You've put it much more clearly here, though. Thank you :)

If your type definitions are airtight then this problem doesn't come up, but sometimes, for whatever reason which may not be entirely within your control, they aren't. Narrowing and failing early is precisely what the article advises doing.

There isn't a built-in assert function that behaves that way; you would need to either write it or import it.

I'm not a web dev. Are people really that opposed to writing a function which would solve problems in their project? It's two lines long...

DeVault's controversial takes are pretty similar to the ones that Kelley expresses in this post, so I don't see much misalignment here.

Yes, though language support runs behind the main Go compiler. https://go.dev/doc/install/gccgo
