jmillikin's comments | Hacker News

  > I’m all for multiple backends but there should be only 1 frontend. That’s
  > why I hope gccrs remains forever a research project - it’s useful to help
  > the Rust language people find holes in the spec but if it ever escapes the
  > lab expect Rust to pick up C++ disease.
An important difference between Rust and C++ is that Rust maintains a distinction between stable and unstable features, with unstable features requiring a special toolchain and compiler pragma to use. The gccrs developers have said on record that they want to avoid creating a GNU dialect of Rust, so presumably their plan is to either have no gccrs-specific features at all, or to put such features behind an unstable #![feature] pragma.
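
For readers unfamiliar with the mechanism, here's a minimal sketch of how that opt-in works in rustc today (using portable_simd, a real unstable feature as of this writing; the rest of the snippet is just illustration):

  // Compiles only on a nightly toolchain. Stable rustc rejects the crate with
  // error[E0554]: `#![feature]` may not be used on the stable release channel.
  #![feature(portable_simd)]

  use std::simd::f32x4;

  fn main() {
      let a = f32x4::from_array([1.0, 2.0, 3.0, 4.0]);
      let b = f32x4::splat(0.5);
      println!("{:?}", (a * b).to_array());
  }

A gccrs-specific extension gated the same way would at least be clearly marked as unstable, rather than silently turning into a "GNU Rust" dialect.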

  > Rust with a gcc backend is fine for when you want gcc platform support
  > - a duplicate frontend with its own quirks serves no purpose.
A GCC-based Rust frontend would reduce the friction needed to adopt Rust in existing large projects. The Linux kernel is a great example: many of the Linux kernel devs don't want a hard dependency on LLVM, so they're not willing to accept Rust into their part of the tree until GCC can compile it.


Dialects are created not just by different feature sets, but also by different interpretations of the spec and by bugs. Similarly, if Rust adds a feature, it'll take time for gccrs to port that feature - either that's a dialect, or evolving Rust becomes a negotiation to get gccrs to adopt each feature - unless you really think gccrs will track the Rust compiler with the same set of features implemented in each version (ie tightly coupled release cycles). The intentions are irrelevant - that's going to be the outcome.

> A GCC-based Rust frontend would reduce the friction needed to adopt Rust in existing large projects. The Linux kernel is a great example: many of the Linux kernel devs don't want a hard dependency on LLVM, so they're not willing to accept Rust into their part of the tree until GCC can compile it.

How is that use case not addressed by rust_codegen_gcc? That seems like a much more useful effort for the broader community to focus on: it delivers the benefits of GCC without bifurcating the frontend.


Note that becoming an international standard (via ISO, ECMA, IETF, or whatever) isn't necessary or sufficient to avoid dialects.

If the Rust language specification is precise enough to avoid disagreements about intended behavior, then multiple compilers can be written against that spec and they can all be expected to correctly compile Rust source code to equivalent output. Even if no international standards body has signed off on it.

On the other hand, if the spec is incomplete or underspecified, then even an ANSI/ISO/IETF stamp of approval won't help bring different implementations into alignment. C/C++ has been an ISO standard for >30 years and it's still difficult to write non-trivial codebases that can compile without modification on MSVC, GCC, Clang, and ICC because the specified (= portable) part of the language is too small to use exclusively.

Or hell, look at JSON: it's tiny and has been standardized by the IETF, but good luck getting consistent parsing of numeric values.


Rust's inline assembly syntax is part of the language, and in principle the same Rust source would compile on any conforming compiler (rustc, gccrs).

C/C++ doesn't have a standard syntax for inline assembly. Clang and GCC have extensions for it, with compiler-specific behavior and syntax.


I mentioned this somewhere else, but I might as well mention it here too: there is no standard assembler that everyone uses. Each one may have a slightly different syntax, even for the same arch, and at least some C++ compilers allow you to customize the assembler used during compilation. Therefore, one would assume that inline assembly can't be uniform in general without picking a single assembler (even a single assembler version) for each arch.


You're talking about the syntax of the assembly code itself. In practice, small variations between assemblers aren't much of a problem for inline assembly in the way they would be for standalone .s sources, because inline assembly rarely has implementation-specific directives and macros and such. It's not like the MASM vs NASM split.

This thread is about the compiler-specific syntax used to indicate the boundary between C and assembly and the ABI of the assembly block (register ins/outs/clobbers). Take a look at the documentation for MSVC vs GCC:

https://learn.microsoft.com/en-us/cpp/assembler/inline/asm?v...

https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html

Rust specifies the inline assembly syntax at https://doc.rust-lang.org/reference/inline-assembly.html in great detail. It's not a rustc extension, it's part of the Rust language spec.
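
A small sketch of my own (not an excerpt from the Reference) to show the flavor: only the instruction template is target-specific text, while the operands, register classes, and options use Rust's backend-independent syntax.

  use core::arch::asm;

  #[cfg(target_arch = "x86_64")]
  fn add_one(x: u64) -> u64 {
      let result: u64;
      // SAFETY: the asm only reads `x` and writes `result`.
      unsafe {
          asm!(
              // Intel syntax is the default on x86; {src}/{dst} are substituted
              // with whatever registers the compiler allocates.
              "lea {dst}, [{src} + 1]",
              src = in(reg) x,
              dst = out(reg) result,
              options(pure, nomem, nostack),
          );
      }
      result
  }

A conforming non-LLVM implementation like gccrs has to accept the same operand syntax; only the template string gets handed to whatever assembler sits underneath.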


>This thread is about the compiler-specific syntax used to indicate the boundary between C and assembly and the ABI of the assembly block (register ins/outs/clobbers).

I see... Nevertheless, this is a really weird issue to get bent out of shape over. How many people are really writing so much inline assembly and also needing to support multiple compilers with incompatible syntax?


The biggest category of libraries that need inline assembly with compiler portability is compression/decompression codecs (like the linked article) -- think of images (PNG, JPEG), audio (MP3, Opus, FLAC), and video (MPEG4, H.264, AV1).

Also important is cryptography, where inline assembly provides more deterministic performance than compiler-generated instructions.

Compiler intrinsics can get you pretty far, but sometimes dropping down to assembly is the only solution. In those times, inline assembly can be more ergonomic than separate .s source files.


Exactly. It picks a single assembler:

> Currently, all supported targets follow the assembly code syntax used by LLVM’s internal assembler which usually corresponds to that of the GNU assembler (GAS)

Uniformity like that is a good thing when you need to ensure that your code compiles consistently in a supported manner forever. Swapping out assemblers isn’t helpful for inline assembly.


The quoted statement is weaker than what you're reading it as, I think. It's not a statement that emitted assembly code is guaranteed to conform to LLVM syntax; it's just noting that (1) at present, (2) for supported targets of the rustc implementation, the emitted assembly uses LLVM syntax.

Non-LLVM compilers like gccrs could support platforms that LLVM doesn't, which means the assembly syntax they emit would definitionally be non-LLVM. And even for platforms supported by both backends, gccrs might choose to emit GNU syntax.

Note also that using a non-builtin assembler is sometimes necessary for niche platforms, like if you've got a target CPU that is "MIPS plus custom SIMD instructions" or whatever.


I didn't follow the stabilization process very closely, but I believe you're wrong. What you're describing is what used to be asm! and is now llvm_asm!. The current stable asm! syntax actually parses its own assembly instead of passing it through to the backend unchanged. This was done explicitly to allow non-LLVM backends to work, and to let alternative front-ends be compatible. I saw multiple statements in this thread about alternative compilers or backends causing trouble here, and that's just not the case, given that the design was delayed for ages until those issues could be addressed.

Given that not all platforms supported by Rust currently have support for asm!, I believe your last paragraph does still apply.

https://rust-lang.github.io/rfcs/2873-inline-asm.html


This sentence from the Reference is important:

  > The exact assembly code syntax is target-specific and opaque to the compiler
  > except for the way operands are substituted into the template string to form
  > the code passed to the assembler.
You can verify that rustc doesn't validate the contents of asm!() by telling it to emit the raw LLVM IR:

  % cat bogus.rs
  #![no_std]
  pub unsafe fn bogus_fn() {
   core::arch::asm!(".bogus");
   core::arch::asm!("bogus");
  }
  % rustc --crate-type=lib -C panic=abort --emit=llvm-ir -o bogus.ll bogus.rs
  % cat bogus.ll
  [...]
  ; bogus::bogus_fn
  ; Function Attrs: nounwind
  define void @_ZN5bogus8bogus_fn17h0e38c0ae539c227fE() unnamed_addr #0 {
  start:
    call void asm sideeffect alignstack ".bogus", "~{cc},~{memory}"(), !srcloc !2
    call void asm sideeffect alignstack "bogus", "~{cc},~{memory}"(), !srcloc !3
    ret void
  }
That IR is going to get passed to llvm-as and possibly onward to an external assembler, which is where the actual validation of instruction mnemonics and assembler directives happens.

---

The difference between llvm_asm!() and asm!() is in the syntax of the stuff outside of the instructions/directives -- LLVM's "~{cc},~{memory}" is what llvm_asm!() accepts more-or-less directly, and asm!() generates from backend-independent syntax.

I have an example on my blog of calling Linux syscalls via inline assembly in C, LLVM IR, and Rust. Reading it might help clarify the boundary: https://john-millikin.com/unix-syscalls#inline-assembly
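
As a self-contained taste (a sketch in the same spirit as that post, not copied from it), here's write(2) on x86-64 Linux via the stable asm! macro -- the "syscall" text is opaque to the compiler, while the register assignments and clobbers are declared in Rust syntax:

  use core::arch::asm;

  #[cfg(all(target_arch = "x86_64", target_os = "linux"))]
  fn sys_write(fd: i32, buf: &[u8]) -> isize {
      let ret: isize;
      // SAFETY: write(2) only reads the bytes described by `buf`.
      unsafe {
          asm!(
              "syscall",
              inlateout("rax") 1isize => ret,   // 1 = __NR_write; result returns in rax
              in("rdi") fd,
              in("rsi") buf.as_ptr(),
              in("rdx") buf.len(),
              lateout("rcx") _,                 // the kernel clobbers rcx and r11
              lateout("r11") _,
              options(nostack),
          );
      }
      ret
  }

Calling sys_write(1, b"hi\n") writes to stdout on that target; on any other target the function simply isn't compiled.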


Assembly by definition is platform specific. The issue isn’t that it’s the same syntax on every platform but that it’s a single standardized syntax on each platform.


Chicory seems like it'll be pretty useful. Java doesn't have easy access to the platform-specific security mechanisms (seccomp, etc) that are used by native tools to sandbox their plugins, so it's nice to have WebAssembly's well-designed security model in a pure-JVM library.

I've used it to experiment with using WebAssembly to extend the Bazel build system (which is written in Java). Currently there are several Bazel rulesets that need platform-specific helper binaries for things like parsing lock files or Cargo configs, and that's exactly the kind of logic that could happily move into a WebAssembly blob.

https://github.com/jmillikin/upstream__bazel/commits/repo-ru...

https://github.com/bazelbuild/bazel/discussions/23487


I don't understand the logic and layers of abstraction here.

Chicory runs on the JVM. Bazel runs on the JVM. How will inserting a WebAssembly layer help eliminate platform-specific helper binaries? These binaries, compiled to WebAssembly, will effectively run on the JVM (through one additional layer of APIs provided by Chicory), right? Why can't you write these helpers directly in a JVM language - Java, Kotlin, Clojure, anything? Why do you need the additional Chicory layer?


You don't just easily rewrite everything. Being able to re-use existing code is the trick!


Exactly.

Why would you rewrite (parts of) Cargo from Rust to something that runs on the JVM, when you can use Wasm as basically an intermediate target to compile the Rust down to JVM bytecode?

Or how about running something like Shellcheck (written in Haskell) on the JVM as part of a build process?

You can see the same idea for the Go ecosystem (taking advantage of the Go build system) on the many repos of this org: https://github.com/wasilibs


This is great. The future is made of libraries packaged as WASM Components.


Aren't WASM Components pretty constrained? My (very fuzzy) understanding is that they must basically manage all of their own memory, and they can only interact by passing around integer handles corresponding to objects they manage internally.


Part of the component model is codegen to build object structures in each language, so that you can pass objects with an agreed-upon shape by reference.

Yes, they each have their own linear memory; that's one of the advantages of the component model. It provides isolation at the library level, and you don't have to implicitly agree that each library gets the level of access your application does. It provides security against supply-chain attacks.

Having said that, the component model isn't supported by all runtimes, and since its bindings and codegen are static at compile time, it's not useful for every situation. Think of it like a C FFI more than a web API receiving JSON, for example. Upgrading the library version would mean upgrading your bindings and rebuilding your app binary too; the two must move in lock-step.


Oh, these tools are written in languages which can be directly compiled to WebAssembly without any changes? Yes, then it makes sense, thank you for the clarification.


Yeah, pretty much all of them are written in either Go or Rust. The Go tools pull in the Go standard library's Go parser to do things like compute dependencies via package imports, and the Rust ones use the Cargo libraries to parse Cargo.toml files.

From the perspective of a Bazel ruleset maintainer, precompiled helper tools are much easier to provide if your language has easy cross-compilation. So maybe one day Zig will start to make an appearance too.


Java already has plenty of FFI variants for that.


Yes, but WASM gives you more, especially WASM Components. E.g., FFI doesn't offer sandboxing, and unloading symbols is tricky. The WIT (WebAssembly Interface Types) IDL (+ bindings codegen) makes objects' exports explicit, but more importantly, their imports too (i.e., dependencies).


Basically CORBA, DCOM, PDO, RMI, Jini and .NET Remoting for a new generation.


None of what 'jcmfernandes lists are part of WebAssembly. At best they can be considered related technologies, like the relationship between the JVM and JavaBeans.

And in terms of design, they're closer to COM or OLE. The modern replacement for CORBA/DCOM/etc is HTTP+JSON (or gRPC), which doesn't try to abstract away the network.


They are certainly not much different from WIT (WebAssembly Interface Types) IDL (+ bindings codegen).


I've had the misfortune of working professionally with CORBA, and I've spent some time trying to keep up with WIT/WASI/that whole situation. Whatever WIT is going to be, I can assure you it's very different from CORBA.

The best way I think to describe WIT is that it seems to be an attempt to design a new ABI, similar to the System V ABI but capable of representing the full set of typesystems found in every modern language. Then they want to use that ABI to define a bunch of POSIX-ish syscalls, and then have WebAssembly as the instruction set for their architecture-independent executable format.

The good news is that WIT/WASI/etc is an independent project from WebAssembly, so whether it succeeds or fails doesn't have much impact on the use of WebAssembly as a plugin mechanism.


Correct, they are a part of WASI. Indeed, different things, but well, tightly related. Made sense to talk about them given the chat on bridging gaps in bazel using WASM.


Yes, the concept is old. I may be wrong, but to me, this really seems like it, the one that will succeed. With that said, I'm sure many said the same about the technologies you enumerated... so let's see!


I really don't want to sound flamewar-y, but how is WebAssembly's security model well-designed compared to a pure Java implementation of a brainfuck interpreter? Similarly, Java bytecode is 100% safe if you just don't plug in filesystem/OS capabilities.

It's trivial to be secure when you are completely sealed off from everything. The "art of the deal" is making it safe while having many capabilities. If you add WASI to the picture it doesn't look all that safe, but I might just not be too knowledgeable about it.


It's really difficult to compare the JVM and wasm because they are such different beasts with such different use cases.

What wasm brings to the table is that the core tech focuses on one problem: abstract sandboxed computation. The main advantage it brings is that it _doesn't_ carry all the baggage of a full fledged runtime environment with lots of implicit plumbing that touches the system.

This makes it flexible and applicable to situations where java never could be - incorporating pluggable bits of logic into high-frequency glue code.

Wasm + some DB API is a pure stored procedure compute abstraction that's client-specifiable and safe.

Wasm + a simple file API that assumes a single underlying file + a stream API that assumes a single outgoing stream, that's a beautiful piece of plumbing for an S3 like service that lets you dynamically process files on the server before downloading the post-processed data.

There are a ton of use cases where "X + pluggable sandboxed compute" is power-multiplier for the underlying X.

I don't think the future of wasm is going to be in the use case where we plumb a very classical system API onto it (although that use case will exist). The real applicability and reach of wasm is the fact that entire software architectures can be built around the notion of mobile code where the signature (i.e. external API that it requires to run) of the mobile code can be allowed to vary on a use-case basis.


> What wasm brings to the table is that the core tech focuses on one problem: abstract sandboxed computation. The main advantage it brings is that it _doesn't_ carry all the baggage of a full fledged runtime environment with lots of implicit plumbing that touches the system.

Originally, but that's rapidly changing as people demand more performant host application interfacing. Sophisticated interfacing + GC + multithreading means WASM could (likely will) fall into the same trap as the JVM. For those too young to remember, Java Applet security failed not because the model was broken, but because the rich semantics and host interfacing opened the door to a parade of implementation bugs. "Memory safe" languages like Rust can't really help here, certainly not once you add JIT into the equation. There are ways to build JIT'd VMs that are amenable to correctness proofs, but it would require quite a lot of effort, and the most popular and performant VMs just aren't written with that architectural model in mind. The original premise behind WASM was to define VM semantics simple enough that that approach wouldn't be necessary to achieve correctness and security in practice; in particular, while leveraging existing JavaScript VM engines.


The thing is, sophisticated interfacing, GC, and multithreading - assuming they're developed and deployed in a particular way - only apply when you're targeting use cases that need those things. The core compute abstraction is still there and doesn't diminish in value.

I'm personally a bit skeptical of the approach to GC that's being taken in the official spec. It's very design-heavy and tries to bring in a structured heap model. When I was originally thinking of how GC would be approached on wasm, I imagined that it would be a few small hooks to allow the wasm runtime to track rooted pointers on the heap, and then some API to extract them when the VM decides to collect. The rest can be implemented "in userspace" as it were.

But that's the nice thing about wasm. The "roots-tracker" API can be built on plain wasm just fine. Or you can write your VM to use a shadow stack and handle everything internally.

The bigger issue isn't GC, but the ability to generate and inject wasm code that links into the existing program across efficient call paths - needed for efficient JIT compilation. That's harder to expose as a simple API because it involves introducing new control flow linkages to existing code.


The bespoke capability model in Java has always been so fiddly that it has made me question the concept of capability models. There was for a long time a constant stream of new privilege escalations, mostly caused by new functions being added that didn't necessarily break the model themselves, but returned objects that contained references to objects that contained references to data that the code shouldn't have been able to see. Nobody to my recollection ever made an obvious back door, but nonobvious ones were fairly common.

I don’t know where things are today because I don’t use Java anymore, but if you want to give some code access to a single file then you’re in good hands. If you want to keep them from exfiltrating data you might find yourself in an Eternal Vigilance situation, in which case you’ll have to keep on top of security fixes.

We did a whole RBAC system as a thin layer on top of JAAS. Once I figured out a better way to organize the config it wasn’t half bad. I still got too many questions about it, which is usually a sign of ergonomic problems that people aren’t knowledgeable enough to call you out on. But it was a shorter conversation with fewer frowns than the PoC my coworker left for me to productize.


WASI does open up some holes you should be considerate of. But it's still much safer than other implementations. We don't allow you direct access to the FS; we use jimfs: https://github.com/google/jimfs

I typically recommend people don't allow wasm plugins to talk to the filesystem though, unless they really need to read some things from disk like a python interpreter. You don't usually need to.


I wouldn't say 100% safe. I was able to abuse the JVM to use Spectre gadgets to find secret memory contents (aka private keys) on the JVM. It was tough, but let's not exaggerate about JVM safety.


You can have some fun with WebAssembly as well regarding Spectre.

> Unfortunately, Spectre attacks can bypass Wasm's isolation guarantees. Swivel hardens Wasm against this class of attacks by ensuring that potentially malicious code can neither use Spectre attacks to break out of the Wasm sandbox nor coerce victim code—another Wasm client or the embedding process—to leak secret data.

https://www.usenix.org/conference/usenixsecurity21/presentat...

People have to stop putting WebAssembly on some pedestal among bytecode formats.


WebAssembly doesn't have access to the high-resolution timers needed for Spectre attacks unless the host process intentionally grants that capability to the sandboxed code.

See this quote from the paper you linked:

""" Our attacks extend Google’s Safeside [24] suite and, like the Safeside POCs, rely on three low-level instructions: The rdtsc instruction to measure execution time, the clflush instruction to evict a particular cache line, and the mfence instruction to wait for pending memory operations to complete. While these instructions are not exposed to Wasm code by default, we expose these instructions to simplify our POCs. """

The security requirements of shared-core hosting that want to provide a full POSIX-style API are unrelated to the standard use of WebAssembly as an architecture-independent intermediate bytecode for application-specific plugins.

'gf000 correctly notes that WebAssembly's security properties are basically identical to any other interpreter, and there's many options for bytecodes (or scripting languages) that can do some sort of computation without any risk of a sandbox escape. WebAssembly is distinguished by being a good generic compilation target and being easy to write efficient interpreters/JITs for.


WebAssembly doesn't exist in isolation; it needs a host process to actually execute.

So whatever security considerations are to be taken from the bytecode semantics, they are useless in practice on their own, which keeps being forgotten by its advocates.

As they, and you point out, "WebAssembly's security properties are basically identical to any other interpreter,..."

The implementation makes all the difference.


The WebAssembly bytecode semantics are important to security because they make it possible to (1) be a compilation target for low-level languages, and (2) implement small secure interpreters (or JITs) that run fast enough to be useful. That's why WebAssembly is being so widely implemented.

Java was on a path to do what WebAssembly is doing now, back in the '90s. Every machine had a JRE installed, every browser could run Java applets. But Java is so slow (and its sandboxing design so poor) that the world gave up on Java being able to deliver "compile once run anywhere".

If you want to take a second crack at Sun's vision, then you can go write your own embedded JVM and try to convince people to write an LLVM backend for it. The rest of us gave up on that idea when applets were removed from browsers for being a security risk.


People talk all the time about Java, while forgetting that this kind of polyglot bytecode has existed since 1958; there are others that would be quite educational to learn about instead of always using Java as an example.


Ok, show me a bytecode from the 60s (or 90s!) to which I can compile Rust or Go and then execute with near-native performance with a VM embedded in a standard native binary.

The old bytecodes of the 20th century were designed to be a compilation target for a single language (or family of closely-related languages). The bytecode for Erlang is different from that of Smalltalk is different from that of Pascal, and that's before you start getting into the more esoteric cases like embedded Forth.

The closest historical equivalent to today's JVM/CLR/WebAssembly I can think of is IBM's hardware-independent instruction set, which I don't think could be embedded and definitely wasn't portable to microcomputer architectures.


The extent of how each bytecode was used doesn't invalidate their existence.

Any bytecode can be embedded, it is a matter of implementation.

> The Architecture Neutral Distribution Format (ANDF) in computing is a technology allowing common "shrink wrapped" binary application programs to be distributed for use on conformant Unix systems, translated to run on different underlying hardware platforms. ANDF was defined by the Open Software Foundation and was expected to be a "truly revolutionary technology that will significantly advance the cause of portability and open systems",[1] but it was never widely adopted.

https://en.wikipedia.org/wiki/Architecture_Neutral_Distribut...

> The ACK's notability stems from the fact that in the early 1980s it was one of the first portable compilation systems designed to support multiple source languages and target platforms

https://en.wikipedia.org/wiki/Amsterdam_Compiler_Kit

> More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET.

https://news.microsoft.com/2001/10/22/massive-industry-and-d...

Plenty more examples are available to anyone who cares to dig into what happened after the UNCOL idea came to be in 1958.

Naturally one can always advocate that since 60 years of history have not provided that very special feature XYZ, we should now celebrate WebAssembly as the be all end all of bytecode, as startups with VC money repurpose old ideas newly wrapped.


  > The extent of how each bytecode was used doesn't invalidate their existence.
It does, because uptake is the proof of suitability to purpose. There's no credit to just being first to think of an idea, only in being first to implement it well enough that everyone wants to use it.

  > Any bytecode can be embedded, it is a matter of implementation.
Empty sophistry is a poor substitute for thought. Are you going to post any evidence of your earlier claim, or just let it waft around like a fart in an elevator?

In particular, your reference to ANDF is absurd and makes me think you're having this discussion in bad faith. I remember ANDF, and TenDRA -- I lost a lot of hours fighting the TenDRA C compiler. Nobody with any familiarity with ANDF would put it in the same category as WebAssembly, or for that matter any other reasonable bytecode.

For anyone who's reading this thread, check out the patent (https://patents.google.com/patent/EP0464526A2/en) and you'll understand quickly that ANDF is closer to a blend of LLVM IR and Haskell's Cmm. It's designed to be used as part of a multi-stage compiler, where part of the compiler frontend runs on the developer system (emitting ANDF) and the rest of the frontend + the whole backend + the linker runs on the target system. No relationship to WebAssembly, JVM bytecode, or any other form of bytecode designed to be executed as-is with predictable platform-independent semantics.

  > More than 20 programming tools vendors offer some 26 programming languages
  > — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET.
I want to see you explain why you think the CLR pre-dates the JVM. Or explain why you think C++/CLI is the same as compiling actual standard C/C++ to WebAssembly.

  > Naturally one can always advocate that since 60 years of history have not 
  > provided that very special feature XYZ, we should now celebrate WebAssembly
  > as the be all end all of bytecode, as startups with VC money repurpose old
  > ideas newly wrapped.
Yes, it is in fact normal to celebrate when advances in compiler implementation, security research and hardware performance enable a new technology that solves many problems without any of the downsides that affected previous attempts in the same topic.

If you reflexively dislike any technology that is adopted by startups, and then start confabulating nonsense to justify your position despite all evidence, then the technology isn't the problem.


> It does, because uptake is the proof of suitability to purpose. There's no credit to just being first to think of an idea, only in being first to implement it well enough that everyone wants to use it.

Depends on how the sales pitch of those selling the new stack goes.

> Empty sophistry is a poor substitute for thought. Are you going to post any evidence of your earlier claim, or just let it waft around like a fart in an elevator?

Creative writing, some USENET flavour, loving it.

> In particular, your reference to ANDF is absurd and makes me think you're having this discussion in bad faith. I remember ANDF, and TenDRA -- I lost a lot of hours fighting the TenDRA C compiler. Nobody with any familiarity with ANDF would put it in the same category as WebAssembly, or for that matter any other reasonable bytecode.

It is a matter of prior art, not what they achieved in practice.

> I want to see you explain why you think the CLR pre-dates the JVM. Or explain why you think C++/CLI is the same as compiling actual standard C/C++ to WebAssembly.

I never wrote that the CLR predates the JVM; where is that? Can you please point it out to us?

C++/CLI is as much standard C and C++ as using Emscripten's clang extensions for WebAssembly integration with JavaScript.

But I tend to forget that in the eyes of FOSS folks, clang and GCC language extensions are considered regular C and C++, as if defined by ISO themselves.

> Yes, it is in fact normal to celebrate when advances in compiler implementation, security research and hardware performance enable a new technology that solves many problems without any of the downsides that affected previous attempts in the same topic.

Naturally, when folks are honest about the actual capabilities and the past they build upon.

I love WebAssembly Kubernetes clusters reinventing application servers, by the way, what a cool idea!


I love that I can get this kind of depth of conversation on HN.


Pssst, it is the usual WebAssembly sales pitch.

Linear memory accesses aren't bounds-checked inside the linear memory segment, thus data can still be corrupted even if it doesn't leave the sandbox.

Also, just like many other bytecode-based formats, it is only as safe as its implementations, which can be attacked just the same.

https://webassembly.org/docs/security/

https://www.usenix.org/conference/usenixsecurity20/presentat...

https://www.usenix.org/conference/usenixsecurity21/presentat...

https://www.usenix.org/conference/usenixsecurity22/presentat...


WebAssembly being described as a sandbox is perfectly valid. Applications with embedded sandboxes for plugins use the sandbox to protect the application from the plugin, not to protect the plugin from itself. The plugin author can protect the plugin from itself by using a memory-safe language that compiles to WebAssembly; that's on them and not on the embedding application.


Except for the tiny detail that the whole application is responsible for everything it does, including the behaviour of the plugins it decides to use. If a plugin can be made to produce faulty outputs, that will influence the expected behaviour of the host logic building on those outputs, and someone will be very happy to write a blog post with a funny name.


Looking forward to seeing more Chicory in Bazel, it's a great use-case! Thanks for spearheading it!


> Java doesn't have easy access to the platform-specific security mechanisms (seccomp, etc) that are used by native tools to sandbox their plugins, so it's nice to have WebAssembly's well-designed security model in a pure-JVM library.

I thought Java had all of this sandboxing stuff baked in? Wasn't that a big selling point for the JVM once upon a time? Every other WASM thread has someone talking about how WASM is unnecessary because JVM exists, so the idea that JVM actually needs WASM to do sandboxing seems pretty surprising!


The JVM was designed with the intention of being a secure sandbox, and a lot of its early adoption was as Java applets that ran untrusted code in a browser context. It was a serious attempt by smart people to achieve a goal very similar to that of WebAssembly.

Unfortunately Java was designed in the 1990s, when there was much less knowledge about software security -- especially sandboxing of untrusted code. So even though the goal was the same, Java's design had some flaws that made it difficult to write a secure JVM.

The biggest flaw (IMO) was that the sandbox layer was internal to the VM: in modern thought the VM is the security boundary, but the JVM allows trusted and untrusted code to execute in the same VM, with java.lang.SecurityManager[0] and friends as the security mechanism. So the attack surface isn't the bytecode interpreter or JIT, it's the entire Java standard library plus every third-party module that's linked in or loaded.

During the 2000s and 2010s there were a lot of Java sandbox escape CVEs. A representative example is <https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-0422>. Basically the Java security model was broken, but fixing it would break backwards compatibility in a major way.

--

Around the same time (early-mid 2010s) there was more thought being put into sandboxing native code, and the general consensus was:

- Sandboxing code within the same process space requires an extremely restricted API. The original seccomp only allowed read(), write(), exit(), and sigreturn() -- it could be used for distributed computation, but compiling existing libraries into a seccomp-compatible dylib was basically impossible. (A minimal sketch of that strict mode follows this list.)

- The newly-developed virtualization instructions in modern hardware made it practical to run a virtual x86 machine for each untrusted process. The security properties of VMs are great, but the x86 instruction set has some properties that make it difficult to verify and JIT-compile, so actually sitting down and writing a secure VM was still a major work of engineering (see: QEMU, VMWare, VirtualBox, and Firecracker).
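
To make the first point above concrete, here's a minimal sketch (assuming the libc crate's Linux bindings) of entering that original strict mode. After the call, any syscall other than read(), write(), _exit(), and sigreturn() gets the process killed:

  fn enter_seccomp_strict() -> std::io::Result<()> {
      // SAFETY: prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) takes no pointers;
      // it only flips a per-thread kernel flag.
      let rc = unsafe {
          libc::prctl(
              libc::PR_SET_SECCOMP,
              libc::SECCOMP_MODE_STRICT as libc::c_ulong,
              0 as libc::c_ulong,
              0 as libc::c_ulong,
              0 as libc::c_ulong,
          )
      };
      if rc != 0 {
          return Err(std::io::Error::last_os_error());
      }
      Ok(())
  }

With only those four syscalls available you can't even open a file, which is why porting existing libraries to it was a non-starter.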

Smartphones were the first widespread adoption of non-x86 architectures among consumers since PowerPC, and every smartphone had a modern web browser built in. There was increasing desire to have something better than JavaScript for writing complex web applications executing in a power-constrained device. Java would have been the obvious choice (this was pre-Oracle), except for the sandbox escape problem.

WebAssembly combines architecture-independent bytecode (like JVM) with the security model of VMs (flat memory space, all code in VM untrusted). So you can take a whole blob of legacy C code, compile it to WebAssembly, and run it in a VM that runs with reasonable performance on any architecture (x86, ARM, RISC-V, MIPS, ...).


What a thorough, excellent response. I learned a lot, thank you for taking the time to write this up!


> the rationalist community grew around publications by Eliezer Yudkowsky

Rationalism (in its current form) has been around since long before someone on the internet became famous for their epic-length Harry Potter fanfiction, and it will continue to exist long after LessWrong has become a domain parking page.


Sure, but currently we are discussing (inaccurate portrayals of) the community that grew starting in 2006 around Eliezer's writings. I regret that there is no better name for this community. (The community has tried to acquire a more descriptive name, but none have stuck.)


The LessWrong affiliated "Rationalists" are to lower-case rationalism as the "People's Democratic Republic of Korea" are to democracy.


Please refrain from belittling the PDRoK with such comparisons, they do produce some extraordinary accordion players after all.


I love the Moranbong Band.


The trouble with PG&E is that it's trying to serve two incompatible goals.

The shareholders want it to provide electric service for a profit in the locales where doing so is economically sensible (= urban/suburban), slowly grow its value, and throw off a stable stream of dividends. This is the basic value proposition of all for-profit utilities: low growth, low volatility, stable income.

The state government -- and a not insubstantial proportion of the state population -- want PG&E to be a non-profit that provides electricity at cost to everyone in its coverage area, which is to include huge swaths of forest-covered hillsides and dry rural scrubland. Every time it gets mentioned on HN (not exactly a hotbed of communism!) there's a bunch of comments about how it should be illegal for an electric utility to have any profit at all.

PG&E can't have it both ways. It hasn't paid a non-trivial dividend since 2017 and its share price is ~half of what it was 20 years ago, which makes it an astonishingly poor investment -- compare to Southern Company (SO) or Duke Energy (DUK). But at the same time it is legally mandated to absorb the costs of operating high-voltage lines in brushfire territory, and half its customers think it shouldn't be allowed to exist.


In Japan utilities are separated into pseudo-governmental entities that operate the infrastructure as a monopoly, and for-profit companies that deal directly with customers. This seems to work relatively well in that the customer-facing companies can distinguish themselves in the market (e.g. offering higher prices but better customer service), and the infrastructure is immune to profit incentives but subjected to high levels of regulation.

The closest equivalent I experienced in the states is when I lived in the SF bay area and got internet access as a customer of Sonic, who provided DSL services over copper lines owned by AT&T.


Many professions have legally protected titles. In the USA it's illegal for someone without a CPA license to claim the title of "certified public accountant", and since engineering failures can have fatal consequences the same protection is offered to the title "engineer" (or "professional engineer", etc) in some countries.

You might argue that it's the certification that matters, not the title, but the title of "structural engineer" has been around for approximately as long as humans have been stacking up rocks to sleep under; the certifications have not.


There's a difference between software developers who glue together libraries to build cat picture voting sites and software developers who write real-time avionics firmware. The second type of developer can reasonably claim to be "software engineers" -- blueprints for a garden shed and for a skyscraper are distinguished by content, not medium.

Between those two limits the line must be drawn somewhere, and honest people will disagree about exactly where, but it seems reasonable to claim that people working in most software development positions of interest to Hacker News are closer to the latter than the former.

If you want to claim that junior developers are apprentices (journeymen? are apprentices interns?) and senior/staff/principal/whatever are engineers then that's fine (if idiosyncratic), but that's not what other people mean when they say or hear "software developers aren't engineers".


To be clearer: when you say the latter, do you mean the latter of your blueprints example or your avionics one? I'd guess that since most developers/engineers are working in things like web apps, fintech, applied AI/crypto, etc. these days that most of the folks contributing to and consuming HN are closer to the "not engineers" in your specific example, but that is indeed just an educated guess.


I meant avionics/skyscrapers -- edited to disambiguate.

The idea that most software developers are working in low-skill positions (i.e. distributing the output of an LLM over their Jira queue) is probably not as wrong as I wish it was, but those kind of developers aren't interested in software development as a craft to begin with, so I think (hope?) they aren't anywhere close to comprising most of the HN readership.

Also those positions don't pay as well; HN cares more about the $600k/yr jobs than the $60k/yr ones, and nobody pays $600k/yr for someone to copy-paste NPM invocations from Stack Overflow.


SREs are programmers who specialize in writing programs that manage complex distributed systems.

If you hire SREs and have them doing sysadmin work, then (1) you're massively over-paying and (2) they'll get bored and leave once they find a role that makes better use of their skills.

If you hire sysadmins for SRE work, they'll get lost the first time they need to write a kernel module or design a multi-continent data replication strategy.


I stand (ish) corrected. It still feels much the same as being a Senior Sysadmin, though. I do both, and I wouldn't call myself an SRE.


> If you hire sysadmins for SRE work, they'll get lost the first time they need to write a kernel module or design a multi-continent data replication strategy.

Ah yes, the old (incorrect) mantra of "sysadmins couldn't code". Which is ironic, as the vast majority of the abstractions that you'll interface with are written by sysadmins.


IDK, writing things like kernel modules to improve the reliability of a complex system doesn't really sound like a task sysadmins get paid for.

Yes, a lot of coding (mostly in scripting languages) is normal, mostly to automate tasks and improve visibility into the system, to make data digestible for tools like Grafana, but other optimizations seem to be out of bounds.

But I most likely do lack the insights you have.


I've written kernel code to do various anti-DDoS stuff, however it's the exception for sure.

Debugging complex systems is more in the wheelhouse of sysadmins. When I came up it was a requirement for sysadmins to be proficient in C, a commandline debugger (usually gdb), the unix/linux syscall interface (understanding everything that comes out of strace for example) and perl.

Usually those Perl scripts ended up becoming an orchestration/automation platform of some kind - Ruby replaced Perl at some point. I guess it's Python and Go now?

The modern “kernel module” requirement is more likely to be a kubernetes operator or terraform module, and the modern day sysadmin definitely writes those (the rest of the role is essentially identical, just tools got better)

