I was really excited about the idea of a modern statically typed language with green threads à la Erlang / BEAM. I lost interest when Rust moved away from that direction and became focused on zero-cost abstractions instead.
I’m not OP, but IMO Go cannot be called a “modern language”. The Go ideology seems to be that such basic “modern” ideas as sum types are just pointless intellectual games, akin to Haskell type-astronaut behavior, or that they’re too advanced for most programmers to understand.
Not parent, but I think there is certainly space for a TypeScript-esque language for Go. If the parent commenter was looking for a static type system, the implication is they would probably want a type system inspired by functional languages. Go’s runtime is not the BEAM, but it is usable for many of the tasks Erlang is pitched for.
I can readily see a Haskell-inspired System F derivative that compiles down to valid Go, or a more flexible, special-cased type theory that encompasses all of Go the way TypeScript encompasses JavaScript. Build the ‘transpiler’ (I hate that term) to Go in Go itself and you have a self-contained language with more advanced type features and Go’s green-thread runtime.
I applaud the work that’s been done on Dingo (I also really like the name and inspiration, i.e. Dingo is a language that broke free from Google’s control). However, I don’t think Dingo is TypeScript for Go, because it is too limited in scope.
Dingo adds sum types with pattern matching to eliminate them, a ‘?’ syntax for propagating Optional types, and exhaustiveness checking for those match statements. There is no broader type-system expansion or syntax alteration that would make the TypeScript comparison more appropriate.
I think Dingo probably addresses a lot of the common complaints with Go, but it does not depart nearly as far from the Go baseline as I would expect from a language positioned between Go and Rust.
As a person who actively uses Elixir, Rust and Go (in that priority order), I feel there is a maddening gap between Rust and Go.
While Go has the goroutines and they have made concurrent and parallel programming accessible to many, they are a rather raw material that's easy to abuse and use wrongly (leaks, deadlocks, and don't even get me started on writing to a closed channel -- instant panic and program abort).
Rust async is also too raw, but in another way: it gets wordy / verbose and too implementation-oriented way too quickly -- I never wanted to care about Pin<Box<???>> or what this or that Future must implement in order to be eligible for usage in `select` and whatnot. Those things should have been hidden behind a DSL, much like `select` and `async` themselves are DSLs / macros / state machines of sorts. Rust's async just seems... I don't know... unfinished. Zero disrespect is intended (and I do recognize you by your username); I love Rust and I never pass up an opportunity to whip up an internal CLI tool with it, and most of my consulting engagements have resulted in teams highly appreciating such tools, but I can't shake the feeling that Rust's async could be more... polished.
Both Rust's and Go's async mechanics leave me with the impression that somebody thought that they gave the devs enough LEGO building blocks and called it a day. I don't judge. I understand. But as a user of the languages I also have the right to find their async facilities less than ideal.
Erlang / Elixir / Gleam / LFE, on the other hand, stand on a runtime that has exhaustively analyzed a ton of async scenarios and covers them very well (minus stuff like filesystem operations being centralized through a singleton runtime agent, something you can thankfully opt out of).
So I'd say both Rust and Go are ripe for either disruption or courageous backwards-incompatible modifications. Their async implementations simply did not go far enough.
And don't get me wrong, I love both and do my best to use them regularly. But it's a sad reality that their "unfinishedness" led to a plethora of footguns, and knowing most or all of them -- as in, knowing how to avoid them -- became a badge of honor. I don't know why many see that as a desired state of affairs. Job security, I suppose.
I would kill for something as terse and explicit as Go, as super strict as Rust, with 100% finished async semantics, and with a compiler faster than Rust's that still optimizes as aggressively. Yeah, I know: what color do I like my dragon?
All that being said, Rust is the closest we have to a perfect language currently. And that's kind of sad, because it has a few things it really needs to fix if it wants to keep gaining mind-share. I personally find myself reluctant to build a career on it. But I'll not digress further.
@T had a number of issues. The first was just that it was weird. People tend to not like weird things. Rust developed a reputation for "that language with a bunch of pointer types that are all weird."
The real reason it was removed in the end was just that it elevated a library concept into syntax. Today's Arc<T>/Rc<T> split isn't really possible in an @T world, for example. Shared ownership is a good concept, but you don't need special syntax to indicate it.
> The real reason it was removed in the end was just that it elevated a library concept into syntax.
Rust still does this in all sorts of silly ways, such as the ! type. What's the point of wasting an entire symbol that could have plenty of alternate uses on something that's so rarely used and could easily be defined as either a library type (empty enum) or at least be given a custom keyword, such as `never`? (Introduce it over an edition boundary, if you must preserve backwards compatibility.) The fact that it involves some compiler magic is no excuse; that's why Rust uses "langitem" markers within its core library.
The standing joke for the last few years is that "the never type is named after its date of stabilization."
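To illustrate the library-type alternative: a minimal sketch (the `Never` name here is mine, not a proposal) of how an empty enum expresses "this function cannot return normally", which is most of what `!` conveys in return position.

```rust
// An uninhabited type: no variants, so no value of it can ever be constructed.
enum Never {}

// Returning `Never` signals "does not return", much the way `-> !` does:
// the only way to satisfy the signature is to diverge.
fn spin() -> Never {
    loop {} // a `loop` without `break` has type `!`, which coerces to Never
}

fn main() {
    // We never actually call spin(); just show the signature type-checks.
    let _f: fn() -> Never = spin;
}
```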
I certainly don't disagree that Rust has flaws, for sure. I think this particular one is pretty far down the list, though. I'm not sure what else I'd want to use ! for, and by virtue of it not being used so often, it's much less of a pain than @T would have been. Though I would also argue that 2012 Rust used ~T and @T far more than contemporary Rust does (I still remember pcwalton's mailing list post about how you didn't have to use ~ for comparing strings!) and so it was even more painful at the time than it would be now.
If ! was just used as never it could still be used as an operator, because those are different contexts AFAICT. However, its use in macro invocations seems likely to be more difficult to differentiate from operators.
People just found the trio of &T, @T, and ~T to be strange and confusing. Lots of Perl comparisons, many people really dislike punctuation of any kind, it seems.
Most languages only have one kind of pointer, and they tend to use & and * as operators for them.
Sure, but people also find pointers and references confusing (& certainly their distinction). Literally all programming is considered weird if you talk to the right person.
I would argue, as a rule of thumb, that anyone who focuses on syntax over semantics has little to contribute until they've written ten thousand lines in the language. Perl is a great example of how a language can still fail after that test passes. Rust feels a lot more like Java and C++ now, and not in a good way. It could have done more to improve on basic readability than where we ended up, and people still bitch about basic tenets of the language like "lifetimes" and "not being enough like Java".
You can stand on principle, or you can recognize that semantics is important, and syntax isn’t really, and therefore, accepting feedback about syntax is a fine thing to compromise on.
I also agree that you can’t listen to everyone, but this feedback was loud and pervasive.
Of course syntax is important. Otherwise people wouldn't complain about Perl or C (e.g. regarding the lack of operator overloading). It is just important in balance with semantics. And while I understand why Rust compromised on this, IMHO it was a mistake that causes confusion about Rust's memory management strategy. It looks too much like Java and not enough like a language built around specific memory-management paradigms. This compromise has backfired.
Implementation-wise they're the same trick as C++, monomorphization.
Stylistically they're not very like either; however, the effect is more like C++, because Rust idiomatically prefers to constrain functions, not types. For example, it's fine to talk about HashMap<f32, f32>, a hypothetical hash table of floating point numbers mapped to other floating point numbers -- even though we can't use such a type, because if we try to insert into it we'll be told that insert requires its key parameter to implement Eq and Hash, which f32 doesn't because of NaN.
In both C++ and Java, as I understand it, these constraints live on the type, not the functions associated with that type -- although C++ does not have the same constraint here and is perfectly willing to try to make a hash table of floats. But where a constraint lives on a function in C++, it would behave similarly to Rust due to SFINAE: the function won't match, so your diagnostics say there's no such function -- probably worse diagnostics than Rust's, but that's par for the course in C++.
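To make the "constraints live on the functions" point concrete, here is a minimal sketch (mine): the bound only bites when you call the constrained method.

```rust
use std::collections::HashMap;

fn main() {
    // Naming the type is fine: HashMap<K, V> itself carries no Eq/Hash bound.
    let _m: HashMap<f32, f32> = HashMap::new();

    // Calling insert is what fails, because insert requires K: Eq + Hash,
    // and f32 implements neither (NaN breaks the equivalence laws):
    // _m.insert(1.0, 2.0); // rejected: `f32: Eq` / `f32: Hash` not satisfied
}
```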
Oh I see. Yeah, perhaps slightly for the syntax. I suppose Java's idiom of naming classes (its only user defined types) with capital letters stands out as more similar and so in Java our growable array of geese is ArrayList<Goose>, in Rust it's Vec<Goose> but in C++ std::vector<Goose> or perhaps std::vector<goose> if we're copying the style of the standard library.
It doesn't feel like very much, but now that you spell it out I guess I do see it.
> C++ templates look like an entirely new language within the language.
Templates cover more than just generics. Java wholesale lifted its generics syntax from sepples. I'm curious how you draw the line between the two, when all 3 of them use angle-bracket declaration lists bolted onto the symbol name, a unique style invented by CFront 3. Compare with the generics syntax found in contemporaries like ML, Miranda, Ada, Modula-3 and so on.
Other than the `|var|` syntax in closures, I can't think of a single way Rust looks like Ruby. I mean that seriously; there are almost no other similarities.
There's definitely some Ruby influence (closure syntax in particular), but I think I'd argue that Rust syntax is closer to JavaScript/TypeScript than anything else.
> rest of the post is me trying to make sense of the tutorial on borrowing. It has fried my brain and negatively affected my skills in modern Rust, so be wary
I think that tutorial discouraged me from really getting into Rust
I’m no expert in Rust, but have done a couple of very minimal weekend projects. In the time I’ve read up on Rust, I’ve always looked at the borrow mechanism like an extreme/overactive implementation of RAII from C++, that is triggered with every context change.
Would be interested to hear where this analogy breaks down from someone more experienced than me.
Borrowing and RAII are basically separate features, though they do interact.
RAII in Rust is like C++, but simpler: there are no constructors, only destructors. The Drop trait gets called like in C++, except that in Rust, moves are the default, and destructors don't get called on moved-from objects, that is, your destructor only runs once.
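A minimal sketch of that, with a made-up Guard type: the destructor follows the value through a move and runs exactly once, for the final owner.

```rust
struct Guard(&'static str);

impl Drop for Guard {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let a = Guard("a");
    let _b = a; // move: `a` is no longer live and gets no destructor call
    println!("end of main");
} // prints "end of main", then "dropping a" exactly once, via `_b`
```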
References are, at runtime, the same as a pointer in C++, except they cannot be null. We also say that they "borrow" what they refer to, which means that the compiler keeps track of the lifetime of the referent. This ensures that the referent always outlives its reference, so that its reference is always valid. This is a compile-time analysis on the control-flow graph of your program.
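Sketched out (my example, not from the comment above): the compiler tracks the borrow and rejects any program where the reference could outlive its referent.

```rust
fn main() {
    let x = 5;
    let r = &x;      // `r` borrows `x`; the compiler tracks this relationship
    println!("{r}"); // fine: `x` is alive at every use of `r`
    // Invert the nesting -- let `x` go out of scope while `r` is still
    // used afterwards -- and compilation fails with
    // "`x` does not live long enough".
}
```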
I see now why my first attempt at Rust around this time failed miserably. I had forgotten about a lot of this, and remember trying to make liblua bindings at the time for something I was working on. I wrote Rust off as too cryptic to get anything done with.
I'm really, really glad I've since picked it back up post-1.0.
There was an early rule that Rust keywords should be five characters or less. I would guess it's because of this rule. I believe loop turned into cont to satisfy this rule, and then eventually we relaxed the rule, and it became continue.
Probably for the same reason that most new languages these days cannot bring themselves to just use "function" and instead have "fn", "fun", "func", etc. It's a headlong pursuit of conciseness for the sake of conciseness.
IIRC it was just a personal preference of Graydon's back then. I'm also not sure it was intended to live forever, just something to try and nudge things in a particular direction.
Terseness doesn't inherently mean less readable.
I do think that that rule is probably not one that would be good permanently.
I think this was a year or two before I got to rust - some of these things still existed then (bare traits, no NLL, the ecosystem was only halfway onto cargo), while others (the old pointer type syntax) had already gone.
I was really hoping that there'd be movement on a comment withoutboats made in https://without.boats/blog/why-async-rust/ about bringing a pollster-like API into the standard library.
Rust has very good reasons for not wanting to bless an executor by bringing it into the standard library. But most of those would be moot if pollster was brought in. It wouldn't stifle experimentation and refinement of other approaches because it's so limited in scope and useless to all but the simplest of use cases.
But it does in practice solve what many mislabel as the function coloring problem. Powerful rust libraries tend to be async because that's maximally useful. Many provide an alternate synchronous interface but they all do it differently and it forces selection of an executor even if the library wouldn't otherwise force such a selection. (Although to be clear such libraries do often depend on I/O in a manner that also forces a specific executor selection).
Pollster or similar in the standard library would allow external crates to be async with essentially no impact on synchronous users.
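For reference, this is roughly all pollster does today as a crate (the in-std version is hypothetical): drive one future to completion on the current thread, with no reactor or thread pool.

```rust
// Minimal sketch using the real pollster crate; a std block_on would
// presumably look much the same from the caller's side.
async fn compute() -> u32 {
    // stands in for a compute-bound future with no I/O-reactor dependency
    21 * 2
}

fn main() {
    // Synchronous caller, async library function, no executor selection forced.
    let answer = pollster::block_on(compute());
    println!("{answer}");
}
```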
Not quite yet. Crates like reqwest and hyper tend to use tokio's io types internally to set up the sockets correctly and send/receive data at the right time. Those might have different APIs than the thread-pausing sync APIs.
Sans-IO crates exist but are kind of annoying to schedule correctly on an IO runtime of choice. Maybe lending iterators could help idk
I feel async is in a very good place now (apart from async trait :[ )
As a regular user who isn't developing libraries, async is super simple to use. Your function is async = it must be .await'ed and must run in an async runtime. Probably as simple and straightforward as possible. There are no super annoying anti-patterns to deal with.
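That rule in miniature (my sketch, using tokio as the runtime since it is the usual choice):

```rust
async fn fetch_greeting() -> String {
    // stand-in for real async work
    "hello".to_string()
}

#[tokio::main] // provides the async runtime the await below needs
async fn main() {
    let greeting = fetch_greeting().await; // async fn => you .await it
    println!("{greeting}");
}
```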
The ecosystem being tokio centric is a little strange though
I love Rust and async Rust, but it's not true that there aren't annoying things to deal with. Anyone who's written async Rust enough has run into cancel-safety issues, the lack of async Drop and the interaction of async and traits. It's still very good, but there are some issues that don't feel very rust-y.
There are several ~~problems~~ subtleties that hinder the use of Rust async, IMHO.
- BoxFuture. It's used almost everywhere. It means there is no chance of the heap allocation being elided.
- Verbosity. Look at this BoxFuture definition: `type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;`. It's awful. I do understand what Pin is, what the Future trait is, what Send is, lifetimes, and dynamic dispatch. I *have to* know all of these non-obvious things just to operate with coroutines in my (possibly single-threaded!) program =( (see the sketch after this list)
- No async Drop, and no async trait in stdlib (the latter fixed not so long ago)
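The sketch referenced above, a minimal example of my own: the alias exists precisely because dyn-friendly async APIs are forced into the full incantation.

```rust
use std::future::Future;
use std::pin::Pin;

// The alias from the list above; without it, every signature below would
// have to spell out Pin<Box<dyn Future<Output = T> + Send + 'a>> in full.
type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

// Returning a boxed, pinned, Send future is what dynamic dispatch demands,
// even in a program that is single-threaded and never needed Send.
fn measure(data: &str) -> BoxFuture<'_, usize> {
    Box::pin(async move { data.len() })
}

fn main() {
    // Using pollster here just to drive the future synchronously.
    let len = pollster::block_on(measure("hello"));
    println!("{len}");
}
```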
I am *not* a hater of Rust's async system. It's a little simpler and less tunable than C++'s, and more complex than Go's. I just cannot say Rust's async approach is a good enough trade-off, while a plethora of the other decisions made in the design of the language come as close to a silver bullet as anything does.
Because async and sync programming are two fundamentally different registers. There are things you can do in one that you can’t with the other, or which have dramatically different tradeoffs.
As an example: Call N functions to see which one finishes first. With async this is trivial and cheap, without it it’s extremely expensive and error-prone.
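A sketch of that example on the async side (the function names and delays are made up), using tokio's select!, which polls both futures and takes whichever completes first:

```rust
use std::time::Duration;

async fn slow() -> u32 {
    tokio::time::sleep(Duration::from_millis(50)).await;
    1
}

async fn fast() -> u32 {
    tokio::time::sleep(Duration::from_millis(10)).await;
    2
}

#[tokio::main]
async fn main() {
    // Race the two futures; the loser is simply dropped (cancelled).
    tokio::select! {
        x = slow() => println!("slow finished first: {x}"),
        y = fast() => println!("fast finished first: {y}"),
    }
}
```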
The actor model proves that that isn't really as fundamentally a difference as you make it out to be. Write synchronously, execute asynchronously, that's the best of both worlds. To have the asynchronous implementation details exhibit themselves at the language level is just a terribly leaky abstraction. And I feel that if it wasn't a fashionable thing or an attempt to be more like JavaScript that it would have never been implemented in the way it was in the first place.
Async makes everything so much harder to reason about and introduces so many warts in the languages that use it that I probably think it should be considered an anti-pattern. And I was writing asynchronous code in C in the 90's so it's not like I haven't done it but it is just plain ugly, no matter what syntactic sugar you add to make the pill easier to swallow.
Or that you can't do it in a systems programming language whose main intent is to replace 'C'?
I don't want to start off with a strawman but in the interest of efficiency:
Because C is far from the only systems programming language and I don't see any pre-requisites in the actor model itself that would stop you from using that in a systems programming language at all. On the contrary, I think it is eminently suitable for systems programming tasks. Message passing is just another core construct and once you have that you can build on top of it without restrictions in terms of what you might be able to achieve.
Even Erlang - not your typical first choice for low level work - is used for bare metal systems programming ('GRiSP').
Maybe this should start with a definition of what you consider to be a systems programming language? Something that can work entirely without a runtime?
Sure, you can absolutely build systems with the actor model, even some embedded or bare metal cases. But Erlang isn't written in Erlang. I'm talking about the languages that you implement Erlang in.
Yes, I think "entirely without a runtime" in the colloquial sense is what I mean. Or "replace C" if you want.
Ok, interesting because if 'should be written in itself' is a must then lots of languages that I would not consider systems languages would qualify. And I can see Erlang 'native' and with hardware access primitives definitely as a possibility.
'replace C' is a much narrower brief and effectively forces you to accept a lot of the warts that C exposes to the world. This results in friction between what you wanted to do and end up doing as well as being stuck with some decisions made in the 1970's. It revisits a subset of those decisions whilst keeping the remainder. And Rust's ambitions now seem to have grown beyond 'replace C', it is trying very hard to be everything to everybody and includes a package manager and language features that a systems language does not need. In that sense it is becoming more like C++ than like C. C is small. Rust is now large.
Async/Await is a mental model that makes code (much) harder to reason about than synchronous code, in spite of all of the claims to the contrary (and I'm not even sure if all of the people making those claims really believe them, it may be hard to admit that reasoning about code you wrote yourself can be difficult). It obfuscates the thread of execution as well as the state and that's an important support to hold on to while attempting to understand what a chunk of code does. It effectively turns all of your code into a soft equivalent of interrupt driven code, and that is probably the most difficult kind of code you could try to write.
The actor model recognizes this fact and creates an abstraction that - for once - is not leaky, the code is extremely easy to reason about whilst under the hood the complexity of the implementation is hidden from the application programmer. This means that relative novices (which probably describes the bulk of all programmers alive today) can safely and predictably implement complex systems with multiple moving parts because it does not require them to have a mental model akin to a scheduler with multiple processes in flight all of which are at different stages of their execution. Reasoning about the state of a program suddenly becomes a global exercise rather than a local one and locality of state is an important tool if you want to write code that is predictable, the smaller the scope the better you will understand what you are doing.
It is funny because this would suggest that the likes of Erlang and other languages that implement the actor model are beginners languages because most experienced programmers would balk at the barrier to entry. But that barrier is mostly about a lot of the superstructure built on top of Erlang, and probably about the fact that Erlang has its roots in Prolog which was already an odd duck.
But you've made me wonder: could you write Erlang in Erlang entirely, without a runtime other than a language bootstrap (which even C needs), and if not, to what degree would you have to extend Erlang to be able to do so? And I think here you mean 'the parts of the Erlang virtual machine that are not written in Erlang', because Erlang the language is written in Erlang, as is the vast bulk of the runtime.
The fact that the BEAM is written in another language is because it is effectively a HAL, an idealized (or not so idealized, see https://www.erlang.org/blog/beam-compiler-history/) machine to run Erlang on, not because you could not write the BEAM itself entirely in Erlang. That's mostly an optimization issue, which to me is, in principle, in evaluations like this, a matter of degree rather than a qualitative difference -- though if the inefficiency is large enough it could easily become one, as early versions of Erlang proved.
Maybe it is the use of a VM that should disqualify a language from being a 'systems language' by your definition?
But personally I don't care about that enough to sacrifice code readability to the point that you add entirely new footguns to a language that aims for safety because for code with long term staying power readability and ability to reason about the code is a very important property. Just as I would rather have memory safety than not (but there are many ways to achieve that particular goal).
What is amusing is that the Async/Await anti-pattern is now prevalent and just about the only 'systems languages' (using your definition) that have not adopted it are C and Go.
Honestly, this is why I find "systems language" kind of an annoying term, because you're not wrong, but it's also true that we're talking about two different things. I just don't think we have good language terminology for the different sorts of languages here.
> could you write Erlang in Erlang entirely
I think this sort of question is where theory and practice diverge: sure, due to Turing completeness. But theory in this sense doesn't care about things like runtime performance, or maintainability.
> But personally I don't care about that enough
Some people and some domains do need to care about implementing the low-level details of a system. The VMs and runtimes and operating systems. And that's what I meant by my original post.
So, as the author of not one but two operating systems (one of which I've recently published, another will likely never see daylight): I've never felt the need for 'async/await' at the OS kernel level. And above that it is essentially all applications, and there almost everything has a runtime, usually in the form of a standard library.
I agree with you that writing Erlang in Erlang today is not feasible for the runtime performance matter, less so for maintainability (which I've found to be excellent for anything I ever did in Erlang, probably better than any other language I've used).
And effectively it is maintainability that we are talking about here because that is where this particular pattern makes life considerably harder. It is hard enough to reason about async code 20 minutes after you wrote it, much harder still if you have to get into a code base that you did not write or if you have to dig in six months (or a decade) later to solve some problem.
I get your gripe about the term systems language, but we can just delineate it in a descriptive way so we are not constrained by terminology that ill fits the various use cases. Low-level language or runtime-free language would be fine as well (the 'no true Scotsman' of systems languages ;) ).
But in the end this is about the actor model, not about Erlang per se; that is just one particular example. I don't see any reason why the actor model could not be a first-class citizen in a systems-oriented language. You could choose to use it or not, and if you did, that would have certain consequences, just like using async/await has all kinds of consequences -- and most likely, when writing low-level OS code, you would not be using it anyway.
I mean, I'm also not saying async/await is critical for kernels. I'm only saying that "everything is an actor" isn't really possible at the language level.
Async/await is used for a lot of RTOS like things in Rust. At Oxide, we deliberately did not do that, and did something much closer to actors, actually. Both patterns are absolutely viable, for sure. But as patterns, and not as language primitives, at least on the actor side.
If not, your async code is a deterministic state machine. They're going to complete in the same order. Async is just a way of manually scheduling task switches.
The system Rust has is a lot better than that of Python or JavaScript. Cleanly separating construction from running/polling makes it a lot more predictable and easier to understand what's happening, and to conveniently compose things together using it.
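A small sketch of that separation (my example): constructing the future does nothing; the body only runs once it is awaited.

```rust
async fn work() -> u32 {
    println!("running");
    7
}

#[tokio::main]
async fn main() {
    let fut = work();   // nothing printed yet: this only builds a state machine
    println!("constructed");
    let n = fut.await;  // only now does "running" print
    println!("{n}");
    // Output: constructed, running, 7. A JS Promise, by contrast, starts
    // executing eagerly at construction.
}
```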
I encountered Rust sometime around 2010. I was working a couple of blocks away from Mozilla's Mountain View office and would often overhear people talking about it at Dana Street Coffee Roasting. A couple years later I was working at Mozilla trying to unf*ck their TLS and libpkix implementations. The team rocked, but management sucked. The best part about it is I kept bumping into Brendan Eich and having great conversations about Lisp. I can't remember if P. C. Walton worked there, but on several occasions he was in the office and gave me a VERY good, VERY succinct description of the language.
I wrote a fair amount of Rust code in 2012, none of it on a real project like servo. All of it just code to try to understand what the language was trying to make easy. None of that code compiles any more. (Or enough of it fails that I stopped looking at it.)
It's not so much a "critique" as it is a confirmation that when the crustaceans tell you the language definition isn't finished yet, believe them. I like the feel of Rust much more than C/C++, but I have C code I wrote in 1982 that still compiles and does what you think it should do. C++ code from 1990 still seems to compile. I have Rust code from 2014 that won't even compile.
Rust is a cool language and I hope it eventually settles down enough to be considered for "real" projects. I've done a little bit of Ada in the last year and I really, really want something better. But... reverse in-compatibility is a deal-breaker for some corners of the commercial world.
And yes, I know that (like Python) you can build an environment that lets you continue to compile old code with old compilers and some old code with new compilers. But the projects I'm talking about have life-times measured in decades. Will rustc in 2055 be able to compile a program written today? It doesn't seem to be on the top of minds of most of the Rust community.
I'm not saying Rust is ugly. In fact, I really like some aspects of the language. And post 1.0 is MUCH better than pre 1.0. But if we could go for a few years without breaking changes that would be nice.
Rust achieved 1.0 in 2015, three years after you wrote that code in 2012. Stability wasn't guaranteed until then; afterwards, it has been. Code from 2015 still compiles today. It's no surprise that 2014 code doesn't compile, as it predates that guarantee.
> I hope it eventually settles down enough to be considered for "real" projects.
Rust is being deployed for real projects at pretty much every major tech company at this point, and is used on the critical path of real infrastructure.
> Will rustc in 2055 be able to compile a program written today? It doesn't seem to be on the top of minds of most of the Rust community.
This has been a very strong focus of the Rust project for a long time; the reason you saw so much breakage from 2010-2015 was to make sure that it was in the state that we'd be okay with making it stable, and then the track record from 2015 to now has been excellent. There have been a few exceptions, but they've generally been quite small, and needed for soundness fixes.
_Some_ code from 2015 still compiles today. Quite a lot of code from January 2024 won't compile today.
Turns out that in practice, the promise not to break code in the past isn't that strong, exceptions that break most of the ecosystem are "Acceptable", and the Rust developers' response is "Sucks for you, you'll just have to update". See:
That’s what I meant by some exceptions. This one took five minutes to fix. I agree they should have been more careful here, specifically because it was such an easy fix, it wasn’t worth the breakage. It’s the only release I can even remember since 1.0 that I actually had to do fixes for; it truly is rare.
(Also a lot of projects that broke that day would not be broken today, because if there was a lock file, it was updated, and if there wasn’t, the new version with the fix would be selected. Only projects that had that specific dependency on time in their lock file broke.)
Sure. I'm less of a language purist and more of an app / infrastructure developer. [*] I manage projects where I have to plan for how many engineers I will need to staff projects 1, 2, 5 and 10 years in the future. I tend to work in a mature industries (with the exception where I worked for Mozilla and Linden Lab.) We have a vested interest in not throwing away our investment in the technical process. For better or worse, that often means maintaining code-bases for more than a couple of decades.
If I compare the historical stability of C++ with Rust, I find Rust lacking. As I mentioned before, I like the language, but I can't recommend using it because of churn. Python has the same problem. There are features of the Python language I appreciate, but it doesn't matter because, like Rust, I'm going to wait for a decade to see if there are breaking changes to the language. If not, I'll consider it.
I am not saying your baby is ugly. I'm saying your baby is growing but I need a fully-grown thing right now.
Edit: I may have been less obvious about why using a language whose definition changes every several months is bad for code-bases that want a multi-decade lifetime. Consider Python. You get a new, incompatible version of Python every year (yes, 3.X is MUCH, MUCH better than 2.X, but there's still no guarantee there won't be breaking changes.) You only get security updates for three (?) versions back. 3.9, which released in 2020 is currently unsupported. Python purists will point out you can run Python 3.9 apps in a properly configured venv, but that's not the point. The point is I would like to use my application in an environment that is supported. Not only supported by the "official" project, but also by third parties. I unfortunately inherited a project where someone decided to stuff some Python 3.6 code in an AWS Lambda. Had I not worked evenings and weekends to update the then-unsupported open-source software to 3.9, it would have broken when Amazon removed support for 3.6.
And yes, I understand I am describing a problem with a Python project and not a Rust project. That's because I haven't used Rust for mission-critical projects because after dealing with the hassle of updating Python code every year, I don't want to have to update the Rust code myself or try to find people skilled enough to understand that the version of Rust they learned is not the current version of Rust.
Go for a decade without breaking changes and then we'll talk.
[*] Not exactly true, my inner pedant comes out when people talk about Lisp.
I fully agree with you that stability is important, but I do think that you're letting your Python experience color what you think of Rust here. Python takes backwards compatibility less seriously than Rust, and it shows. Rust simply does not churn at the same rate as Python does.
There has already been a decade of Rust with roughly the same level of breaking changes as C++. The issue talked about above is roughly the same as, for example, how gcc can't upgrade to C++20 without a patch: https://gcc.gnu.org/pipermail/gcc-patches/2025-November/7007...
That patch is tiny. Fixing the breakage talked about above was not even changing code, it was running `cargo update -p time`. And it was a notable bit of breakage because even that level of breakage was exceptional in Rust land.
As a practical example, Meta has > 1 million lines of code in their monorepo, and last I heard, they update to each new release within a week of it coming out, and the person who does that update reports that 99% of the time, it's simply updating the version, no changes needed.
> The Facebook monorepo's Rust compiler has been updated promptly every 6 weeks for more than 7 years and 54 Rust releases, usually within 2 weeks of the upstream release.
> I estimate it's about ½ hour per 1 million lines, on average.
> Rust is a cool language and I hope it eventually settles down enough to be considered for "real" projects.
I keep seeing folks with this "when will Rust be ready" sentiment and it feels a bit dated to me at this point.
At my last job we built machine control software for farm equipment (embedded Linux, call it firmware if you like). The kind of thing that would have been in C or C++ not long ago. Worked great, Rust itself was never the issue. Code from the very first versions continued to work with no problems over years of feature additions, bugfixes, rewriting, edition upgrades, etc.
The job before that, my team wrote a large library of 3D geometry analysis algorithms code that powered some fun and novel CAD manufacturing tools. In Rust. Worked great. It was fast enough, and crucially, we could run it against user-provided data without feeling like we were going to get owned by some buffer overrun. 10 years earlier it 100% would have been in C or C++ and I would have been terrified to throw externally generated user data at it. We were able to do what we needed to do and it served real paying users. What more do you need?
Rust is everywhere. It's in the browser. It's in databases. It's in container/VM runtimes. It's in networking code. It's in firmware. It's in the OS. It's in application code. It's in cryptography. It's in Android. Rust is all over the place.
The only other thing I can think of with a similar combined breadth and depth of deployment to Rust on "real" projects (other than C/C++) is Java.
If I needed something to both work and be maintainable by somebody in 2055, Rust is one of the few things I'd bother to put on the list, alongside C, C++, Java, Python, and JavaScript.
What you've written at the end there is a critique of Rust in 2012, pointing out that it's not a stable language, which, it isn't, as reflected in its versioning, in 2012.
But a few years later, in 2015, Rust 1.0 shipped. So the stability really firms up from there.
I happen to have in front of me the first commit of the first modestly sized piece of software I wrote in Rust in April 2021. Which compiles today just fine and works exactly as it did when it was written, on this brand new Rust toolchain.
I am aware that Rust 1.0 shipped. I am also aware that every year breaking changes in the language occur. It is 2025, it should not have taken this long to "settle down."
What are you claiming constitutes a "breaking change in the language" ?
Sibling comments talk about a 2024 stdlib change that broke some people because they had written code that depended on a type inference which, with a new enough stdlib, became ambiguous, so the compiler requires that you disambiguate or your code doesn't compile with the newer library.
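For a feel of that failure mode, a made-up miniature (not the actual std change): code that leans on there being exactly one candidate impl compiles fine, and becomes ambiguous the moment a second impl appears.

```rust
struct Wrapper<T>(T);

// Today: exactly one way to collect u8s into a Wrapper<_>.
impl FromIterator<u8> for Wrapper<Vec<u8>> {
    fn from_iter<I: IntoIterator<Item = u8>>(it: I) -> Self {
        Wrapper(it.into_iter().collect())
    }
}

fn main() {
    // Compiles: the `_` has a unique candidate, Vec<u8>.
    let _w: Wrapper<_> = [1u8, 2, 3].into_iter().collect();
    // If a later version added, say, impl FromIterator<u8> for Wrapper<String>,
    // the `_` above would have two candidates and inference would fail,
    // even though no existing code or impl changed.
}
```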
So, that's not a breaking change in the language. It's annoying, and ideally shouldn't have happened. But in contrast, the two languages you praised (C and C++) have in the last ten years made real breaking changes to their actual language, and as expected, the same people who insist Rust isn't "stable" shrug off the extra work from that as No Big Deal.
As someone who wrote C for decades and now writes Rust instead it's very striking how much worse the "bit rot" is in reality for C.
I'll definitely say that. It's my biggest problem with the language. Type-variable declaration lists, the pub keyword instead of export lists, macros instead of functors, out-of-line method declaration, special keywords and symbols all over the place that could have just been built-in type constructors, the horrible :: as module accessor, which synergizes with a weird decision to nest modules extensively so that code is even noisier.
Rust is a very ugly language to me. Ugly enough that while I like many aspects of its semantic design, I will never use it personally. I would rate it to be about as ugly as Ada, just for different reasons.
Sure. You can call anything ugly. I got in trouble with Walton because I said the borrow checker should be an external tool for C++. With the exception of Lisp, which is $DEITY's own language, all languages have problems. There are aspects of Rust that I like more than C++ (traits vs. classes, for instance.)
The main difference here is I won't down-vote you because you say you don't like Rust.
I think Rust has a lot of nice things about it, semantically. I think sepples is extremely ugly too. I didn't downvote you though, I don't even know how to do that on HN.
>Rust is a cool language and I hope it eventually settles down enough to be considered for "real" projects. I've done a little bit of Ada in the last year and I really, really want something better. But... reverse in-compatibility is a deal-breaker for some corners of the commercial world.
Critical loadbearing chunks of AWS, Cloudflare, Azure and Google are built on it. It's in both the Windows & Linux kernels, shipped on billions of devices, processing probably tens or hundreds of exabytes of data every day. It's running on satellites in space and in production cars. Respectfully, you don't know what you're talking about.
I’m also generally very glad at where it went from here. It took a tremendous amount of work from so many people to get there.