That goes for manual memory management -- and certainly languages with a reference-counting GC, like Rust -- as well. The main difference by far is footprint overhead.
> and certainly languages with a reference-counting GC, like Rust
It's a mistake to say that the Rust language has reference counting. There's a pair of reference-counting wrapper types in its standard library (Rc and Arc), or you can roll your own, but there's no special support for these types in the language, and their use is optional. Most of the time, you won't be using these reference-counting types. Box (the generic heap-allocated or "boxed" type) doesn't use reference counting. String (and its several specialized variants) doesn't use it. Vec (the generic heap-allocated array type) doesn't use it. HashMap, HashSet, BTreeMap, BTreeSet, none of them use reference counting. And so on. You can write a lot of Rust code without using reference counting even once.
What the Rust language has is just C++-style RAII: when a value goes out of scope, if it implements Drop, its Drop::drop is called.
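To make that concrete, here's a minimal sketch (the `Connection` type is made up for illustration): there is no reference count anywhere; the compiler just inserts the drop call at the end of the scope.

    // Minimal sketch: plain RAII, no reference counting anywhere.
    // `Connection` is an illustrative type, not a real library API.
    struct Connection {
        name: String, // heap-allocated, but not reference-counted
    }

    impl Drop for Connection {
        fn drop(&mut self) {
            // Called exactly once, when the value goes out of scope.
            println!("closing {}", self.name);
        }
    }

    fn main() {
        let conn = Connection { name: String::from("db-primary") };
        println!("using {}", conn.name);
        // No explicit free and no refcount: the compiler inserts the call to
        // Drop::drop (and frees the String) right here, at the end of scope.
    }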
> It's a mistake to say that the Rust language has reference counting.
Having these types in the standard library is the language having those types.
Perhaps it's not integrated to the level that a language like Swift is. However, I think it's reasonable to say the language supports Rc when the standard library supports it. I'd say the same thing about C++ with `shared_ptr`.
Otherwise you end up in weird pedantic notions about what a language has or does not have. Does C have a heap? Well, technically no since malloc and free are just function calls in the standard library and you can write valid C programs without calling those functions.
> Having these types in the standard library is the language having those types.
It depends on whether you consider the standard library an indivisible part of the language or not. For Rust, it's clearly not the case, since you have the #![no_std] mode in which only a subset of the standard library is available, and this subset does not include these reference counted wrapper types (or any heap-allocated type at all).
> Perhaps it's not integrated to the level that a language like Swift is. However, I think it's reasonable to say the language supports Rc when the standard library supports it. I'd say the same thing about C++ with `shared_ptr`.
It's one thing to say a language "supports reference counting", which only means you can use reference counting with it, and another thing to say "[...] languages with a reference-counting GC", which implies that the language uses a GC for everything, and that GC is a reference-counting GC.
> Does C have a heap? Well, technically no since malloc and free are just function calls in the standard library and you can write valid C programs without calling those functions.
It's actually the same thing: C can run in either a "hosted" environment or a "freestanding" environment, and in the latter, most of the standard library is not available, including malloc and free. So C does not necessarily have a heap when running in a freestanding environment.
It's not part of std exactly; it's part of alloc and is re-exported by std.
It would still be available in a #![no_std] environment using `extern crate alloc`.
This crate generally abstracts over the concept of allocation too, so relying on it doesn't require you to also have an allocator; it just requires that someone, at some point, specify one with #[global_allocator].
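A rough sketch of how those pieces fit together (library side only; `MyAllocator` in the comment is hypothetical): a #![no_std] crate can still hand out Rc values by opting into the `alloc` crate, and the final program supplies the allocator.

    // Sketch: a #![no_std] library that uses Rc via the `alloc` crate.
    #![no_std]

    extern crate alloc;

    use alloc::rc::Rc;
    use alloc::vec::Vec;

    pub fn shared_list() -> Rc<Vec<u32>> {
        // The allocation works because the final program registers an
        // allocator somewhere, roughly:
        //
        //   #[global_allocator]
        //   static ALLOC: MyAllocator = MyAllocator; // MyAllocator: GlobalAlloc
        //
        // (or it links std, which supplies a default global allocator).
        Rc::new(Vec::from([1, 2, 3]))
    }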
It is not optional if you have objects with dynamic lifetimes and allocating them for the whole duration of the program is not an option.
Sure, their use can be much less frequent than in a managed language that can only do automatic memory management, but RC is objectively worse than tracing GC from most perspectives, except for the fact that it doesn't need runtime support and has a slightly lower memory overhead.
It goes even a step further. You can write a lot of Rust code with exactly zero heap allocations in the critical paths, while at the same time we’re struggling to get our Java code to stay below a 1 GB/s heap allocation rate. Rust refcounting could be 100x slower than Java GC and it would still win.
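For what it's worth, a toy sketch of what an allocation-free hot path can look like (the checksum is just an illustration): everything below lives on the stack and only borrows, so neither the allocator nor any reference count is touched.

    // Toy example: a "critical path" that performs zero heap allocations.
    fn checksum(frame: &[u8]) -> u8 {
        // Borrows the buffer; no allocation, no refcount traffic.
        frame.iter().fold(0u8, |acc, b| acc.wrapping_add(*b))
    }

    fn main() {
        let mut buf = [0u8; 256]; // fixed-size buffer on the stack
        for (i, b) in buf.iter_mut().enumerate() {
            *b = i as u8;
        }
        let sum = checksum(&buf[..64]); // hot path: only borrows, never allocates
        println!("checksum = {sum}");
    }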
I think the important thing to understand is that reference counting isn't any better than (and is often worse than) "regular" garbage collection.
The point of manual memory management is to come up with problem-specific strategies to avoid or at least reduce dynamic memory allocation, not to insert manual release/free calls for individual objects ;)
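As a sketch of one such problem-specific strategy (names made up for illustration): allocate a scratch buffer once and reuse it per frame/request, clearing it instead of freeing and reallocating individual objects.

    // Sketch of a reuse strategy: one up-front allocation, cleared and
    // reused each iteration instead of per-object alloc/free churn.
    struct Scratch {
        points: Vec<(f32, f32)>,
    }

    fn process_frame(scratch: &mut Scratch, n: usize) -> f32 {
        scratch.points.clear(); // keeps capacity; no free, no realloc
        for i in 0..n {
            scratch.points.push((i as f32, (i * i) as f32));
        }
        scratch.points.iter().map(|(x, y)| x + y).sum()
    }

    fn main() {
        let mut scratch = Scratch { points: Vec::with_capacity(1024) };
        for frame in 0..3 {
            // After warm-up, each frame reuses the same heap allocation.
            let total = process_frame(&mut scratch, 100 + frame);
            println!("frame {frame}: total = {total}");
        }
    }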
Reference counting is regular garbage collection. The two broad classes of GC algorithms are tracing and refcounting, and while they can converge to similar behaviour, usually the former optimises for throughput while the latter optimises for memory footprint; latency is similar these days.
> ...while I agree, for many C++ and Rust coders statements like this are pure heresy ;)
It's a matter of definitions. For many people, "garbage collection" refers only to tracing GC, and reference counting is a separate category. In my experience, that's the common usage; insisting that "reference counting is formally (in some paper from the last century) also defined as a form of GC" will not magically change the opinions "many C++ and Rust coders" have about tracing GC. In fact, I'd say that insisting on this nomenclature point only weakens the whole argument; tracing GC should stand on its own merits, and not depend on some nomenclature equivalence to be accepted (if quibbling about nomenclature is your strongest argument, your arguments are weak).
There's no need to "change opinions". People who work on GCs know that reference counting and tracing are the two general GC strategies. The only people who don't think of refcounting as a GC are people who simply aren't familiar with GCs and how they work. If they also think refcounting has lower latencies (let alone higher throughput) than tracing, then they're also just wrong. No one needs to "insist" on the GC nomenclature. You're either familiar with it or you're not (and since most people are not, they commonly make mistakes on the subject). Also, given that tracing GCs are used by ~90% of the market, they hardly require justification anymore; they've won by a large margin over the application space (which constitutes most of software). However, it's nice to occasionally educate those unfamiliar with the subject on GC algorithms and nomenclature.
I have to wonder whether some of this is semantic drift over time or context. My recollection since undergrad (a few decades ago) involves treating “garbage collection” as referring to tracing garbage collection, and “reference counting” as a separate mechanism. There is still a term for the category including both, only that term is not “garbage collection” but “automatic memory management”. But what I see nowadays is closer to what you describe.
If anything, the drift is towards GC meaning only tracing, as it is so dominant in the languages that are normally considered to have a GC. But before C++ (via Boost) introduced shared pointers, and before Swift's ARC, I'd expect the separation to basically not have existed.
I agree; I meant “including” in the non-restrictive sense, not “including only”. Stack allocation is a special case where the lifetimes are arranged in a convenient way—see also escape analysis in languages where stack allocation isn't explicitly supported at the language level but can be added by the compiler.
> Also, given that tracing GCs are used by ~90% of the market, they hardly require justification anymore; they've won by a large margin over the application space (which constitutes most of software).
Tracing GCs have clearly proven themselves and are everywhere (JVM, CLR, Go, Dart, OCaml, etc.) but we can't ignore that the Apple ecosystem (Swift) is using ARC. That's a significant share of the "market". Python and Ruby also use reference counting, but I don't think anyone is considering them state-of-the-art GC.
Except that obligate ARC a la Swift has even lower throughput than obligate tracing GC. It's the worst possible choice unless you really care about low latency and deterministic freeing of resources (and even then, using RAII for common tree-like allocation patterns like Rust does will perform better).
You're right, I should have said "languages where GC is the primary means of managing heap memory are used by 90% of the market" rather than focused on a specific algorithm.
Ruby does not use reference counting. It has a generational mark-and-sweep GC that is incremental and uses compaction. I doubt it would be state of the art, but it is not too bad nowadays.
> Tools to automate the “actually freeing the memory” part, like lifetimes in Rust and RAII in C++, don’t solve these problems. They absolutely aid correctness, something else you should care deeply about, but they do nothing to simplify all this machinery.
Rust is more similar to C++, in that the compiler inserts calls to free as variables exit scope. Runtime reference counting is limited to those objects wrapped with Rc or Arc.
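A small sketch of that distinction (toy values, nothing library-specific): the Box is freed by a compiler-inserted drop with no counter anywhere, while only the Rc-wrapped value carries a count, and only cloning the Rc touches it.

    use std::rc::Rc;

    fn main() {
        {
            let boxed = Box::new([0u64; 8]); // plain heap allocation, no refcount
            println!("boxed[0] = {}", boxed[0]);
        } // compiler-inserted drop frees `boxed` here, C++-RAII style

        // Runtime counting only happens for values explicitly wrapped in Rc/Arc.
        let shared = Rc::new(String::from("shared"));
        let alias = Rc::clone(&shared);                    // count goes 1 -> 2
        println!("count = {}", Rc::strong_count(&shared)); // prints 2
        drop(alias);                                       // count goes 2 -> 1
        println!("count = {}", Rc::strong_count(&shared)); // prints 1
    }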
I agree with pron’s larger point. GC is fine for most applications. It’s just factually inaccurate to compare Rust’s memory management with languages like Python and PHP.