First, I have no desire to handle both memory and external resources in a unified way, because memory management and resource management have different needs.
Memory is not just "one kind of resource"; it's a very specific type of resource that, if it has to be managed manually, inherently creates cross-cutting concerns. And memory allocation is pervasive, often implicit in other language constructs. Garbage collectors get to cheat here, because they have a global view that ignores module boundaries and information hiding.
The classical example is that introducing a caching mechanism usually introduces API breaks. Where a function normally returns a pointer/reference/unique pointer and makes the caller responsible for freeing memory (whether through convention such as in C or enforced/automated through language mechanisms such as in Rust), the moment you cache it, you need to return a reference-counted pointer, because now the memory can only be freed if both the caller and the cache don't use it anymore. And that change from a non-reference-counted pointer to a reference-counted pointer is a breaking API change.
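To make that concrete, here's a minimal Rust sketch (the Registry and Data names are made up for illustration). The ownership story is baked into the return type, so adding the cache forces a signature change on every caller:

    use std::collections::HashMap;
    use std::rc::Rc;

    struct Data(String);

    // Before caching: the caller gets sole ownership and the memory is
    // freed when the Box goes out of scope.
    fn load(key: &str) -> Box<Data> {
        Box::new(Data(format!("loaded {key}")))
    }

    // After caching: the cache also holds on to the value, so ownership
    // must be shared. Box<Data> becomes Rc<Data>, a breaking change.
    struct Registry {
        cache: HashMap<String, Rc<Data>>,
    }

    impl Registry {
        fn load(&mut self, key: &str) -> Rc<Data> {
            let entry = self
                .cache
                .entry(key.to_string())
                .or_insert_with(|| Rc::new(Data(format!("loaded {key}"))));
            Rc::clone(entry)
        }
    }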
There are plenty more situations where manual memory management interacts poorly with modularity, such as filter() style functions, or the various complications that arise from closures capturing their local environment.
Conversely, it is absolutely possible to have pretty straightforward resource management with guaranteed and predictable lifetimes in a GCed language (though, alas, there's a lack of direct language support for that).
The general approach is as follows: each resource's constructor takes an owner argument, explicit or implicit (the implicit owner being the current scope, however the language defines it). You can also transfer a resource to a different owner (reparenting) [1].
Owners of resources can be lifetime managers such as scopes (which need not correspond to lexical scopes and are more like transactions), managers with more complex lifetime logic (such as a resource pool), or objects that are themselves owned (e.g. if you have resources that depend on other resources). When the owner's lifetime finishes, it calls a dispose function on all owned objects.
Because an owner is required in order to construct such a resource object, by virtue of its constructor requiring one (unlike a C# using clause or Java's try-with-resources), it is impossible to accidentally create such a resource without a controlled lifetime [2].
Note that this is not equivalent to RAII. You can have any number of non-owning references to such resource objects, essentially the equivalent of a weak pointer. In my experience, this is generally a good thing, because you do not want a hidden pointer secretly extending the lifetime of a potentially expensive resource. I prefer resource lifetimes to be explicit and to get an error if a resource is used past its intended lifetime.
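For concreteness, here is a minimal sketch of the pattern. I'm writing it in Rust to keep it compact (the Scope and Dispose names are mine, not from any particular library), but the same shape carries over directly to GCed languages: construction requires an owner, the owner disposes everything it owns when it finishes, and everyone else holds weak handles that fail loudly when used past the resource's lifetime.

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    trait Dispose {
        fn dispose(&self);
    }

    // A lifetime manager. It owns resources and disposes them when it
    // finishes; it need not correspond to a lexical scope.
    struct Scope {
        owned: RefCell<Vec<Rc<dyn Dispose>>>,
    }

    impl Scope {
        fn new() -> Scope {
            Scope { owned: RefCell::new(Vec::new()) }
        }

        fn adopt(&self, r: Rc<dyn Dispose>) {
            self.owned.borrow_mut().push(r);
        }

        // Ends the scope: dispose everything, then drop the strong
        // references, which invalidates all outstanding weak handles.
        fn finish(self) {
            for r in self.owned.borrow().iter() {
                r.dispose();
            }
        }
    }

    struct FileHandle {
        name: String,
    }

    impl Dispose for FileHandle {
        fn dispose(&self) {
            println!("closing {}", self.name);
        }
    }

    impl FileHandle {
        // The constructor requires an owner, so the resource cannot be
        // created without a controlled lifetime.
        fn open(owner: &Scope, name: &str) -> Weak<FileHandle> {
            let fh = Rc::new(FileHandle { name: name.to_string() });
            let weak = Rc::downgrade(&fh);
            owner.adopt(fh);
            weak
        }
    }

    fn main() {
        let scope = Scope::new();
        let h = FileHandle::open(&scope, "data.txt");
        assert!(h.upgrade().is_some()); // usable while the owner lives
        scope.finish();
        assert!(h.upgrade().is_none()); // use past the lifetime is an error
    }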
Introducing caching is a semantic API change regardless of how memory is managed, so it should be breaking. By introducing caching you have also accidentally introduced sharing, because two consecutive API calls can return references to the same object when previously they couldn't. This can lead to correctness issues.
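A small sketch of what I mean; Rust at least makes the sharing visible in the type (the Cache name is made up), but in most languages the aliasing would be completely silent:

    use std::cell::RefCell;
    use std::collections::HashMap;
    use std::rc::Rc;

    // A cache that hands out shared, mutable values.
    struct Cache {
        map: HashMap<String, Rc<RefCell<Vec<i32>>>>,
    }

    impl Cache {
        fn get(&mut self, key: &str) -> Rc<RefCell<Vec<i32>>> {
            let entry = self
                .map
                .entry(key.to_string())
                .or_insert_with(|| Rc::new(RefCell::new(vec![1, 2, 3])));
            Rc::clone(entry)
        }
    }

    fn main() {
        let mut cache = Cache { map: HashMap::new() };
        let a = cache.get("k");
        a.borrow_mut().push(4); // one caller "locally" tweaks its result...
        let b = cache.get("k");
        assert_eq!(b.borrow().len(), 4); // ...and a later caller sees it
    }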
This is not a problem if the returned object is immutable. If you're returning mutable objects, then that already needs to be documented as part of the API, not be an incidental discovery from the object being reference counted.
In any event, that is hardly the only case of manual memory management breaking abstractions.
It may not be a problem in a language like Rust, where the compiler understands the concept of immutability. But most (if not all) mainstream languages except Rust don’t. Your object may be immutable in January but in February someone makes a change in a class used 10 layers below and suddenly your class is no longer immutable. Add invisible sharing to that and your code explodes.
I will happily pay a small price in breaking some abstractions and get the protection from fuckups like that every single time.
Abstractions understood as „I can change only X without changing Y” (aka GoF OOP patterns or most Clean Code OOP patterns) are overrated anyway. Readability and understandability of code are more important than the ability to add something without changing something else. If code is readable and constraints are enforced by the compiler, it is easy and safe to change. Optimizing for the ability to change is better than optimizing for not having to make changes, because changes will happen no matter how beautiful an abstraction you make.
> It may not be a problem in a language like Rust, where the compiler understands the concept of immutability. But most (if not all) mainstream languages except Rust don’t. Your object may be immutable in January but in February someone makes a change in a class used 10 layers below and suddenly your class is no longer immutable. Add invisible sharing to that and your code explodes.
No offense, but this strikes me as a strawman argument. What software design methodology leads to immutable types suddenly being made mutable? And outside of dynamic languages, where is immutability not supported?
Note that any language with proper information hiding can express immutability (literally going back to the days of Modula-2), and several mainstream languages additionally have first-class language features to represent immutability for added convenience.
Finally, this is only one example. One underlying problem is that if all reference counts need to be capped at one, you have to either switch to full reference counting or copy the underlying data.
You can see this play out in the std::collections::HashSet interface. Operations like intersection, union, difference, and symmetric difference return iterators rather than sets. There are also operators that do return sets, such as bitor, but look at how that one is implemented.
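Paraphrasing the standard library source from memory (the exact bounds may differ between versions), it is essentially this; note the T: Clone bound and the .cloned() call:

    impl<T, S> BitOr<&HashSet<T, S>> for &HashSet<T, S>
    where
        T: Eq + Hash + Clone,
        S: BuildHasher + Default,
    {
        type Output = HashSet<T, S>;

        fn bitor(self, rhs: &HashSet<T, S>) -> HashSet<T, S> {
            // Walk the union and clone every element into a fresh set.
            self.union(rhs).cloned().collect()
        }
    }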
Because the result and the argument can't in general share references to elements, you end up essentially doing a deep copy for, e.g., a set of strings, which is exactly the kind of cost you want to minimize. Thus, limitations on the sharing of references dictate aspects of the API.
> Abstractions understood as „I can change only X without changing Y” (aka GoF OOP patterns or most Clean Code OOP patterns) are overrated anyway. Readability and understandability of code are more important than the ability to add something without changing something else. If code is readable and constraints are enforced by the compiler, it is easy and safe to change.
So, you never work with third-party libraries where you do not control the API and have never written libraries to be consumed by other teams/third parties?
Java cannot express immutability. `final` is not transitive, so nothing stops an unrelated change in code from breaking something that was immutable earlier. Same with Golang.
[1] Note that this is conceptually similar to talloc. https://talloc.samba.org/talloc/doc/html/index.html
[2] Obviously, it is still possible in any language to do the equivalent of a raw fopen() call, but that's not something that RAII can fix, either.