It's not so much about "I need that later", but about starting to think about a program differently to begin with. If you think about the things that "are" rather than what should happen in what order, you get the chance to rearrange everything and do some optimizations that eager processing might prevent. Most of the OOP power comes from uniformity, which many systems unfortunately break. When you work with a system in which everything is an object, you can relax a lot, although everything is conceptually slower. Most software doesn't even have to be that fast. If it has to be, feel free not to use Ruby.
The tradeoff is most often performance versus comprehensibility, and I'd argue in favor of the latter. Of course, ergonomics and ease of use are hard to measure (not impossible, but empirical studies are difficult and expensive to do), and the tradeoff is similar for all higher-level languages. Consider the overhead of a class `Year`, which validates an integer in its constructor:
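A hedged sketch of what such a class could look like; the accepted range (1..9999) is an assumption made for illustration:

```ruby
# Year wraps an integer and refuses to exist in an invalid state.
# The 1..9999 range is an illustrative assumption.
class Year
  attr_reader :value

  def initialize(value)
    raise ArgumentError, "not an integer: #{value.inspect}" unless value.is_a?(Integer)
    raise ArgumentError, "year out of range: #{value}" unless (1..9999).cover?(value)

    @value = value
  end

  def to_i
    @value
  end
end
```

Every `Year` instance that exists has already passed these checks, so no caller ever has to repeat them.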
Insane! Expensive! you might say. All it does is encapsulate some integer! But I'd take that little overhead any day over the scattered insecurity of my colleagues, who in every calling method do the same if-then-else check for the year range over and over again whenever they are handed an int and need to find out what's in it. The class provides locality for my concern that I only ever want to deal with valid years.
When you find a year-typed object somewhere in your system, it is guaranteed to be valid. That creates peace of mind, which is way more expensive than RAM.
It seems to me that this is mostly a design problem. Why would you need to check this data everywhere?
Checking the validity of the data is only necessary once. In a web context, this is the responsibility of the controller. Only once the data is sanitized should the controller inject it into 'anything else'.
Sanitizing data is not the responsibility of a model/service/repository/view/anything else, and trying to do so indeed leads to a lot of bugs and headaches.
Having this kind of object, which checks your data and may throw exceptions anywhere in the code, only increases the failure surface of your codebase by adding unexpected exceptions.
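The alternative being argued for here can be sketched like so (the function names and the accepted range are illustrative assumptions):

```ruby
# Validation happens exactly once, at the boundary (the "controller");
# everything downstream receives an already-checked plain integer.
# Names and the 1..9999 range are illustrative assumptions.
def parse_year(param)
  year = Integer(param) # raises ArgumentError on junk input
  raise ArgumentError, "year out of range: #{year}" unless (1..9999).cover?(year)

  year
end

# Downstream code takes the integer on trust and never re-validates.
def century_of(year)
  (year - 1) / 100 + 1
end
```

Under this design, `century_of` stays a dumb function over plain data, and exceptions can only originate at the boundary.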
Comprehensibility is in no way related to using objects. Whether you choose an integer, a class, or a subtype of integer does not make anything more readable; it all depends on the quality of the naming. A variable named `year` or `startYear` will always make the code more readable than one named `start` or `begin`, regardless of whether it contains an object or an integer.
> Checking the validity of the data is only necessary once.
Exactly, which is why a class is the perfect singular location to place it. The object itself is just a pointer, and the method doesn't get copied around, so objects appear to be the perfect mechanism for localizing code.
> unexpected exceptions
True, exceptions introduce a communicative issue, but so does returning in-band values.
For example, what should the result of
open('file.txt').read()
be, when the current user does not have permissions to read file.txt?
I wouldn't argue against either approach, although I have my preferences of course. I like the exception model here, but you are right: rare exceptions, communicated poorly, can be surprising and painful.
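To make the contrast concrete, here is a small Ruby sketch of both styles (the function names are illustrative):

```ruby
# Exception style: failure is signaled out of band and cannot be
# silently ignored by the caller.
def read_file!(path)
  File.read(path) # raises Errno::ENOENT, Errno::EACCES, ... on failure
end

# In-band style: failure is folded into the return value, which every
# caller must then remember to check for nil.
def read_file(path)
  File.read(path)
rescue SystemCallError
  nil
end
```

With the in-band version, a forgotten nil check surfaces later as a confusing `NoMethodError`, far away from the actual cause.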
> Comprehensibility is in no way related to using objects.
By itself this statement is false, ... (hear me out)
> It all depends on the quality of the naming
... But this makes me understand what you're trying to say, and I 100% agree with it. In fact, I conducted some experimental research on identifier naming: https://link.springer.com/article/10.1007%2Fs10664-018-9621-... (Sci-hub or I can provide a preprint).
You are right in that objects and their use don't automagically turn a codebase into a field of readily available knowledge. Many mechanisms applied in OOP languages actively work AGAINST comprehension (for example, buried exceptions originating from deep within an object graph). But objects are still tightly coupled to comprehension, even historically speaking. They were first used to make it possible to model physical simulations without requiring users to know much about computer architectures (Simula 67). Objects are meant to "model", that is, to symbolically represent concepts, entities, or physical things in such a way that they might show agentic behavior. This is fundamentally different from having stupid data and smart functions; it is a completely different means of "Erkenntnis" (usually translated as "insight", but more accurately understood in the sense of "epistemology"). The relationship between objects and readability / comprehension is complex, but I wouldn't call them "in no way related" (OOP can break comprehension, but it was invented to improve it. Irony.)
Also, I would again like to second your words: identifier naming might be the most important aspect of readability and comprehensibility.