I can't tell if this is an honest call to keep things simple, or if it's meant to ridicule that idea. Because I strongly, deeply agree with some of these points, and am absolutely horrified by some of the others.
Yeah, I really can't tell if it's satire. Especially things like this:
> I’m not smart enough to figure out how to transform data between different layers of the system, so I just don’t. I use the same representation in the UI layer and the storage layer.
OK, well then you might need to gain a little more work experience and face the repercussions of decisions like this one.
Or maybe the OP is a contractor and never has to deal with the repercussions of their actions.
> contractor and never has to deal with the repercussions of their actions.
This doesn't match my experience of contracting.
Not caring about code quality and then leaving others holding the bag of sh*t is frequently what "ladder climbers" do, and those are, by definition, on payroll rather than contractors. For a contractor, there is no ladder to climb, but willingness to get one's hands dirty is definitely part of the job description.
Contractors are quite frequently the people who are brought in at extra expense to maintain mission-critical infrastructure when the employee who made it is no longer at the firm, or has moved on to different responsibilities. After having gone through several contracts, a typical contractor will often have a lot more experience with different coding practices and their respective outcomes and a deeper appreciation of good engineering.
The trick is to refuse to adopt the job-title "contractor" - simply refer to oneself as a "consultant" - it goes a long way.
Fun-fact: there's no rule against contractors/consultants earning equity in their clients' projects either: apparently I can artificially raise my hourly-rate by 20% during the post-interview negotiations, then sweet-talk the client into letting me have token equity in exchange for a 16% pay-cut. Simply do this for every small gig you get and eventually it works out as a pretty-nice personal-ladder to early-retirement.
...not that I want to retire, really - the idea terrifies me because I simply can't do anything except work :/
Yes, my experience being a contractor largely backs this up.
Also, remember another possible reason why a company might hire a contractor: to have someone to blame for a project they know is failing.
One of my favorite things about contract work is that it's time-limited. This means that I can be 100% honest about bad dev processes, bad code, bad management, etc., because I won't have to worry about angry managers making my life miserable for any longer than the remainder of my contract period.
I can't count the number of times I've spoken up about something terrible, only to have the permanent devs privately tell me later "I'm so happy you said that. I've been wishing I could bring that up for years"
That's a particularly weird one in light of starting out with "I only use statically typed languages", because it's basically trivial in a statically typed language. You have "type UIData", "type StorageData", and functions/methods to convert back and forth between them. Those functions/methods may need additional parameters, which will be documented by the mere act of calling for them in the function signature. You take the right type in the right place and the language does the rest, basically.

Statically-typed languages are so good at this that even refactoring it after the fact is trivial; potentially tedious and lengthy, but generally trivial. You just create the new type you need, put it in one place in the code where it belongs, then start compiling the code and fixing the errors it flags. Repeat until you've converted all uses of the old type to the new type or performed the correct conversion. It'll probably just work the next time you run it. You don't need anything even remotely as strong as Haskell to have an "It Just Works when I compile it" moment.
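For instance, a minimal Go sketch (Go is just a stand-in for any statically typed language here; the UIData/StorageData names come from the paragraph above, and the fields are invented):

```go
package demo

import "time"

// StorageData is the persisted shape; UIData is what the view layer needs.
type StorageData struct {
	ID        int64
	CreatedAt time.Time // stored as a timestamp
}

type UIData struct {
	ID        int64
	CreatedAt string // pre-formatted for display
}

// ToUI is the conversion. Its extra parameter (the display timezone)
// is documented by the mere act of appearing in the signature. Pass a
// StorageData where a UIData is expected and the build fails until
// you convert.
func ToUI(s StorageData, loc *time.Location) UIData {
	return UIData{
		ID:        s.ID,
		CreatedAt: s.CreatedAt.In(loc).Format("2006-01-02 15:04"),
	}
}
```

Change the display format and only ToUI changes; the stored shape is untouched.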
In dynamically-typed languages this is a nightmare, and this is one of the significant reasons I've stopped using them. Finding a bug six months to six years later from some attempt at this and finding a place that used the old type incorrectly when it was getting the new type, or because the programmer thought they were getting one but in fact got the other, or my favorite, the code is actually called with both types and nobody noticed until way too late... it's just something I've had enough of.
But, _what_ is the point of using a different layout/structure for storage vs the UI layer?
Remember I'm a 0.5x developer. If I have to write code to transform data for every kind of entity/object I need to store and have a UI to view and edit, that's way too much. I'm already very slow. No need to slow me down further by telling me I have to write so much extra code that does no useful work.
> the point of using a different layout/structure for storage vs the UI layer?
By way of example, I'll point to the classic example of a New User form UI: that UI will need 2 password inputs (one for the password, the other for the "confirm password") - but your User object/schema/DB-table won't have two separate string password fields - it'll have a single binary/byte[] salted-password-hash field, and you certainly must never show that in the UI.
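In code, that asymmetry might look something like this Go sketch (field names invented for illustration):

```go
package demo

// NewUserForm is the UI-layer shape: two plaintext password inputs.
type NewUserForm struct {
	Email           string
	Password        string
	ConfirmPassword string
}

// User is the storage-layer shape: no plaintext anywhere, only the
// salted hash, which in turn must never be sent to any UI.
type User struct {
	ID           int64
	Email        string
	PasswordHash []byte
}
```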
As for your (reasonable!) concerns about having to write out repetitive types/models that all share similar (though hardly ever identical) data/fields/shape: I agree it's tedious, but that's what we have scaffolding tools for (granted, I often end up having to write my own scaffolding and templating tools...), so for a simple CRUD application just design your database and have the scaffolding tools take care of the rest, including stubbing-out a functional UI. (This is why Ruby-on-Rails was huge when it first came out: it took away all of the drudgery in CRUD web-applications - but then all the other frameworks/platforms improved their scaffolding stories, and RoR is certainly less attractive in comparison now.)
That always pisses me off. I can copy/paste the password from the first field into the second. Hell, I use a password manager; I pasted into the first field, so I paste into the second.
It's just a check to make sure they match. But my passwords are complicated enough that I can't remember them long enough to type the characters in the first time; I literally have to copy/paste.
So this anti-pattern assumes that users are using some memorable password like their mother's maiden name, probably for all the services they use. Anyone using sane password practices is penalized with stupid friction.
[Edit] I have an even bigger gripe about asking me to enter my email address twice. If you really aren't sure I entered it correctly, send me an email asking me to confirm.
> So this anti-pattern assumes that users are using some memorable password like their mother's maiden name, probably for all the services they use. Anyone using sane password practices is penalized with stupid friction.
I think you're really, really underestimating how easy it is to make a typo when entering a password into a masked password box, hence the 2 fields.
I appreciate that you're just as annoyed with web-form tropes, clichés and irritations as I am - and when I'm building a system or UI for savvy people then sure, I'll do things like skipping the registration form entirely and just use OIDC federation or PassKeys or whatever the current-security-fad-of-the-month is - but my day-job requires me to write software for ...uh... "normal people", and part of that means having to weigh up the support-costs of users who want a predictable and easy-to-understand experience that isn't too different from what they already know - and those normal people are what pays my bills.
> Also, web-browsers don't let you copy-and-paste the password from one box into another
Point taken; perhaps simply having that widget that lets you see the password unmasked would be better than forcing you to enter it blind twice (and getting it wrong twice). Fact is I don't try to copy/paste one field into the other; I paste the same stuff into both.
FWIW I'm a normal person. I don't know what OIDC is, and I've never been asked to use Passkeys (I'm proud to know nothing about Apple devices, and I don't entrust my security to "the cloud" or third parties). My password manager is local, backed-up locally. I'm already annoyed once I'm presented with a registration form, and only complete one when I'm forced to. Every extra annoyance increases my rage.
Maybe, instead of having two password fields, they should have a single password field that you cannot type in, but can only paste in, forcing everybody to use a password vault.
But even in this case, I don't have two representations of the same object. Rather, I have two different objects:
Account (persisted)
SignupRequest (not persisted)
The SignupRequest is used to create the Account
The signup form on the UI is about editing the SignupRequest object. This object will be sent from the UI code to the backend as-is. The backend code will use it as-is (ignoring the JSON encoding/decoding).
There's a code path in the backend that takes SignupRequest and uses it to create a new Account.
In this case SignupRequest is your contract representation, Account is your storage representation, and the "backend code path" is the transformer/mapping layer.
> I don't have two representations of the same object. Rather, I have two different objects
Exactly! Your API contract and your storage are ALWAYS two different objects, because they serve two different concerns. Sometimes by coincidence they can share the same shape, but there's no reason that they need to be coupled together and impossible to change independently, other than the fear of inconvenient "boilerplate" mapping logic. By doing this up front, and not even letting it enter your data model, you create a formal abstraction boundary; it's reserving the right to change two pieces of data independently. Also, mapping/transformation logic can often just be simple, pure, total functions, which are trivial to test and maintain compared to anything that touches I/O.
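For example, a pure, total mapping under those assumptions might look like this (a Go sketch reusing the hypothetical SignupRequest/Account names from the comment above; the hash and the clock are passed in so the function itself stays free of I/O):

```go
package demo

import "time"

// Hypothetical shapes borrowed from the comment above.
type SignupRequest struct {
	Email    string
	Password string
}

type Account struct {
	Email        string
	PasswordHash []byte
	CreatedAt    time.Time
}

// NewAccount is pure and total: same inputs, same output, no I/O.
// The effectful parts (hashing, the clock) arrive as plain values,
// which is what makes the mapping trivially unit-testable.
func NewAccount(req SignupRequest, passwordHash []byte, now time.Time) Account {
	return Account{Email: req.Email, PasswordHash: passwordHash, CreatedAt: now}
}
```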
> Exactly! Your API contract and your storage are ALWAYS two different objects
Not really. I have request objects for everything. "Search" is a request object. List pagination is a request object.
Every function exposed through the RPC API takes a request object and returns a response object.
The response object is often just a collection of objects straight from "the database".
A response to a paginated list request will contain a list of objects straight from the database, in addition to some metadata about the pagination (namely: current page number, total page count).
The crucial part is there's no "transformation" of data as it goes out from the database into the UI. There's some aggregation and grouping (an outer object that contains multiple objects), but that's about it.
Again though there's a subtlety: some transformations do occur, but they don't occur on the path from the storage to the UI. Instead, every time I store a complex object, I also derive a "simple" version of the object and store it too.
When you request a list of objects, you get the "simple" version, and the UI displays them in summary format. When the user clicks one of the items on the list to see more details about it, the backend sends the "full" object.
Notice the underlying principle: the UI flow dictates how the storage layer stores objects.
This is the antithesis of the common wisdom, where the storage layer does not care about the UI, and it's the job of the intermediate layer to transform data for the needs of the UI.
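A sketch of that write-side derivation in Go (the types are invented, and `put` stands in for whatever write primitive the key-value store exposes):

```go
package demo

type Article struct {
	ID    int64
	Title string
	Body  string // large; list views never need it
}

type ArticleSummary struct {
	ID    int64
	Title string
}

// Save persists both shapes at write time, so the list endpoint can
// return summaries without ever loading full bodies.
func Save(a Article, put func(bucket string, id int64, v any) error) error {
	if err := put("articles", a.ID, a); err != nil {
		return err
	}
	return put("article_summaries", a.ID, ArticleSummary{ID: a.ID, Title: a.Title})
}
```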
Yes really. The fact that you can "often just return an object straight from the database" and have it fulfill the functional requirements of your client is just a coincidence, or more likely it's an invariant that you have decided to enforce. What people have discovered (usually very painfully) is that unavoidable breaking changes to either your client representation or storage representation are bound to happen, and when they do, if you haven't separated these concerns, this will have a ripple effect through the entire application. This may be fine. If your applications are tiny or downtime is okay, then you likely won't care about this. But to casually dismiss this advice as simply always being overcomplicated and overengineered is a grave oversimplification that you may regret someday. Many of the topics in this post fall into this category - people do it for a good reason, and you might not need it, but everything is a tradeoff and "I'm just going to do the stupid simple thing" is not the silver bullet.
> Also, mapping/transformation logic can often just be simple, pure, total functions
I find this is never the case - usually for the exact reason you inadvertently sold as some kind of "benefit": "it's reserving the right to change two pieces of data independently" - because when you need to map from, say, `SignupRequest` to a `SavedAccount` object, you'll encounter data-members required by `SavedAccount` which cannot be sourced from the `SignupRequest` object. For example, suppose the UX people come to us and say we need to split up the registration form into 2 pages, such that most fields are still on page 1, but the password boxes are on page 2. Now you need to deal with how to safely persist data from page 1 for use in page 2 (using hidden HTML inputs between pages won't work because that requires POST requests, but both pages should be able to be initially requested with a GET request).
You want computer storage to be flexible, non-redundant, and small. You have a machine capable of quickly and flawlessly integrating data from anywhere, correctness is your main concern.
You want your human representation to be specialized, highly redundant, and as large as needed so anything important is presented. Your users can't find data or keep it in memory, enabling them is your main concern.
Usually, those two sets of constraints lead to the same format only on behind-the-scenes admin panels, and nowhere else.
I think of it this way, actually: I have a different structure for every major use in my system. A fairly general variant is having a structure for input, a structure for my internal operations, and a structure for output, but I use that just as an example. Each of them can be used as I outlined above, with methods to transform between them as needed.
This is necessary because each of those things represent completely different needs, because they operate in different domains. In particular, the guarantees are different; the input must effectively be treated as having no guarantees, and you must check them all. Your internal operation may add additional guarantees it provides, things you don't need to check anymore because the mere act of being passed a value of a particular type means that the code can rely on this particular thing being true, thus saving me a ton of code everywhere. For the output, you don't care at all about any guarantees but you need to conform to what the external world needs.
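A common way to encode those differing guarantees in a statically typed language is to make the internal type obtainable only through a validator. A Go sketch, with a deliberately simplistic check and invented names:

```go
package demo

import (
	"fmt"
	"strings"
)

// Input layer: no guarantees, everything must be checked.
type SignupInput struct {
	Email string
}

// Internal layer: the type itself carries the guarantee. The
// unexported field means the only way to obtain a ValidEmail
// is to go through Validate.
type ValidEmail struct{ addr string }

// Validate is deliberately simplistic; real email validation is
// more involved.
func Validate(in SignupInput) (ValidEmail, error) {
	if !strings.Contains(in.Email, "@") {
		return ValidEmail{}, fmt.Errorf("invalid email: %q", in.Email)
	}
	return ValidEmail{addr: in.Email}, nil
}

// Anything that takes a ValidEmail no longer needs to re-check it.
func SendWelcome(to ValidEmail) {
	fmt.Println("welcome,", to.addr)
}
```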
In general, trying to cover all these bases with one structure is a bad idea which leads to pain. I've seen it many times, where developers try to overload one structure to do too many things.
In specific... it so happens that 95%+ of the time, covering all the needs with one structure works out fine. But I conceptualize this differently than you. I do not say to myself "It's OK, one structure is all I even need because anything else would be overcomplication." I say "It so happens here that the structures are so similar that I can conveniently elide them down to one without significant loss. I can do this because I have examined all the needs and guarantees and verified that they do not conflict."
But the difference is, as soon as they do conflict, as the codebase changes over time, I split them, leaning on the compiler to guide me through the process, because I know from experience it is not particularly hard. When you need to do this, it is easier to just do the split of types than to try to make one type straddle the gap. (Besides, once you have one type straddling one gap, the odds are by the time you're done it's going to be straddling more than one gap. This tends to happen precisely to those most central types.)
I bet you have at least one type somewhere that is suffering from trying to straddle too many use cases. But I would also bet you don't actually have many such types. It turns out that the majority of the time the elision is safe. But I think it is a useful perspective to still mentally model that as an elision and having separate types as the underlying model. I think the way you are advocating for thinking of it works fine the 95% of the time our models agree, but in that other 5%, someone following my system is going to be a lot happier than someone following yours.
I do a lot of relatively small programming, projects in the single person-year range. I do a lot of this sort of elision, because bringing the full power of generalized architecture to such projects can cost you a lot more than it gains. But I also do this sort of modelling in my head a lot, too, and when I notice a particular elision is starting to cost me something, I will on-demand unelide some particular architectural elaboration on the spot. In fact I am taking a break from this exact process to type this post, as I need to replace an increasingly complicated hard-coded structure of decorator-based plugins with a fully configuration-based model of arbitrary combinations. For any given such re-elaboration, it is perhaps more expensive to do than if I had started with that architectural feature in the first place, but across the full space of possible architectural features I win big versus starting with an architecture that is too heavyweight in many ways that I will ultimately never need in this particular project.
I agree wholeheartedly. I don't understand why everyone insists on repackaging the same data over and over.
Store it in one format in the database.
Read from the database and transform it.
Package that into a transfer object (TO) and send it over the wire.
Receive the TO object and repackage it into your local context
Take the local context and repackage a bunch of parts of it into each view model as necessary.
Everyone just wants to bundle up data and throw it over a wall instead of working together to engineer end to end.
Because you don't want to couple your data model to your public contract. Decoupling them allows you to change them independently.
When it comes time to change your data model, you can't without bubbling it all the way through to the public API, which may not be desirable.
Repackaging the data at every level means you only have to change the transformation at one of those levels.
For small projects this isn't a big deal.
But if you are working on a large project with multiple teams you have public contracts at multiple levels. You don't want to wade through 10 layers and 10 teams of changes because you change the way you store and compute some attribute.
If you are working on a large project with multiple teams and you change something like that you better version it or make sure it's backwards compatible anyways.
Otherwise you're going to break something downstream, and you're still going to wade through all of those layers, and now it's less obvious what broke because everyone is re-bundling your data into their own formats.
And I am not advocating for dumping your DB rows directly onto the wire for the frontend to fumble.
I am just saying it's silly to write a frontend and backend in a super modular, decoupled way when they are actually just a single service.
Not if the public view on the data doesn't change. Just the storage. That's sorta the point. To not have to version if you just change the underlying storage model.
For example let's say for whatever reason we were storing a duration as an int. But instead we decided to migrate it to start time and end time.
Do we need to force that change on everyone downstream and add a new major version API? Or can we just compute the old duration from the new attributes in the transformation?
Even in your example, do we really need to change the frontend in this case? For small projects the extra boilerplate probably doesn't outweigh the benefits, but for large projects it absolutely does.
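Concretely, the duration example above might look like this as a Go sketch (all names invented; the mapping function is the only code that knows the storage shape changed):

```go
package demo

import "time"

// New storage shape: the duration column was replaced by two timestamps.
type TaskRow struct {
	Start time.Time
	End   time.Time
}

// Old public contract: clients still receive a duration, unchanged.
type TaskResponse struct {
	DurationSeconds int64 `json:"duration_seconds"`
}

// ToResponse absorbs the storage change; no new API version needed.
func ToResponse(r TaskRow) TaskResponse {
	return TaskResponse{DurationSeconds: int64(r.End.Sub(r.Start).Seconds())}
}
```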
If the data structure needs to be changed, it needs to be changed at the beginning of the system, typically the front end. And downstream code needs to be fixed to accommodate that.
Otherwise you've created 20 different state machines, one for each part of the system, stacked on top of each other, each expecting a different data structure and returning a different data structure.
So changing anything once the system is sufficiently complex is an exercise in masochism, and development will slow to a crawl to avoid breaking one of the 20 downstream black boxes.
There should only be a single data structure contract that all teams follow.
There needs to be a SINGLE data structure passed through the system originating at the beginning of the system.
The data structure can be added to by code along the pipeline but never changed.
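As a sketch, such an append-only record might look like this in Go (all names invented); nil pointer fields stand for stages that haven't run yet, which is also exactly the ambiguity the replies below push back on:

```go
package demo

type Order struct {
	SKU string
	Qty int
}

type Quote struct{ Cents int64 }

// One shared structure, appended to along the pipeline, never rewritten.
// Nil pointer fields mark stages that have not run yet.
type PipelineRecord struct {
	RawInput  string // set at the very start of the system
	Parsed    *Order // set by the parsing stage
	Validated *bool  // set by the validation stage
	Priced    *Quote // set by the pricing stage
}
```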
Am I understanding that you think that there should be just one data structure which is shared between all teams which is a superset of all fields that it could have and people just add data to those fields? So at any given point you don't even know what data is present or not present depending on which point in the pipeline it has passed or not?
At this point you may as well just use an object or dictionary. The type doesn't give you any idea about the actual shape of the data.
> So at any given point you don't even know what data is present or not present depending on which point in the pipeline it has passed or not?
At least with a non-mutating data structure that matches the public contract, you can tell whether it's passed through a part of the pipeline, because a required field is blank or null.
With a mutating data structure that gets changed, say, 20 different times throughout the system, you have no idea whether it's passed through some part of the pipeline or not.
Even better if somehow everything can interact via a centralized database where all state is stored, even intermediate state. It's not just storage; it's also state management.
And they'll also run into major liability issues when they leak customer credit card information, SSN, or something similar because they have a single class that represents the table in the database and they use it in both the frontend and backend.
Because of things like mass assignment or IDOR, or injection attacks, presumably.
Handing data from the user (untrusted input) directly to the backend unchanged is going to in 99.9999999% of cases also mean it's unchecked.
"I'm not smart enough to bother sanitizing my input, or to learn about stuff that's someone else's job like security" would fit right into this "manifesto".
Well, first off, that's not correct anyway; type conformance is a very valid and important part of data sanitization.
E.g. does the SSN consist of 3 valid integers split on dashes? If not, it ain't a proper SSN. Catching that type error is much safer than trying to roll regexes or character allow/blocklists.
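A sketch of that parse-first approach in Go, with a deliberately simplified notion of what makes an SSN valid:

```go
package demo

import (
	"fmt"
	"strconv"
	"strings"
)

// SSN exists only if parsing succeeded; holding one is the guarantee.
type SSN struct{ area, group, serial int }

// ParseSSN checks shape by parsing, not by regexes or character
// blocklists. (Real SSN rules are stricter than this sketch.)
func ParseSSN(s string) (SSN, error) {
	parts := strings.Split(s, "-")
	if len(parts) != 3 {
		return SSN{}, fmt.Errorf("expected three dash-separated groups")
	}
	nums := make([]int, 3)
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil || n < 0 {
			return SSN{}, fmt.Errorf("group %d is not a valid number: %q", i+1, p)
		}
		nums[i] = n
	}
	return SSN{area: nums[0], group: nums[1], serial: nums[2]}, nil
}
```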
But also, the manifesto never mentions data structures.
It says
> I’m not smart enough to figure out how to transform data between different layers of the system, so I just don’t. I use the same representation in the UI layer and the storage layer.
Transforming data means actually changing the data, not the data structure that houses it. The author is explicitly talking about making changes to the data itself.
You absolutely may need and want to transform data.
You don't want to store code unmodified in a database; that's how you end up with SSTI or stored XSS. You encode special characters in a simple, reversible way (as one example).
Similarly, you don't store passwords untransformed in a db, you hash them. That's a transform.
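Both kinds of transform in one short Go sketch (html.EscapeString for the reversible encoding; golang.org/x/crypto/bcrypt is one common hashing choice, not the only one):

```go
package main

import (
	"fmt"
	"html"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	// Reversible transform: encode special characters before storage
	// so user-supplied markup can never execute when rendered.
	safe := html.EscapeString(`<script>alert("hi")</script>`)
	fmt.Println(safe) // &lt;script&gt;alert(&#34;hi&#34;)&lt;/script&gt;

	// One-way transform: passwords are hashed, never stored as-is.
	hash, err := bcrypt.GenerateFromPassword([]byte("hunter2"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}
	fmt.Printf("stored a %d-byte hash, not the password\n", len(hash))
}
```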
> I’m not smart enough to figure out how to transform data between different layers of the system, so I just don’t. I use the same representation in the UI layer and the storage layer.
That has nothing to do with the data structure; it's about how the data is being represented in the backend vs the frontend. It's explicitly about changing the data itself.
Stuff like url-encoding and escaping characters are examples of transforming data to be represented differently in the backend (where e.g. it's encoded or escaped) from the frontend (where it's displayed in "proper" formatting).
So you think that if e.g. Stack exchange is storing code examples in their database, they shouldn't transform the special characters into e.g. url-encoding or some other escaped format?
How are you going to validate it doesn't do something harmful?
And also, you keep mentioning sanitization as though it's not inherently a transform. Stripping out whitespace? That's a transform. Just because it's a one-directional transform doesn't make it not one.
>Sanitizing your inputs has been known about for literally almost half a century that should just be default for developers at this point.
Except if you're a "stupid programmer", in which case such defaults are irrelevant to you. In such cases, one can only hope they're relying on tooling that sanitizes as much as possible for them.
Data protection also happens at a much lower level. If you're running your binary blob on a server with no firewall you will be hammered with constant root hacks. https and REST encryption will be irrelevant.
Even worse, you may be running a mail server on the same machine.
That happens to every server exposed to the internet.
Unless there is a targeted DDoS event, it's usually on the level of a query or two per second at most.
It's something most people don't worry about because there's no way around it.
If you need DDoS protection you put the server behind Cloudflare or something like it.
It's totally fine to raw-dog the internet without a firewall, in my opinion, if the server only runs a single web server that you keep updated and you sanitize your inputs.
Running other services on the same server without a firewall becomes horrible though, you're right. lol
I think he's just rebelling a bit against the culture of caring about scale and performance. A culture that has been pushed out by FAANG that also has some academic undertones. The subtext he's giving is: it doesn't really matter.
That's what I read into it, because that's what I identify with. I've met so many fellow CS students (back in the day) who cared about performance optimization. I never cared. I just wanted it to be simple, working and done. I cared about clean code because I had a hard time understanding anything else. I felt allergic to over-engineering because it reflects a whole host of thoughts/emotions that hurt my brain when I empathize with how it would feel to have them. So I kept things simple too.
Usually the repercussions aren't really there. When they are there, then they need to be there and it's usually a sign that your product and the place in the market you're at is evolving. Code is a living thing. Sometimes a big rewrite is necessary, but if that simple unoptimized code held up for 10 years (which is what I'm experiencing), then I'd argue that's okay.
A lot of these things aren't about scale and performance, but maintainability. Once you get over the initial learning curve, things like static typing, Docker, cloud services managed with Terraform, etc. don't require that much up-front effort and save you a lot of time in the long run.
> I can't tell if this is an honest call to keep things simple, or if it's meant to ridicule that idea.
Me neither.
All of these things involve tradeoffs. Multiple repos vs monorepo highly depends on your organization. Microservices vs monolith has been discussed to death. A simple, locally-driven deploy process is great for solopreneurs but doesn't scale to full companies. Load balancers might be required depending on your availability requirements. Storing all your data on disk is easier than maintaining a database, but aside from performance bottlenecks, it's harder to define a recovery process (RTO/RPO) in the case of a hardware failure. And as much as I dislike them, there are ease-of-use benefits in using dynamically-typed languages.
The biggest technical challenge in most software engineering roles today is evaluating the pros and cons of these types of choices and choosing the right one for your situation.
I was working on a feature for a webapp that I thought should be doable in a day, but I spent roughly a week on it. When I was done with it, I looked back and thought: wait, why did this take me the whole week? I couldn't come up with a satisfying answer.
So I just decided to accept that I'm not that productive. Maybe there are things I can do to improve my speed of implementing features, but for the time being, this is my speed.
I was going to make a writeup about coming to terms with that.
But somehow as I was writing (originally on Twitter) I linked it in my head to the way I hate modern dev culture: the docker and the webpack, etc. Programming is already hard, why make it 10x harder with all the tools that require complex configurations, etc?
I remember raising this point on HN and other places with other developers, and that there was always push back from people who swear by these tools.
So it clicked in my head: I hate these tools because I'm already slow as it is and I can't take it when these complexities slow me down even further.
Then as I spent more time thinking about the content, I thought this was manifesto-worthy content.
But in terms of the points mentioned: I'm dead serious about all of them, although what I actually mean might not be obvious at first glance.
When I said I write objects as-is, it appears that most people in this thread thought I was writing individual files. This is not what I'm doing. I'm using a B-Tree backed key-value store and doing binary serialization. I also have a scheme where I make use of the properties of B-Trees to make indexing/querying possible. I have a whole write-up about this topic: https://hasen.substack.com/p/indexing-querying-boltdb
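For readers who just want the flavor: the standard bbolt pattern for this is composite keys plus a cursor prefix scan, since B-tree keys are stored sorted. A minimal sketch (not the exact scheme from the write-up linked above, just the general idea):

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("app.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Index entries are keys shaped "<city>/<userID>"; because B-tree
	// keys are sorted, all users in one city are adjacent on disk.
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("users_by_city"))
		if err != nil {
			return err
		}
		return b.Put([]byte("oslo/42"), []byte("42"))
	})
	if err != nil {
		log.Fatal(err)
	}

	// "Query" = seek to the prefix, scan until it no longer matches.
	err = db.View(func(tx *bolt.Tx) error {
		c := tx.Bucket([]byte("users_by_city")).Cursor()
		prefix := []byte("oslo/")
		for k, v := c.Seek(prefix); k != nil && bytes.HasPrefix(k, prefix); k, v = c.Next() {
			fmt.Printf("%s -> %s\n", k, v)
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```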
These alternative techniques took time for me to develop, so you could say that it would have been faster for me to just use postgres/docker/aws etc (the standard stack). But this is my point: I really am too stupid to learn them.
There are plenty of reasons why something you think should take a day might take a week. We often (fallaciously) assess a problem as "easy" without considering the context in which it has to be incorporated, i.e. the codebase. The duration is more often a function of the complexity of the codebase than of the complexity of the isolated problem.
We've all been there. Things take much longer than expected. This is, as you've already stated, something we come to realize in our careers.
However, I would like to share a different angle, if I may. I don't think your speed is necessarily the problem. I think your time estimation skills might need some work.
It's easy for us to think things "should be simple". Unfortunately, the modern world of software is becoming more and more diverse and complex (a symptom of software "eating the world"). It's only natural that a world which has evolved from 8086s and terminals to interconnected-everything, multi-OS, multi-platform, multi-device magic is going to be orders of magnitude more difficult to build in at any reasonable scale of business. There are just so many small things that can go wrong!
I would gently suggest you stop being so hard on yourself and instead of beating yourself up for "being slow" just take each technology one at a time. Ultimately, the way to survive in this career is embracing change and staying sane while doing it. It's not easy, but it's the only way.
As for you being dumb, I would also like to say that it is clear you choose to deeply understand the topics you put in your head. Many folks "learn enough to get the job done" and very little more. It seems you have an appreciation for deep learning. This is not a bad thing, it is usually this trait over a long time that differentiates a truly senior engineer from an intermediate one. Your path may be slower, but it is probably more thorough and more informative in the long term. Keep in mind though, this breeds imposter syndrome in your mind. When you accept you still have lots to learn, it is easy to feel like you know nothing at all. I would suggest not letting this get to you. We all feel it, it is real, we're all just trying to do our best day-to-day.
Oh, and always pad all your estimates by at least 2x, because if we know anything about computing, it's that it always has hiccups.
It's completely fine to accept your situation without giving up on improving it.
The problem is when people refuse to acknowledge reality because they really hate certain labels and are too invested in not having that label apply to them.
I have met many, many brilliant engineers. They fly real fast and really far. The funny thing about it is that they tend to crash and burn sometimes too. Sometimes they even break orbit, but even then, someone has to stabilize that orbit from time to time.
Life takes all kinds of people. Folks who are willing to simplify things while ignoring fads are just as useful as those who jump into the unknown of new technologies.
What you think of as a weakness you need to compensate for so you can get work done is what other employers will consider an asset because you refuse to add complexity where it is unwarranted. What you call "slow" others might call wisdom.
I'm really not trying to cheer you up. I'm sharing facts from over a decade writing software being the "slow one".
Now, to address your point more directly: I'm too stupid to figure out configuration, but not too stupid to figure out code. Code gets compiled and type checked. You can have tests, etc. Tractability for code is much higher than configuration.
With configuration, you have to be really smart and keep many moving parts in your head.
With code, you can be a bit dumb and lean heavily on the tooling.
Skeptical of the person who is "not smart enough to figure out docker", yet is smart enough to know about building a "web application into a self contained statically linked binary executable file".
I think they are smart enough, they just don't want to, or don't see the value in it (even if they've made an uninformed decision)
If you take 'figuring out Docker' as an open-ended problem then it makes sense. Getting an app running in a container is relatively easy, but that isn't really 'figuring out Docker'. That's just scratching the surface. Learning how to optimize that container, make it secure, put it in a registry so you can reuse it, etc means there's a lot to get through before you can say "I've got Docker figured out." There's no 'Done' when it comes to learning Docker. That makes it a harder problem in many people's opinions.
Idk this person claims they “can’t” figure out HTTP verbs and REST. I’d say there’s a very good chance “figure out docker” means “run a program in a container”
Many doctors watch videos at 1.5x speed or they cannot take in the information. If it's too slow, their brain switches off. I find I have this problem too, and I haven't figured out Docker either, even though I'm a ninja at programming.
This appears to be someone railing against these practices, but framed as "defending" them.
Some are universal, but some only apply to certain domains.
I'm not really a fan of tearing down others, but I do feel the industry has a lot of room for improvement. Not sure if these types of screeds will actually make things better, though.