My guiding principles after 20 years of programming (2020) (alexewerlof.medium.com)
505 points by firstSpeaker on March 22, 2022 | 189 comments


> Never start coding (making a solution) unless you fully understand the problem.

While I agree with this in general, I find that to really, fully understand a problem, I need to attempt to code, or at least formulate, a solution to it.

a) because when I break down a problem into its code-able component parts, I learn a lot about it

b) because in the process of then actually implementing these parts I often discover edge cases or undefined cases (especially in naturally grown business-logic)

c) because what the problem actually IS is often not that clear at the outset. Yes, in an ideal world, changing requirements would wait until the next version; sadly, that's not what happens in the wild.


Yeah, it could just be me, but I prefer to make two false starts, toss them, and then get it right on the third attempt rather than attempting to whiteboard the problem for two weeks.

Not only is it more interesting to me to try three different ways to tackle a problem, but I have been burned when the two weeks of whiteboarding missed something and I'm back to having to iterate anyway.

To be sure, I do a little whiteboarding, but generally it might be about 2 hours or so of sketching out ideas, major structures, code flow ideas.

I generally was nodding along to most of the author's points though.

I have definitely grown to divorce myself from my ego, and I try not only to shine the spotlight on my younger coworkers (new engineers) but also to give them "ownership" of key pieces, to allow them not only some sense of autonomy/ownership but a sense of pride as well.

That does go slightly against some of the author's points about collaboratively working on a project. Engineers need a part of the code (say, an image cache manager, as an example) that they can "own" in order to grow. You don't want an engineer to always have "training wheels" on. (And frankly, I think this is one of the things I dislike about code reviews: I think they disincentivise autonomy.)

The team, let me be clear, is the most important part of any project, not the individual luminaries. But the team, to a degree, needs to have engineers who feel an ownership stake in pieces of the product.

(FWIW, I have probably 35 years of programming experience, ha ha.)


I forget where I read this, but something like:

first time to understand the problem

second time to understand the solution

third time to do it right


it reminds me of some adage for surgeons in training: see one, botch one, nail one.


I thought it was "See One, Do One, Train One"... which might actually be worse!


I take the original text as "don't just jump into coding right away" which I've seen so many people do. As you said, take a little bit to process and think through it, then start out your pseudocode, etc.


You shouldn't try to code a complete top-down solution without fully understanding the problem. (And how often do you fully understand the problem, or think that you do but don't?) But you can code up a bottom-up partial solution of the parts of the problem that you do fully understand, and in the process learn enough to make progress on the total solution, so long as the components you develop in the bottom-up process are composable and robust.
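To make that concrete with a throwaway TypeScript sketch (the problem and names are invented for illustration): even when the final report format is still unknown, the parsing and grouping pieces are fully understood, composable, and will survive into whatever the total solution turns out to be.

    // Hypothetical bottom-up pieces; the "report" they will one day feed
    // is still undecided, but these parts are well understood.
    function parseLine(line: string): { user: string; ms: number } {
      const [user, ms] = line.split(",");
      return { user, ms: Number(ms) };
    }

    function groupByUser(rows: { user: string; ms: number }[]): Map<string, number[]> {
      const groups = new Map<string, number[]>();
      for (const { user, ms } of rows) {
        groups.set(user, [...(groups.get(user) ?? []), ms]);
      }
      return groups;
    }

    // Composable: whatever the final solution is, it can build on these.
    const rows = ["alice,120", "bob,340", "alice,90"].map(parseLine);
    console.log(groupByUser(rows)); // Map { alice => [120, 90], bob => [340] }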


I recently worked on something where multiple similar things needed to be done. Rather than finishing the beginning of A, then the beginning of B, then C, I opted to finish ALL of A, then all of B, then all of C. This way, what I learned while working on all of A, I could use to make better decisions about the beginning of B. Then what I learned from doing all of B would help make the beginning of C even better.

If I did all the beginnings first, I'd then go back to doing the middle of A and realize I did the beginning wrong and have to go back to fix the beginnings of B and C.


Fred Brooks' other famous bit (besides the "mythical man month"): "You should plan to throw one away. You will anyway."


I agree. Coding a solution is my main approach to understanding a problem better.

While it is possible to completely understand the problem before writing a solution, it takes much less time to just build a prototype, analyze it to see what mistakes you made, rinse and repeat.

Note that the third principle of the "unix philosophy" is[0]:

    Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
[0] https://homepage.cs.uri.edu/~thenry/resources/unix_art/ch01s...


The key part here is that you shouldn't be afraid of throwing parts (or even throwing wholes) away. Many times we get too attached to solutions to the wrong problem because that's what we built.

So in that sense we could rephrase the idea as: do not commit to the code you write until you have a good understanding of the problem. Use it as a learning tool.


Yep. It’s fine to explore the problem space by coding. It’s not fine to think you already have the solution and end up coding yourself into a dead end. Ie. solving the wrong problem.


Yeah, I believe this was identified as the main thing leading to the log4j debacle; they intentionally kept the offending code in there for backwards compatibility and edge cases, when it really should have been thrown out a long time ago, forcing users to accommodate the update or remain knowingly compromised.


While what you say is true, the context is a little bit different. Java built its whole world on the premise that you write your code once and you run it everywhere [forever]. Every single breaking change loses you some customers - I suppose Java just cared more about keeping its customers than keeping its customers safe.


I call this Insight by Progress, i.e. the further you progress into a problem, the more insight you gain.

Personally, I have never 100% solved a problem before writing any code. For me it's really the other way around: by writing the code, I better understand the problem.

I guess what we're dealing with here is that different brains solve similar/same problems in different ways. I guess it's also the reason that advice such as "Never start coding ... unless ..." doesn't work for everyone.


> I guess what we're dealing with here is that different brains solve similar/same problems in different ways.

I think the main difference is between human brains and machines. Biological systems operate with a kind of "fuzzy logic" by default... glossing over edge cases, smoothing out discrepancies on the fly, filling in missing information with contextual knowledge or assumptions, etc. Being imprecise is not a problem; in fact, being able to handle the imprecise is what keeps living things going.

When we describe a problem to a human, like "take that crate and put it in the next room", we know that they will (hopefully) fill in the gaps, like not assuming that the cleaning closet is the "room" we meant, even though it's right next to the starting room and is technically a room.

Describing this to a computer is different. Edge cases need to be considered. What if there is no crate? What if there are many; which one does it need to get? What does "in the room" mean? Does the orientation of the crate matter? What do I do when the room is full? "Next" on which side of the corridor? And so on, and so on.

Only by having to describe a problem in an algorithmic way can we fully appreciate all of its parts, and the information flowing through the process.
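A toy TypeScript sketch (everything here is invented for illustration) shows how quickly those questions surface the moment the instruction has to become an algorithm:

    // Hypothetical types and rules, made up to illustrate the point.
    type Room = { id: string; isCloset: boolean; capacity: number; crates: string[] };

    function moveCrate(from: Room, to: Room, crateId?: string): string {
      // What if there is no crate?
      if (from.crates.length === 0) throw new Error("no crate to move");
      // Which crate? The human instruction never said.
      const crate = crateId ?? from.crates[0];
      if (!from.crates.includes(crate)) throw new Error("that crate is not here");
      // Does a closet count as a "room"? We have to decide explicitly.
      if (to.isCloset) throw new Error("closets don't count as rooms here");
      // What if the destination is full?
      if (to.crates.length >= to.capacity) throw new Error("destination room is full");
      from.crates = from.crates.filter((c) => c !== crate);
      to.crates.push(crate);
      return crate;
    }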


This idea is the source of Fred Brooks' observation: "Plan to throw one away. You will anyway."

In order to implement a good solution you need to understand the problem well. Often, the only way to reach that level of understanding is by trying and failing to implement it.

These days, with modern refactoring and incremental development, it's less throwing away a whole program and more gradually refactoring it until little of the first code remains but I think the observation still holds in many cases.


Completely agree! This is my number 1 point on my own "I've developed SW for decades, here's what I have learnt"-post:

1. Start small, then extend. Whether creating a new system, or adding a feature to an existing system, I always start by making a very simple version with almost none of the required functionality. Then I extend the solution step by step, until it does what it is supposed to. I have never been able to plan everything out in detail from the beginning. Instead, I learn as I go along, and this newly discovered information gets used in the solution.

I like this quote from John Gall: “A complex system that works is invariably found to have evolved from a simple system that worked.”

From: https://henrikwarne.com/2015/04/16/lessons-learned-in-softwa...


I would say that it is not so easy to make this from-small-to-big approach work. It requires planning ahead on another layer. When you develop something small that is supposed to grow later, you need to make sure not to bake in assumptions that do not hold for later versions. You need to keep the primitives flexible.

I see this as a basic skill that we should aim to master at some point, even if we may never truly master the art.

If we do not move carefully around assumptions, we will have to refactor things later, which means more work. Of course moving carefully can also consume time. At some point it may become a tradeoff.


Your principles are better than the OP’s.


Definitely agree. I think some people find it controversial these days with agile etc., but I think making prototypes is a severely underrated skill and practice. Not all problems are obviously tractable from the outset by pure analysis. Sometimes you need to build something out of cardboard and rubber bands (figuratively) to see if it will work. This was a topic I wrote an essay about some time back (https://www.machow.ski/posts/galls-law-and-prototype-driven-...).


The weird danger of making prototypes is managers who think having a prototype means you're almost done with the whole task.

This is especially bad when the prototype works, but doesn't work well. They think you just need to polish things up a bit when really, you need to take what you've learned and build the real deliverable.


Yes, this is very true. There's a certain amount of managing upward and expectation setting required. Unfortunately, a common feature of many companies these days is non-technical management managing technical staff, and in such cases it's indeed a dangerous proposition.

Another example perhaps as to how the software engineering field still has significant maturing to do vs. e.g. mechanical engineering.


This is why a good IDE is so important. Since I use JetBrains products (Rider, PhpStorm), I don't worry about refactoring anymore. This results in a very agile way of working.

I believe people underestimate the power of a good IDE.


> I believe people underestimate the power of a good IDE.

Definitely. And in some parts of the tech world, people actually deride the usage of an IDE! "If you're not using vim you're a newb," kinda deal.

Which, sure, vim is great! And you can get a lot of decent plugins that make that workable. But gosh, IDEs provide so much powerful functionality; why would you make your life harder on purpose by not using them?

That attitude does seem to be fading though, in my more recent experience.


Well, I am a vim user, and I deride no one for the tools they want to use. If someone wants to work in a super-modern IDE, great. If someone wants to work in Notepad++, great. If someone wants to work in ed and use a line printer, great. If someone wants to use EMACS, great. Programming is one of the professions where the craftsman gets to choose the tools, and for me, that is something to celebrate.

So with that being said, why do I use vim in a terminal emulator, instead of a modern IDE? Here are my most important reasons. Bear in mind, these are extremely dependent on my personal taste, modus operandi and thinking.

1. Simplicity helps me focus. vim hits a perfect mark between simplicity and feature-richness. Interfacing it with more complex systems such as LSPs, linters, etc. is almost trivially easy.

2. I like building my own tools and adapting existing ones as I see fit. vim is pretty much perfect in that regard, not just because of the power of vimscript, but because of how easily I can integrate tools I wrote myself into it.

3. It's absolutely trivial to set up: I copy my ~/.vim and that's it.

4. Once I figured out jumps, markpoints and linescripts, it allows me to do absolutely crazy things in codebase navigation

5. It works over ssh

6. It runs instantly and has a negligible performance impact

7. I can seamlessly integrate it into other CLI tools that require an editor, providing my default work environment in every situation.

8. Since version 8, I can actually use it as a terminal multiplexer, which is just crazy good for my workflow...I often have several split tabs to edit/navigate the code and a terminal tab controlling deployment and testing on the remote machines...all in a single terminal emulator window.

So yeah, why do I like vim? Because it's flexible, fast and extendable, works everywhere, and does exactly what I want.


> If someone wants to work in ed and use a lineprinter, great.

Sure, as long as it's not on my dime! :-)

vim et al are great and I know skilled users can get very productive in them, but I would not push it as a default environment for incoming developers. The amount of force-multiplying functionality in the stock install of IntelliJ or VSCode is very good.


> but I would not push it as a default environment for incoming developers.

I would not push any default environment on incoming developers unless there is a very good reason to do so (e.g. a custom graphical low-code tool that only works with a specialized editor). As I said, one of the beautiful things about programming is that the craftsman gets to choose the tools.


> And in some parts of the tech world, people actually deride the usage of an IDE! "If you're not using vim you're a newb," kinda deal.

I share the sentiment, but for completely other reasons.

The human mind is very limited - it can only hold so much complexity at once before it starts making mistakes and oversights. IDEs raise that bar of tolerable complexity, making it easier for people to build enterprise-level Rube Goldberg machines just to keep the software working.

Using Vim (or any other non-IDE editor) forces me to keep the software simple, because otherwise I can't understand it. And in my experience, keeping software simple (long term) is much more important than keeping software working (short term).

Of course, sometimes we have no choice but to meet the deadline. That's where all those bells and whistles really come in handy.


I've approached IDEs similarly. But in addition to this, a lot of IDEs lack real accessibility functionality. JetBrains has accessibility features [0], but it is clear no one at JetBrains actually uses them. This isn't unique to JetBrains; there are a lot of companies that implement what I call "fake accessibility". There's a section in their settings to enable "accessibility", but if you actually use it and depend upon it you're met with a garbled mess.

> For macOS, switch on the VoiceOver and install and set up IntelliJ IDEA. However, for a full screen readers' support, we recommend Windows.

I've never tried the Windows version, but the VoiceOver version is unusable.

[0]: https://www.jetbrains.com/help/idea/accessibility.html#scree...


> Of course, sometimes we have no choice but to meet the deadline

What do you mean, "sometimes"?


There’s a lot of different kinds of deadlines. Some can be extended. Some can not.

“I love deadlines. I love the whooshing noise they make as they go by.” - Douglas Adams, a man who was very familiar with the many varieties of deadlines.


I think that has basis too though. IDEs are huge and hugely complicated. Until fairly recently, they were quite often nearly unusable. Want to edit a text file? You'd first have to start your IDE loading and go to lunch, then maybe if you were lucky by the time you got back it'd be done loading and indexing and you could actually edit your file.

Well not so fast, first you'd have to answer 100 questions to configure the IDE for a 'project'. Only then could you edit your file. Slowly. One. Single. Character. At. A. Time. Waiting for the IDE to become responsive again after each character.

Nowadays, however, the hardware has caught up and some IDEs are actually usable most of the time. And they're indispensable on the big convoluted projects we now deal with, that have tons of dependencies and convoluted ravioli code from OOP hierarchies.

I do kinda miss working on projects that were simple and clean enough to not need an IDE. But it would be terrible to go back to not using one with the code we wrangle these days.


Maybe you have to deal with OOP ravioli because the authors were using IDEs.


If you read ahead two sentences:

> You need to progressively go through the code-test-improve cycle and explore the problem space till you reach the end.

So the author seems to agree, somewhat contradicting his own point.


I think this is common for many programmers. Programming isn't just a way to implement the solution to a problem, it is also a way to find solutions to problems (by working them out in code). Often times, programming can also help you figure out what the problem is. Programming isn't just about implementing things, it can also help with thinking.


I think this depends on the scale of the problem. For small, algorithmic problems, the kind you would see in a coding interview, I find it far more efficient to solve on paper, or in your head, before getting bogged down in code. The difficulty in these types of problems is not understanding the issue but coming up with a solution. For much larger problems, just getting your head around the problem is very challenging, and you need some scaffolding, in the form of half-baked but still precisely described solutions, simply to provide ballast to your thought process.


If, like most of us, you are working on legacy code (as opposed to greenfield), then the existing code is part of the problem space. Coding (and debugging) is a good way to understand that problem!


> While I agree with this in general, I find that to really, fully understand a problem, I need to attempt to code, or at least formulate, a solution to it.

Me too. Sometimes when I don't feel like I really know how to solve a problem, I'll just write some really hacky code to try to get an answer. Just anything that moves the problem forward.

Then, once I understand better what I want to do, I either throw out that code and start clean, or I just refactor the heck out of it until it is in a good state.


I think that line may be more focused on eliminating any known ambiguities in the requirements rather than trying to determine any unknown ambiguities or tackling edge cases and bugs.


I agree.


In a way, coding can be thought of as two distinct things.

One: coding is a way to play around with ideas, equivalent to a back-of-the-napkin calculation, a diagram in the sand, or a pencil sketch of a part. We use code in this scenario more as a way to write down our thoughts and process. But rather than writing in pure English, an essay or a set of todos, we write more detailed "specs". "Do this three times or until this value is zero" is also written in code and perfectly understandable in code too.

Two: code is run by a computer to produce an outcome. In this case, the back-of-the-napkin sketch IS the part, just more thought out.

Society is so used to there being two distinct material objects, the "plan" and the "final product", that we forget that code is different. With code, the plan and the final product are of the same form, of the same physical space.


The same distinction still holds in programming - it's just that the process of using the plan to generate the final product has been automated, by that handy class of tools we call "compilers".


>> Never start coding (making a solution) unless you fully understand the problem.

> While I agree with this in general, I find that to really, fully understand a problem, I need to attempt to code, or at least formulate, a solution to it.

I'd go as far as to say that programming is a good medium for expressing poorly understood and sloppily formulated ideas. Oh, wait, someone else already said that...


This is pretty similar to the "pantsers vs plotters" debate in creative writing. Do you need an outline before writing a novel? How much of one?


Both. You absolutely benefit from being free to improvise and follow the creative muse when pen is on paper.

You also need a well-thought-out outline for any larger work like a novel, or it will be a disaster, or at least nowhere near as good as it could have been.

Credentials: creative writing for decades, having experienced the tradeoffs of each firsthand. My verdict? Seek the Hegelian dialectic. ;-)


> you also need a well-thought out outline for any larger work like a novel

Hasn't been the case for me so far. Every story and essay I've written started with an idea. If there is an outline, it gets thrown away by page 2 and never returns. I'm on my second novel and this still hasn't changed.

My writing is actually better this way because I start with characters and plot emerges as they deal with the forces around them. I never know how they will react until they do.

I program the same way. Write a bit, revise, write a bit, revise. Drives some people nuts if they see the process, but the end result has been just as good or better than my coworkers throughout my career. (I'm a huge fan of tests. My revise step includes adding tests.)


If one ever does forgo an outline (I'm not sure if this is ever done), you'd probably have to revise the entire thing a lot, such that you're imposing an outline anyway.


Bingo, agreed. I've been bitten by that a lot, especially when younger, haha. My 1st book had no outline. My 2nd and 3rd books both have outlines, while still trying to keep the baby minus the bathwater. I like the results, and the total sunk LOE, much much better.

Related semi-tangent: with fiction I also adopted the rule of writing the Ending first because, for me at least, it then crystallizes what sorts of Middles and Beginnings might lead to it and make the most sense or have the biggest emotional impact on the reader. That then practically paints out the character journeys for you as the author. Very helpful tactic.


> If one ever does forgo an outline

It's more common than you think.


The "beautiful" thing about this is that it's then impossible to convince product people and "stakeholders" to authorize enough time and resources to improve the solution afterwards.

In my experience they'd reluctantly agree if and only if it's "do this or die". Which then happens under time pressure or stress factors of other kinds.

"if it already works, why fix it?" (because it's making developer's lives much more difficult than they could be? --"but how does this increase revenue?" --errhmm... because incresaingly difficult to maintain? but they don't care about this "that's your job")


> it's then impossible to convince product people and "stake holders" to authorize enough time and resources to improve the solution afterwards.

> "if it already works, why fix it?"

I suppose one attempt at a fix for this would be to make the user interface of your prototype the least prioritised part; maybe even go out of your way to make it ugly and clumsy, so the thing as a whole looks more like a prototype and less like a "solution".


That's not coding the solution though, that's exploring the problem.

Obviously it's helpful to formalize the problem! This is no different from manipulating mathematical equations on a piece of paper. =)


This one is weird but I agree with the gist of it. You shouldn't be coding until you understand the problem and have a high level design planned out in one form or another.


Me too...

I usually do a vertical proof of concept where I tackle each step I understand in a unit test. Of course it's not a real unit test; the test framework just serves as a quick entry point where I can try different approaches next to each other, or different versions/variants of each approach.

Like a REPL, but with the benefit of an easy view of my "history" and the ability to step back in time.


Analysis through synthesis


Here's one more from me: count your liabilities.

1. Code is a liability. No code? No bugs. The best commit is one that removes unnecessary code. This includes dependencies.

2. State is a liability. Multipliers for: hidden or non-obvious state, shared state, externally (by actors you don't control) accessible state, concurrently accessed/mutated state. Often the worst offenders are environment settings/configuration, such as Windows Registry, environment variables, installed dependencies, daemon services, etc, scoring full points.

3. All publicly observable behavior is a liability. Also known as Hyrum's Law: "all observable behaviors of your system will be depended on by somebody", or https://xkcd.com/1172/. Huge multiplier for systems with a long expected lifespan, and another huge one if you guarantee backwards compatibility to paying customers. Program under this assumption, and if you can hide or control your internal behavior, do it. E.g.: maybe don't output results reliant on a HashSet order - sort them.

None of these are hard 'rules'. Often the liability is necessary, or worth the saved effort or gained feature. But be aware of them, and minimize them where possible.
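To illustrate point 3 with a tiny TypeScript sketch (the code is invented; in languages like Java, HashSet iteration order is explicitly unspecified, which is where the example comes from, but the principle applies anywhere incidental order can leak):

    // Don't let an incidental internal ordering become a public contract.
    const seen = new Set<string>(); // iteration order is an implementation detail
    seen.add("carol");
    seen.add("alice");
    seen.add("bob");

    // Risky: clients will start depending on whatever order falls out.
    const leaky = [...seen];

    // Safer: make the order an explicit, documented part of the behavior.
    const sorted = [...seen].sort();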


I'd add developers as a liability, notably bus factor; if you have only one dev who knows tech X or product Y, you need to either get rid of tech X / product Y, or hire and train more people who know them.

Don't keep adding new technologies to your tech stack, it increases complexity and bus factor. I'm saying this as a warning as well because I've witnessed numerous projects where they switched to a new technology or tried to rebuild their tech stack, only to go back once the consultants and self-employed folk got bored. They hired external people to accelerate without having them focus on handover and building up their own IT department.


> E.g.: maybe don't output results reliant on a HashSet order - sort them.

I would much prefer shuffling them in that case. Less expensive and reduces the size of the public API. (Returning items in a specific order, whichever it is, still is a sort of promise.)


One gotcha is that some SQL databases happen to return sorted results when you do a GROUP BY.

Then a few releases later the database switches from a B-tree to a hashmap for the GROUP BY internals, and now your output is no longer sorted. If you rely on an order, say so with an explicit ORDER BY.


> "all observable behaviors of your system will be depended on by somebody"

Ugh, a former team I was on had that to the extreme. We were customizing open source software for internal use. A lot of operators had (over years) developed scripts that would just SSH in and use a CLI to interact with the software, using regex to parse output. We once thought we were safe adding a new line to the output rather than modifying an existing output line. Nope, someone somewhere used a multi-line regex.

Kicking everyone's automation out of the shells of those boxes was a multi-year project that is, as far as I know, still ongoing.


Listing Performance as the lowest value bugs me. I understand that most software isn't performance-critical, you can probably afford to use a garbage collected language for most use-cases. But putting performance last contradicts the high priority of Usability in the list, because Usability strongly depends on performance.

Your product might still work if it has a second of lag after every operation, but I won't want to use it. And if there aren't any decent alternatives, I am going to experience irritation and sadness on a continuous basis.

Jonathan Blow said[0] (5 minute interview) that software authors have an ethical responsibility towards their users. How many people use YouTube, or Windows? The time wasted might be small on an individual scale, say a minute a day, but multiply it by a billion people (e.g. YouTube or Windows) and that's roughly 16 million man-hours per day.

I'd argue the same responsibility exists just as strongly even if you have one user! Do you want to frustrate her, constantly waste her time? Of course not! She is not going to be very happy with you if you do that :)

[0] Jonathan Blow on Success and Ethics in Software Development https://youtube.com/watch?v=k8gIJOy0c2g


There are publicly accessible MIT lectures about A&D and computing performance. Part of the taught rationale is that performance behaves like a currency. Some thoughts about this:

The more you have of it, the more you can afford to do. This is interesting to think about. There might be stuff we don't do, that we rule out, because we don't even think we can afford to do it. A slightly milder effect here is that we tend to make things more structurally (architecturally) complex and expensive, just because we don't even think of making them fast in the first place. Both of these effects impose real-world limitations and costs.

Secondly, if you don't have enough of it, then you tend to be constantly distracted and limited. Anyone who went through financial hardship for a time knows about the mental, physical and social tax this imposes. I think it is useful to think of performance that way. Or to turn it around: if everything we do and compute was incredibly fast and reliable (also an aspect of performance) then how would that change our behavior, well-being, productivity?

Side note:

Garbage collection is often used as an example in the way I describe above. One can "afford" to use it, or not. But I think this touches on a rather special aspect of performance, which is a broad term to begin with. It affects the overall memory footprint and variance (GC pauses) but doesn't necessarily affect overall latency and throughput, which are more generally applicable aspects.


> But putting performance last contradicts the high priority of Usability in the list, because Usability strongly depends on performance.

Well, that's why usability is its own item. Whatever is needed for usability is ranked as important by the list. If your users truly need button clicks so fast that hand-rolled assembly is the only way to keep up, then writing that assembly is vitally important. Otherwise? Probably not so much. Performance for its own sake is last priority.

Now obviously there's a coherent argument that in general performance isn't considered enough when evaluating usability. That's totally fair, and probably true.


Valuing performance has indirect effects on other things: you're forced towards better models, better SQL queries and better validation/verification of data.


All those things also come from valuing things earlier in the list: Reliability, Usability, Maintainability, and Simplicity


My opinion is that too often we say performance is "good enough, let's optimize later if it degrades over time"... and there we are, sometimes creating time bombs of performance degradation just for the sake of delivering features, the faster the better. When you have to clean up and refactor other people's mess because it's a clusterf*ck of "just works", you begin to appreciate some basic rules and "don'ts" about doing things in software.


"Performance serves usability" is, to me, another very strong reason to do at least some prototyping very early.


I don't understand the connection, could you please elaborate?


Perhaps this is no longer as much of a problem as it used to be.

At one time, people would plan "interactive" applications where the response time turned out to be 10X or more what they had anticipated. As an example, imagine clicking on a dropdown and having to wait 20sec for choices to appear -- and then also finding scrolling down the list impossibly slow.

That would call for reconsidering the approach to presenting choices, in this case, and in general it could call for a significant redesign in advance.

EDIT: I suppose this could still be most obvious in games. Could you successfully plan a visual game design while being 10X miscalibrated in how fast an engine will support updates?


I don't know how I feel about the insistence of having pet projects and learning something new every day. It feels like a very work-centered approach to life.

Unless of course the pet projects happen at work during working hours, but finding a place that allows that is probably a whole different kind of beast.


I don't know about every single day, but I've worked professionally as a software developer since 1998; and I've been writing code for fun outside of work since I got my first computer around 1985.

Work is when someone tells me what to do or where/how to do it, which is why they usually have to pay me obscene amounts to make it happen.


This one's very personal, it's only a good thing to do if you genuinely really enjoy it.

I used to be a maths teacher and I love maths, and would do it for fun outside of my job. That in turn often helped with the job itself, and certainly helped to keep my interest in the subject alive even when 90% of the job was teaching fairly basic stuff that wasn't particularly exciting.

I'm now a software engineer - I'm much happier in my job, but I don't love writing code enough to ever want to be doing it outside the 40 hours a week I'm paid for. I'm slightly jealous of the people who do, because I know first-hand that it's a great way to do better at work and keep feeling fresh and interested in what you do, but forcing yourself to do it doesn't bring any of the same benefits.


Once you've settled into a programming job there is actually very little that is new and intellectually challenging. If you don't do anything else you'll become obsolete and atrophied.

To have pet projects is to break out of this daily slow death and to keep learning and being intellectually challenged while keeping up with the industry.


Gather a bunch of co-workers and lobby for 15 % time! You should absolutely have some time for pet projects at work.


I think it's amazing if one can claim with a straight face people should `absolutely` be paid for pet projects at work.


If we're arguing here that it's a great form of professional development, why would it be a joke to get better at your job, on your job?


Furthermore, in some cases I think it is a great idea.

I've seen too many "interesting" technologies being crammed into projects just because devs needed them on their CV.

If people can test out everything they want without having to claim that they need it on a project, it can save us a lot more than 15% on any project or product that lives for more than a year or two.

At work we are still sometimes suffering because of how cool Redux once was.


To be the devil's advocate, you wouldn't want to pay your plumber for time on his pet project while he's working on your bathroom.


No, but I might prefer a plumber from a company that pays their plumbers to train and work on the latest 'tech' in their down time. I might even be willing to pay a 15% premium, especially if I'm looking for a 'cutting edge' solution for my bathroom.


You really don’t want a "cutting edge" solution for your bathroom. You want a boring one that you will be able to maintain by yourself for years with standard tools from the store.

And I truly think it’s the same for your codebase.


Given the choice, I would absolutely go for the plumber that spends some of their time practising difficult plumbing jobs and doesn't just do routine cases. Even if they are 15 % more expensive. I honestly believe they'll be much better equipped to handle anomalies, should they occur.


Looking at it further, I would ignore a 15% difference if it came with any enticing rationale. Any small difference in quality is certain to return me much more than 15%.


this is such a poor analogy


I'll give you that; I haven't spent much time coming up with it.

I would like a competent plumber, and if he does some fancy work in his spare time and gets even better, great.

But I would raise an eyebrow if, besides my work, he also wanted to be paid for the 6 hours he spent over the weekend experimenting on his own bathroom...


I wouldn't expect to have professional development overtly included in a short term contract, either. I'd expect a much higher hourly rate to pay for it though, amongst other things.

I'd expect if the plumber worked for a plumbing company they'd probably invest some time and money into skills growth, and pay for it out of the money they charge you.


Because it makes sense. Learning should be part of most professions. People attend conferences and workshops. Companies pay for travel, accommodation, participation and other expenses. Giving your employees time to learn something new with a side project is incredibly cost-effective in comparison to external learning opportunities.


Besides the things others have said, some major innovations started as corporate-sponsored pet projects. Gmail is a commonly cited example. I believe many of 3M's successes (including Post-its) also belong to that category.


While I absolutely support the idea that employment should include professional development, let's be wary: first, of survivorship bias, and second, of perversion of incentives.

How many side projects, at Google, and then at other places, didn't turn into runaway successes?

More importantly, if you tell management "you should let us work on side projects because it could make the company millions of dollars", watch how fast "did you use your 15% time to invent anything that is marketable this week" becomes a part of your performance review, and your "side project for personal development" becomes "PMs breathing down your neck asking for progress and status".


Oh, no, I would be surprised if more than one in every 100 pet projects as much as break even.

But that's the point. Your run-of-the-mill incremental improvements with a fairly certain ROI of 0.5 % or 2 % are what you spend almost all week doing. Then you have a few hours of doing whatever you want. And a very small fraction of those things will have an ROI of thousands of percent. And you won't know when it happens until it's happened.


If you apply the other principles described in the blog post, you will probably save more than 15% of your time so if you spend 15% of your time learning everyone is still better off in the end.

If you do not offer your employees the possibility to learn, they will try to learn while working on a project, which is where problems come from: speculative programming or hype-driven development.


Speculative engineering often gets a bad rap, mostly because people mischaracterize it. For example: a feature is written up for a product that talks to third-party system X. The same organization has other teams using system X. Wouldn't it be wise to abstract that integration work out to allow other teams to use it instead of building their own, especially in the agile world where teams are highly autonomous? This is often characterized as speculative engineering, when in fact it's engineering for scale.


That raises the question: what is your boss more likely to say YES to, a 15% raise or 15% self-learning time? (For me it is NO to both, but the self-learning time is probably more realistic.)


pet projects can still be relevant to work.. maybe you've been wanting to make some internal tool, etc


When I introduce this, my criterion is that whatever pet projects people work on during work hours should make the lives of people at the company better, if successful.

Beyond that, I don't care if it's cancer research, self-driving bicycles, an email client someone else at the company uses, or a script to automate some commonly performed actual work task.


I still (after over 30 years as a pro) develop stuff for fun on the side. I don't see that as work (and these days I don't see my work as work either).


> Never start coding (making a solution) unless you fully understand the problem. It’s very normal to spend more time listening and reading than typing code. Understand the domain before starting to code. A problem is like a maze. You need to progressively go through the code-test-improve cycle and explore the problem space till you reach the end.

I agree with a lot of the points made by the author (2, 5, 7, and 8 really resonate with me too), but I think this one is my favourite. One strategy I've used which is successful - but not very popular - is "readme-based development". The concept is simple: if we're building something new, the first thing we do is summarise the project in the readme. If we spend time describing the problem and the scope of the project, finessing it down to a few sentences before we start coding, then we should have a better chance of staying on track and an easier time communicating with others. The bigger the project, the more useful this can be, but unfortunately the majority of folks I've worked with do not like writing.
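As a made-up illustration (the project and numbers are invented), the entire readme on day one might be no more than:

    # log-compactor (hypothetical project)

    Problem: our audit logs grow ~40 GB/week and queries over them time out.
    Scope: compact logs older than 30 days into daily summaries.
    Non-goals: real-time compaction, changing the retention policy or query API.

If we can't finesse it down to something that short, we probably don't understand the problem yet.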


This point makes literally no sense to me. He says don't start coding before you fully understand the problem, and then he says the way to explore the problem space is coding (the code-test-improve cycle).

By the way I agree with the last part. Sometimes the best way to understand the problem is writing a half-broken solution for it. You just have to be aware that what you're writing probably won't solve the problem, and make sure all the other stakeholders are aware too (that's usually the hardest part).


I've found this sentiment is common among people who dislike coding. They feel code is a burden and do whatever they can to minimize the amount of it that they do.

I've consistently outproduced peers by picking up the compiler early in the cycle and throwing away what I've done if it isn't working out. It requires a bit of self awareness, judgement and taste to know if something isn't going to work but I think the people who hate coding also can't stand throwing effort away like that.


I think the author makes a distinction in the very point you critique between the problem and the solution. If you don't understand the problem, it doesn't matter how dirty your hands get, you'll have trouble solving it. Once you understand roughly what you're trying to solve, you should figure out how to solve it by messing around and seeing what works (as is stated in the article).


I usually go about it differently than holding off coding. I do try to understand the problem fully before I implement anything: I try to figure out what abstractions make the most sense and what to optimize for. Then I write a preliminary implementation, and almost without fail I learn something new that changes the parameters of the "problem". Scrap it and do it again, just with more knowledge. No amount of analysing and understanding up front has produced a better or faster result for me.


I suppose we can only accurately define a problem when there are no big unknowns, so I wouldn't prescribe readme based development in every scenario - a quick and dirty exploration of the space which can then be scrapped is more important sometimes.


I agree. There is exploratory coding. It’s a bit like talking to yourself (or your rubber duck) or writing things down, or drawing. It’s the kind of thing you do to load up your head with inputs to just play and it’s incredibly powerful.


Internally where I work we use RFCs if we want to propose a new feature/addition, it helps bring others in to skim the doc to find out if there is anything they need to add. It's not too bad, I quite like this


It should be easy to talk a pointy-haired boss (PHB, see Dilbert) into "readme-based development" (RBD). In fact, you can probably get funding for training, aka pizzas.

That's because RBD is basically Amazon's "write the press-release first" methodology and no PHB will say "we won't use something that Amazon says makes them successful."

The pizzas come from another Amazon saying.


I think it's key to think about the wording here

> understand the problem

> code-test-improve cycle

I think you're first trying to understand a non-programmatic problem, a business problem, a user problem. Then you start to explore the solution to that problem. Yes, you'll learn more about the "problem" as you solve it, but the cycle explores how you're solving the problem.


With big projects spanning across years, the "problem" is often a moving target. Adding new features alters the problem space dynamically. The best you can do as a maintainer is to be mindful of the original problem that the solution was architected for, and extend the thing without crossing certain boundaries and assumptions.


> The best you can do as a maintainer is to be mindful of the original problem that the solution was architected for, and extend the thing without crossing certain boundaries and assumptions.

Speaking of boundaries and assumptions, one of the most important realizations I have had about software development is the true meaning of modularity:

Software modules are the things whose boundaries limit the spread of assumptions.
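A minimal TypeScript sketch of what I mean (names invented): the assumption about where users actually live is trapped behind the module's boundary, so it cannot spread to callers.

    // Callers see only this interface; the storage assumption stays inside.
    export interface UserStore {
      getUser(id: string): Promise<{ id: string; name: string } | undefined>;
    }

    // Swap this for a Postgres- or Redis-backed version later; no caller
    // has to change, because the assumption never crossed the boundary.
    export function inMemoryUserStore(): UserStore {
      const users = new Map([["42", { id: "42", name: "Ada" }]]);
      return {
        async getUser(id) {
          return users.get(id);
        },
      };
    }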


Absolutely. We can always try to assume that our predecessors or past selves had the best of intentions, but nothing beats writing down the context and problems at that moment in time for future reference.


Readme based development sounds like a great idea!! Thanks.


"Doc > Test > Code" is something I can get behind :)


I like the tech debt quote:

>Tech debt is like fast food. Occasionally it’s acceptable but if you get used to it, it’ll kill the product faster than you think (and in a painful way).

However, I would have worded it more strongly:

>Tech debt is like cocaine. It will make you unnaturally productive, until one day it doesn't.

My point here is twofold:

1. The speed with which the worm turns is underestimated by Silicon Valley "move fast" culture.

2. The obsession with go-to-market speed in SV and SV-adjacent companies is extreme. In some sense, it's as extreme as the lets-take-blow-and-get-rich finance circles.

There is certainly something to be said for iterating quickly, and it is likewise true that engineers have perfectionist tendencies, but I think our industry has taken a good thing much too far. Going to market has more to do with being focused than with doing sloppy work. Slow is smooth, and smooth is fast.


I think you both oversell it. Debt, as a metaphor, implies borrowing something. You could be borrowing effort from pulled-in libraries. You could be borrowing future maintenance time. Both can be valid choices. Both can succeed or fail.

So, if you have the time now to not borrow, don't. But if you can leverage growth to have more future resources to pay back on debt, you should probably do so.


That's fair. My post is mostly a reaction to the fact that tech debt is rarely (read: never) handled correctly, i.e. as debt to be paid back with interest. It's instead an excuse to burn the candle at both ends.


I agree with that. I mainly think it is like most metaphors, largely used to argue whatever position the debater has. Whatever use it may have is indeed evaporated as both sides of the debate burn away at it.


How much cocaine is reasonable to do then? And how frequently?


That's exactly my point. If you find yourself in a situation where cocaine is necessary to achieve success, you're not in a healthy situation.

From there, I have no good answer. Some people will take it once or twice, succeed, and be fine. Others won't.


Amendment to #1: if the right tool for the job doesn't exist, find a cheap way to make the tool. Toolmaking is highly leveraged work.

I like the list! There are some things I would replace with other things (overall, I'd emphasise general business and product development aspects more), but of course, everyone has their own list!


> great code is well documented so that anyone who hasn’t been part of the evolution, trial & error process and requirements that led to the current status can be productive with it.

Just the other day I was doing a code review where the code itself didn't have a single comment in it, yet the PR had several notes for the reviewer around why something was done a certain way, the failed approaches, unexpected behavior they had to workaround, etc.

My response, "Please put all this information into comments in the code so the next person to work on it doesn't have to figure all this out again".

If code was tricky enough that it took you down several unexpected paths to get a few lines right, take the time to note that in the code for the next poor soul to work on it. (Which may well be yourself in a few months when all those lessons are forgotten).
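A hypothetical before/after of what I asked for (the scenario and the Puppeteer-ish API are invented for illustration):

    const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

    async function submitForm(page: { click(sel: string): Promise<void> }) {
      await page.click("#submit");
      // Workaround: the third-party widget re-renders ~300ms after click and
      // swallows the first submit. Without this delay the flow failed in CI
      // roughly 1 run in 5 (see the PR discussion). Remove once the widget
      // is upgraded past v2.x.
      await sleep(500);
    }

Without that comment, the next person will "simplify" the delay away and reintroduce the bug.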


In general, I'm pretty skeptical of imperative guides like this. Such statements only focus on one side of the coin. So while it may have an element of truth as long as you are absorbed only on that one side, you risk completely missing the other side.

For example: > Deprecate yourself. ... Don't own the code.

On one hand, this is true: ultimately, you want the code to be independent of you. But on the other hand, I've witnessed bad code quality proliferate in a repo many times because no one takes ownership of the code they write: they just write enough to finish the task and move on. And it rarely gets caught in code review, because all the other members are operating in the same mindset. His principle misses this side of the coin.

Over the years, I came to realize that, at least for me, there is one root force behind almost everything, including how I code, how I interact with members, etc. It is: responsibility. I am simply trying my best to write responsible code, create a responsible product, be responsible to my team members and colleagues, etc. The rest of the stuff (like what you might call "principles") are just specific manifestations of this feeling. E.g. code ownership: I need to own my code to the extent that I am responsible for my work, but on the other hand, it is also my responsibility to ensure the code does not depend on me forever. Though this is probably too abstract to call a principle; in that case, I'd rather not have any.


> 10. When making decisions about the solution all things equal, go for this priority: Security > Reliability > Usability (Accessibility & UX) > Maintainability > Simplicity (Developer experience/DX) > Brevity (code length) > Finance > Performance

In my experience simplicity is the cornerstone priority that nearly all other attributes stem from. Simpler solutions tend to be easier to rationalize/debug, leading to systems that are more maintainable, more reliable, and generally more secure (see OpenBSD). This is of course not an absolute rule but I've found success by always being cautious when introducing complexity and investing extra time to explore/discover simpler approaches.

The author shows they have this insight in points 2, 11, 13, 17, and especially 19, but didn't really represent this in their list of priorities. Simplicity is more than DX.


That ordering just states that you should not sacrifice security for simplicity. As long as two solutions are equally secure, the simpler one is better, but not when the simpler one is less secure.

> Simpler solutions tend to be easier to rationalize/debug, leading to systems that are more maintainable, more reliable, and generally more secure (see OpenBSD).

Yeah, IF they are more secure, THEN use simpler solutions.


I guess it depends on what "decisions" we are talking about. I took it to mean design decisions, in which case Security and Simplicity are generally orthogonal.

Plain text passwords are a lot simpler and less secure than salted, hashed passwords. Using bcrypt to hash your passwords is a lot simpler and more secure than some home-grown, self-implemented hashing algorithm.

Also as it says he is talking about priorities, if something is simpler and less secure then choose security. If something is simpler and more secure then hey double win.
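In code, the double win might look like this; a sketch using the npm bcrypt package, where the helper names and work factor are mine:

    import bcrypt from "bcrypt";

    const ROUNDS = 12; // work factor: tune until hashing takes ~100ms on your hardware

    export async function hashPassword(plain: string): Promise<string> {
      // Salting and hashing are handled for us: simpler AND more secure
      // than anything home-grown.
      return bcrypt.hash(plain, ROUNDS);
    }

    export async function checkPassword(plain: string, stored: string): Promise<boolean> {
      return bcrypt.compare(plain, stored);
    }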


Totally! Simplicity is core to the rest of the others in the list.

The other weird one is finance near the bottom. Typical engineering-bubble thinking... make sure your solution actually provides value and that people will pay for it! Otherwise you're wasting your time solving the wrong problem! The importance of that can't be overstated! (Unless your goal isn't to provide value, of course.)


Show me your flowcharts [code], and conceal your tables [schema], and I shall continue to be mystified; show me your tables [schema] and I won't usually need your flowcharts [code]: they'll be obvious. -- Fred Brooks, "The Mythical Man Month"


I keep telling people this: show me the data structures and I can make a pretty good guess about the algorithms that are used. Show me the code, and I still won't have much of an idea exactly what the data should look like.

In modern OOP, with code and data all mixed up and algorithms being injected at runtime with vastly different effects, things can get unreadable very quickly.


What does not separating concerns of data structures have to do with OOP?

Mixing state between a bunch of different backing stores, or not making in memory data structures explicit, makes data structures harder to discover, but what does that have to do with OOP?


> Never start coding (making a solution) unless you fully understand the problem.

If that were only possible.

For example, say you want to write a C compiler. You have a C Standard as a guide. Nobody fully understands it. And implementing the Standard is only a small part of implementing a C compiler.

Your implementation isn't going to survive first contact with a user, either.

P.S. Paul Mensonides is the world's leading expert on the C preprocessor. That's a good indication that nobody else understands it :-)

Don't worry about it. Just start implementing the parts you do sort of understand, and keep iterating.


> Don't fight the tools: libraries, language, platform, etc.

That means you'll always be stuck doing things using existing paradigms. Which is fine if you're only developing an application, but not fine if you want to effect more fundamental change.

> Never start coding (making a solution) unless you fully understand the problem

Sometimes, I cannot understand the problem until I've started coding.

> Tech debt is like fast food.

Yes, in the sense that where I work, many people have it every day :-(

> go for this priority: Security > Reliability > Usability etc.

I don't buy that gradation.

> Don’t use dependencies

Contradicts a bunch of other points IMHO.

> Any function that’s not pure should be a class.

No. Verbs are not nouns. But - if he means functions with static variables, then maybe.

> Software is more fun when it’s made together. Build a sustainable community.

Very difficult in my experience. In a commercial setting, the company controls your software, and it's a totalitarian 'community', if it's a community at all. With your pet projects, it's often difficult to attract users to participate more actively.


Well, you can extend the tooling to suit your needs. As Stephen Covey says, sharpen the saw.


My big tip, always change the problem to fit the resources you have. If the problem is hard, don't solve it, change the problem.


Here’s a fun little story that’s stuck with me for years because it demonstrates this in a very hacker way.

(Forgive the lack of specifics; I tell this as a parable.)

A fancy new office tower was opening downtown. Around this time, office space was at a premium, so they soon sold out all their floorspace, and the project was considered quite successful except for one small problem: the architect seemed to have installed slow elevators.

Tenants began to complain that the ride up to their high, and very expensive, offices was taking too long. These elevators were stupidly slow.

The building management frantically called around the big elevator companies, getting motors upgraded as fast as possible, but the complaints kept coming and in the end no one could quote anything less than $100mil to structurally alter the building to house bigger motors and more shafts.

Then one day an independent construction contractor showed up who offered to do a retrofit to fix the issue for only $10mil. The desperate building management decided to try it.

So the guy took $500k of the money to buy and install a bunch of mirrors in the elevator booths, which had been sombrely (and expensively) wood-panelled. Sure enough, the complaints stopped coming. (And the contractor got a $9.5mil payday.)

The problem wasn’t that the elevators were slow. It’s that people got bored in them (in a world before smartphones). And what do people never get bored of doing? Looking at themselves in the mirror.

Of course, it’s annoying to save the equivalent of 95% of the cost but not see any of that money. You just had to deal with budget cuts that really should have killed any solution. Still a cool party trick I guess.


Fake progress bar?


Indeed. Or one that doesn't show progress at all; it just cycles through tips / puns.


I propose it's more like showing cat meme pics instead of a progress bar.

(presumably the lift has a "progress bar" equivalent in the form of the level indicator lights)


My 10 years have given me one principle - make money. I don't want to work like this forever.


I love this one:

> Bugs’ genitals are called copy & paste. That’s how they reproduce. Always read what you copy, always audit what you import. Bugs take shelter in complexity. “Magic” is fine in my dependency but not in my code.


> Any function that’s not pure should be a class. Any code construct that has a different function, should have a different name.

I definitely haven't been following this... anybody else using functions for nearly 99% of their frontend work? (Typically I'm in the React TypeScript world)


I’m from the same world as you and here’s how I understood this:

Any function that’s not pure should be a ~class~ component.

ie, when you have a bunch of functions that operate on the same kind of struct/set of state, and in fact you want to prevent other functions from messing with it (you want to make it internal), you group them in some way. The OO word for that is a class. React's equivalent is the component, which is much more composable because we ditched the stricter concept of inheritance that OO classes carry.
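
A minimal sketch of that grouping, with made-up names:

    // Impure functions sharing state get grouped so nothing else can
    // touch the internals; the OO spelling of that idea is a class.
    class RequestCache {
      private entries = new Map<string, string>(); // internal, hidden from callers

      set(key: string, value: string): void {
        this.entries.set(key, value); // impure: mutates internal state
      }

      get(key: string): string | undefined {
        return this.entries.get(key);
      }
    }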

Hell, the ES6 class system is really just syntactic sugar around Brendan Eich's nifty prototypal system. If it weren't for the weak typing and the lack of a standard library, JavaScript would be an amazing language.

…which is to say it would probably be Go haha


But what about hooks? These essentially break this point completely. Functions that literally have (side) 'effect' in the name! (useEffect)
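
A bare-bones sketch of what I mean (component name made up):

    import { useEffect, useState } from "react";

    // A function component whose whole job is a side effect:
    // subscribe to a timer on mount, clean it up on unmount.
    function Clock() {
      const [now, setNow] = useState(() => new Date());
      useEffect(() => {
        const id = setInterval(() => setNow(new Date()), 1000); // side effect
        return () => clearInterval(id); // cleanup: another side effect
      }, []);
      return now.toLocaleTimeString();
    }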

Over the past few years, at least on the frontend, I've found that in every case, what would be a class could be expressed as a function or a handful of functions. So yeah, still unsure about this point; maybe it was meant more for other software domains (backend, embedded, etc).
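
A tiny example of the kind of conversion I mean (names invented): a closure gives you the same information hiding as a private field.

    // What might have been a class with a private counter becomes a factory function:
    function makeCounter(start = 0) {
      let count = start; // captured state, hidden exactly like a private field
      return {
        increment: () => ++count,
        current: () => count,
      };
    }

    const counter = makeCounter();
    counter.increment(); // 1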

Funny you mention "grouping functions in some way"... I just made a post about how it makes a lot of sense to put a single function per file; with my organizing, a named folder is the only thing that "groups" my functions together. So far this style of organization is working well for us.

https://chrisfrewin.medium.com/advanced-code-organization-pa...


> Good code doesn’t need documentation

It's amazing how this POV lives on, even in an essay which decries technical debt. Documentation and well-written code fill different roles with only some overlap. Your code can be beautifully written, but if I need to read the whole 500+ lines to figure out what you're doing, rather than read a single paragraph at the top of the file, you've burned through a lot of my time. Good documentation explains the why, code explains the how.
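
Even a single (hypothetical) header comment shows the split: the paragraph carries the why, the code carries the how.

    /**
     * Why this module exists: payment rows arrive from two legacy systems,
     * one in whole currency units and one in minor units (cents). Everything
     * downstream assumes cents, so all amounts are normalized here first.
     */
    export function normalizeToCents(amount: number, isMinorUnits: boolean): number {
      return isMinorUnits ? amount : Math.round(amount * 100);
    }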


"Realize that every code has a life cycle and will die. Sometimes it dies in its infancy before seeing the light of production. Be OK with letting go."

I cannot disagree more. Code lingers. I work on a 13-year-old code base that is considered new by internal standards, and it's not uncommon to see lines that haven't been touched for 10+ years.


> Bugs take shelter in complexity. “Magic” is fine in my dependency but not in my code.

I disagree with this. Dependencies become your code and any magic will always come and bite you, no matter if it's written by you or a downstream dependency.

The guiding principle should be: Reject magic at all cost


I disagree. For a huge majority of projects and teams, it is infeasible to build everything in-house. The way I see it, you can either

1. take an extremely impractical amount of time reinventing the wheel in house, or

2. take a (IMO much smaller) hit when you fail to spend the 10 minutes reading the dependency's breaking changes. Read those, make fixes, and everything is working again. If you hit a major issue like this with your own tooling, you're at the mercy of your team's ability to quickly build a fix.

A refined version of this point may be something like "only use dependencies which are very well tested, documented, list known issues, and have an active community of development and releases" - don't just install any package or library because "it appears to solve my problem"


My point is: take dependencies, just skip the magic ones, because they are vile.


The thing I learnt at about year three was "if there is a solution that does not require new software, use that"

At the time it was early days for personal computers (1991) and the doctors in our organisation were very resistant so the clinical booking system became a board with cards in it.

These days people are much more used to computers, and have their own, but there is an underlying principle, closely related to KISS.

Software will not eat the world. If something can be achieved without software, it will likely (though not certainly) be cheaper and more reliable.

An example? Controlling a machine. Hydraulic systems are well understood, very effective. Putting a computer into the system generally decreases reliability for very little benefit.

And has crashed quite a few aeroplanes.


I dunno, trying to independently rediscover what’s in The Pragmatic Programmer doesn’t seem like the best way to learn. At least not the fastest.

Every one of these is covered by lessons that OP didn’t have to learn themselves if they had just paid attention to the guidance available from luminaries in the field.

This leads me to the one lesson I’d give instead; let other people do your work for you, whenever you can. #13 on this list is far too strict, you’ll spend your days maintaining commodity code and discovering for yourself all the pitfalls someone else could have just solved for you.

If you’re looking for the definitive version of this, definitely check out The Pragmatic Programmer.


> Don’t only write code for the happy scenario.

I always have had problems communicating this to my colleagues, managers and stakeholders.

I remember having this discussion with a stakeholder once when I was a consultant:

PM: "How long would it take to have a page with a table with this payment data and this design?"

Me: "The page you're looking for doesn't exist exactly, it's more complex than that."

PM: "What do you mean?"

Me: "There is a page that maybe, eventually will have this table".

PM: "But I want this"

Me: "Okay, but when the user lands on the page, there is still no data, it is being fetched from our backend service. We need to give the user some feedback and tell him his data is loading".

PM: "Okay, add a spinner or something".

Me: "But that's not enough. What if the server hasn't answered after 5 or 10 or 60 seconds? How would the user react? Should we retry the call? Should we give him some other feedback? What if the response actually contains no data, because for the given filters there are no payments? What if the data is malformed? This may crash the application. What if the user is not authenticated or not authorized for the data? Isn't this an admin-only route? Estimating the table itself is one job, but defining all the requirements and handling all the non-happy-cases or edge cases is a totally different game".

Problem is, explaining this to stakeholders, and sometimes even to colleagues who "don't care and just want the task to be over", isn't always simple, and the solution is generally "we will fix the backend so it always sends you correct data in acceptable time".

And then, you spend much more time fixing these edge cases after you go in prod and it becomes a giant spaghetti ball.

A seasoned and experienced backender understands very well that a (well-formed) 200 is just one of many possible answers he's going to give back based on the request and the database data, and yet we keep treating that one outcome as the only scenario.
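
To make that concrete, here's a minimal sketch of the states one "simple table page" actually has (all names hypothetical; assumes a runtime with AbortSignal.timeout):

    // The happy path is one branch of four.
    type PaymentsState =
      | { kind: "loading" }
      | { kind: "error"; message: string }   // timeout, 4xx/5xx, malformed body...
      | { kind: "empty" }                    // 200, but no rows for these filters
      | { kind: "loaded"; rows: Payment[] }; // the only state the PM asked about

    interface Payment {
      id: string;
      amount: number;
    }

    async function fetchPayments(url: string): Promise<PaymentsState> {
      try {
        const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
        if (!res.ok) return { kind: "error", message: `HTTP ${res.status}` };
        const rows = (await res.json()) as Payment[];
        return rows.length ? { kind: "loaded", rows } : { kind: "empty" };
      } catch (e) {
        return { kind: "error", message: String(e) }; // network failure or timeout
      }
    }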


If something feels wrong to you, it is almost certainly wrong. There are so many things I've seen that made me go "hmmm, that's odd", which I ignored to my peril.

An assert that doesn’t quite make sense. An argument type that feels too specific. A hang in ssh once in a while.

7 times out of 11 they’ll turn out later to be the first sign of an error you wish you’d looked into sooner. I guarantee it, 82%.


yep agreed.

I've often thought about what interesting findings you'd get back if time and people were infinite: every little thing that made you go "huh, that's weird" could go into a queue somewhere, and a few days later it would come back thoroughly investigated and documented by some smart person. I'd expect you'd find weird networking problems behind why ssh hangs sometimes, misconfigurations, waste, security vulns, etc. So many things we skip over day to day because they'd be a distraction.


> Always have pet projects.

I strongly don’t agree with this one. In my opinion, a well rounded individual will have hobbies and a life outside of programming that won’t leave time for pet projects.

Having said that, I do have a pet project that I’ve been “working on” for about 15 years. Maybe I’ll find some time to make some progress on it after I retire.


Why can't someone be a well-rounded individual with hobbies and a life outside of programming, and also have time for pet projects?


I think #1 is:

Rule #1: Software is a social service. If you're not writing for the user, but rather for the sake of writing code, you're not a good programmer. Write for the user, whether that user is Joe Sixpack or Jane Hotdev, or even if you are the user - assume the role of servicing the end user. Software is a social service.


Is every job a social service?


Pretty much. The money to pay for the work has to come from somewhere. Usually that is a person/organization/society.

But software specifically is social, because software is useless without a user.


That's a great list!

I feel like he "gets it."

#10 is my fave:

> When making decisions about the solution, all things equal, go for this priority: Security > Reliability > Usability (Accessibility & UX) > Maintainability > Simplicity (Developer experience/DX) > Brevity (code length) > Finance > Performance. But don’t follow that blindly because it is dependent on the nature of the product. Like any career, the more experience you earn, the more you can find the right balance for each given situation. For example, when designing a game engine, performance has the highest priority, but when creating a banking app, security is the most important factor.

I would add "Localizability," just before "Usability."

In my experience, security and localizability need to be integrated from the very first line of code. They can't be added after the fact.


Taken individually these items are valuable, but I am not sure about these kinds of lists. My own list would have changed each decade of my career. 10 years - idealism, 20 years - compromise, 30 years - realism, etc. It would be good to see a review by the author in 10 years.


It takes 20 years to learn to compromise? I would expect these kinds of lists to change very little after the first 10-15 years.


9 times out of 10 somebody else's code points the way. The tenth time, you didn't look hard enough. The exception to this rule lies in a very small % of coders who know who they are.

I hasten to add, I am not in that minority!


I realized at about 10 years into my career I would rewrite code as it was easier than learning someone else’s implementation.

Now that I’m cognizant of this I first thoroughly review other implementations to try and understand why decisions were made, ultimately saving me from repeating the same mistakes.

Inconsistent documentation is also a problem.


My friend, at this stage you become a journeyman programmer, if you are able to understand and continue someone else's conception of a reasonably complex domain!


All good guidelines. A corollary to #1 (don't fight the tools; use the right tool) is that you have to know lots of tools. Or at least enough of several tools to know when to pull out which tool. Otherwise, "if all you have is a hammer, all problems look like a nail".

I implement this by reading industry news and "spiking" projects with interesting frameworks, IDEs, etc. I find it helps to go broad in learning: learn a little about lots of things, then deep-dive when interest or need arises.


> Avoid overriding, inheritance and implicit smartness as much as possible. Write pure functions.

Amen, preach it from the mountain.
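
For anyone newer to the idea, a trivial sketch of the difference (names invented):

    // Pure: output depends only on inputs, nothing is mutated; trivially testable.
    const totalCents = (items: { priceCents: number }[]): number =>
      items.reduce((sum, item) => sum + item.priceCents, 0);

    // Impure: reads mutable state outside its arguments, so callers
    // can't reason about it (or test it) locally.
    let taxRate = 0.08;
    function totalWithTax(cents: number): number {
      return Math.round(cents * (1 + taxRate)); // hidden dependency
    }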

Can't say I agree performance is bottom of the priority pile. As someone who does numerical work semi regularly, performance can radically change the usefulness of software (interactive versus batch for example).


I'll add one:

Code is permanent. Don't rush it. If it takes you 1.5x the time that it should to complete a feature, that is a million times better than rushing something out that is broken. Even if people are waiting desperately and it's a blocking issue. Take your time, get it right once, and move on.


While living in Italy I learned the idiom "Chi va piano va sano e va lontano". Literally: "he who goes slowly goes safely and goes far", i.e. slow and steady wins. It has had a fundamental impact on my career. Now I don't rush to learn a new tech stack but take things more slowly and immerse myself in it.

"The only way to go far is to go right".


I see much in there that isn't a general principle at all but rather a reflection of the author's personality and preferences.

In other words: I'd be wary of sticking to these unless they fit who you are and the conditions in which you operate at your best.


After 20+ years as a dev I'll say that the first item in the list should be: understand your organization, boss and coworkers first.

What you do should be based on that analysis. If not, frustration, disappointment and conflict (internal and external) will follow.


A few problems with this:

> Don’t attach your identity to your code

Others on your team will. An example here is a craftsman taking pride in their work. Think about that the next time you are reviewing someone else's work.

> Security > Reliability > Usability (Accessibility & UX) > Maintainability > Simplicity (Developer experience/DX) > Brevity (code length) > Finance > Performance

Performance is last? Code length is more important than performance? That's wrong. Also, correctness is not mentioned at all; surely that has to be in the list.

> Don’t use dependencies unless the cost of importing, maintaining, dealing with their edge cases/bugs and refactoring when they don’t satisfy the needs is significantly less than the code that you own.

https://en.wikipedia.org/wiki/Not_invented_here

It's impossible to determine the cost of using a lib with these metrics. I bet this person had a problem with a dep a few years ago that caused a lot of emotional damage, and the response is to never use 3rd-party deps where possible. That's not a professional response to what happened, if that's the case. I'd hope most Staff Engineers don't have this attitude.

> Good code doesn’t need documentation

This is impractical; at the end of the day you'll need something for the user to look up for reference.

> Never start coding (making a solution) unless you fully understand the problem

Most devs don't get to decide this; they will be told to do X by someone in a large org and given a brief description of why. Refusing to do your work until you fully understand a problem will lead to dismissal.

Some good stuff in there but also some problems.


> Good code doesn't need documentation.

Here's the full quote from the article: "Good code doesn’t need documentation, great code is well documented so that anyone who hasn’t been part of the evolution, trial & error process and requirements that led to the current status can be productive with it. An undocumented feature is a non-existing feature. A non-existing feature shouldn’t have code." Maybe you're just objecting to the first part, but he isn't saying "don't document".


I can see where he's coming from. I started around the same time he did, so perhaps I can explain how I see it from my own experience.

>Performance is last? Code length is more important than performance? That's wrong. Also, correctness is not mentioned at all; surely that has to be in the list.

Depends on the scenario. Early in my career I was maintaining a scheduling DSS. You would enter some appointments and it would consider a bunch of rules the users had created and find an appointment with matched resources. It was very cool and I really got into it. One of our test cases was scheduling 12 related appointments all at once. After some tuning, our system could do it in about 10 seconds, give or take. I decided to give a prototype a whirl with different ways to store and process everything (in memory) and got it to schedule those 12 related appointments in under a second. I showed my boss and he was impressed, but ultimately said, "the users don't care about the difference between 10 seconds and 1 second; it's 1000x faster than doing it on paper."

When the OP says performance last, I feel this is what he means. Some developers tunnel-vision on picking the best algorithms and caching mechanisms above all else, letting the product face delays when no one cares about the difference between 10 seconds and 1 second. Of course, this is dependent on the solution: if you are Google or Netflix trying to serve millions of users, performance matters. Most of the time, great is the enemy of good in this regard, especially in enterprise apps. Also, you can always tune trouble spots later down the road.

>I bet this person had a problem with a dep a few years ago that caused a lot of emotional damage, and the response is to never use 3rd-party deps where possible.

I write stuff in-house whenever possible. I've been burned by external libs plenty of times, and there is one in my current codebase I wish I didn't have. It depends on the quality of your developers; for really good teams, this isn't a big deal at all. I use third-party stuff for more specialized libs like image conversion and whatnot. The benefit of in-house libs is that they only do what you need, so they are tight, and when something goes sideways, a smaller codebase satisfying 1 use case is easier to understand than a large lib trying to satisfy 10 different use cases. Plus, you don't always have access to the source code of third-party libs.

>> Good code doesn’t need documentation

> This is impractical; at the end of the day you'll need something for the user to look up for reference.

Yes, I'm torn on this. I've written plenty of documentation that nobody but me has ever read, though. If he means "code with good comments," I'm OK with that. External documentation, outside of good tight specs, is hit or miss. A lot of it is just box-checking.

>Most devs don't get to decide this; they will be told to do X by someone in a large org and given a brief description of why. Refusing to do your work until you fully understand a problem will lead to dismissal.

I don't disagree this happens, but if it does, you're probably in an environment that doesn't respect you, your work or your growth in the company. Get out ASAP. I always explain to my devs why I'm doing something; mainly so they won't write some funky code because they don't understand the problem.


Eeek

> our system could do it in about 10 seconds, give or take

> users don't care about the difference between 10 seconds and 1 second

They do, but your boss doesn't. That is, until they complain. Happened to me multiple times over the past 16 years. No one cares about performance until it's too bad; then suddenly it's a huge problem. I've been on several projects where fixing performance meant a rewrite due to this attitude. Performance should be much higher on the list, period. Also, not having correctness is insane.

> I've been burned by external libs plenty of times

Yes, and you're not the only one, but the "I've been burned, therefore no one can use 3rd-party libs" attitude is childish.

> A lot of it is just box-checking.

That's presumably coming from someone who has a lot of experience and is very familiar with the domain. For those who are new to the company, and new to working in tech in general, good documentation is a godsend.

> Get out ASAP

Ah yes, I've heard this many times. It's completely impractical and often comes from people in a privileged position. Not all devs are paid high amounts and can quit their job on a whim.


>They do, but your boss doesn't

You don't know. You don't work there. You're just making stuff up to bolster your position. Users never complained about the speed of the system based on that algorithm.

>Yes, and you're not the only one, but the "I've been burned, therefore no one can use 3rd-party libs" attitude is childish.

No one said that but you.

>is childish

If you can't attack the position, attack the person, right?

Well we'll just have to agree to disagree. I'll run my team how I see fit and you run your team how you see fit.


imho simplicity is the most important trait.

I have a rule I derive from that: like economy of movement, there must be a requirement for every change / every piece of code.

When nobody says what is slow, how can I optimize? I would pick simplicity and maintainability every time over some undefined performance, especially as highly performant solutions tend to be rather complicated compared to simple ones...

3rd party libs come at a price that is often overlooked/ignored...


The life cycle is related to the solution's quality. Code that is well executed never dies, especially when hardware has peaked.

I'm biased, but C (client) and Java (server) will never have substitutes.

The reason I can be absolute is that we have run out of energy.


Excellent rules (57 years). One quibble: I would rate:

Security > Maintainability > (...)


I have been programming professionally for 25+ years and I pretty much agree with the author. One thing I want to add is: “95% of tech problems are recreated by the developers themselves” :)


Inventing on principle: https://www.youtube.com/watch?v=PUv66718DII Thank you Bret Victor


#18 will take most people that long (a couple of decades) to actually follow. And it's the most important one. Even on OP's list it is near the end; IMO it should be the first point.


Does FAANG still ask you to Leetcode after 20+ years of experience?


Of course, and algo whiteboard sessions. It's even harder for seasoned devs, since they graduated ages ago.


experience != skill

Experience is merely the prerequisite for skill, it doesn't cause it.


Yes, if you apply for a SWE position.


I have more experience than you and agree with what you say. Your guiding principles coupled with 12 Factor are a solid foundation for software development.


> Stay clear from hype-driven development.

Man, I still remember when it felt like a new JS framework was coming out every day.


So in the end, it all comes down to "fail fast"?


"Pick the right tool for the job" --- exit


It’s my litmus test for idiots who think they have something useful to say.


A great list!


Lost me at #3



