More often than not, a problem that looks simple but turns out to have insane complexity once you dig a bit deeper is based on some faulty assumptions. Assumptions that either changed over time or never made sense to begin with. The vast majority of software systems out there are significantly simplifiable. However, not all developers are willing to analyze and question the fundamental assumptions of the systems they deal with. Most, in fact, are far more comfortable learning to jump through arbitrary hoops. They then brag about their "expertise" in the "domain" (which often doesn't even exist outside of software).
> The vast majority of software systems out there are significantly simplifiable.
I think exactly the opposite, actually.
Most programmers don't just write lines of code for no reason. They generally stop when the problem is solved.
So, you might be able to simplify new programs, but I wouldn't put a lot of money on it. The programmer would have had to miss something architecturally up front for that to be the case. It happens, but it's not that common.
So, you might be able to simplify production programs. Maybe, but those have a lot of bug fixes already implemented. And they encode the political state of the entity that built it (see: microservices). So, a program that has existed a while may not encode the current political system, but I don't think fixing that is making things simpler.
So, you might be able to simplify legacy programs. Perhaps, but those have a LOT of business state encoded in them--much of which is still used occasionally but you won't know that until you do a deep dig on the code. That rule for disbursements to Civil War descendants is there because the system still needed it up until a couple years ago.
Oddly, the best way I have found to simplify computer systems is to give the humans more power. Humans are good at exceptions; computers not so much. This is, sadly, anathema to modern programming and modern political/corporate structures.
>The programmer would have had to miss something architecturally up front for that to be the case. It happens, but it's not that common.
It is very, very common indeed. Seldom is software created with all the requirements and complexities of the problem known upfront. In fact, very often the process of developing the software itself reveals all the different ways in which the original specifications were imprecise and not completely thought through. And new features and requirements get added on later all the time.
Software is created in an iterative process of feedback and development. At each stage, the programmers take the shortest, most straightforward approach to solve the problem. When the programmers are not careful (or just lazy), they inadvertently add dependencies that shouldn't exist and make the whole thing more complex. Of course, any single infraction seems innocent enough, but eventually you almost always end up with software that is much more complex than it needs to be.
Here is a talk from all the way back in 2006 on how software is too complex. I am more than confident that the problem has only gotten worse in the last 15 years.
> At each stage, the programmers take the shortest, most straightforward approach to solve the problem.
Unfortunately, this is often the best case. There's a point in the life of most good programmers where they can't help but massively over-engineer everything they make, adding pointless interfaces and abstractions everywhere to chase the dream of reusability. This mindset will utterly bury your capacity to iterate.
I once saw a Java method trail to initialise something which was 19 levels deep - each level just making one method call to the next abstraction. You couldn’t just trace it in the IDE - lots of those calls called an interface method or something, so you had to hunt down the implementer. But it got worse. There was also a method trail alongside it for cleaning up that context. It had the same 19 levels deep trail, but after all that work the final callee was an empty function.
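For illustration, here's a minimal Python sketch of the shape of that code (the names are my own invention, and three hops stand in for the nineteen):

```python
# Hypothetical sketch of the anti-pattern described above: each layer
# exists only to delegate to the next "abstraction", and the matching
# teardown chain ends in a no-op.

class ContextInitializer:
    def initialize(self):
        return self._initialize_manager()

    def _initialize_manager(self):
        return self._initialize_factory()

    def _initialize_factory(self):
        # ...imagine sixteen more identical hops here...
        return {"ready": True}


class ContextCleaner:
    def cleanup(self):
        self._cleanup_manager()

    def _cleanup_manager(self):
        self._cleanup_factory()

    def _cleanup_factory(self):
        pass  # the final callee after the whole trail: an empty function
```

None of the intermediate layers add behavior, so every one of them is pure navigation cost for the reader.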
> Most programmers don't just write lines of code for no reason. They generally stop when the problem is solved.
If only. Over-engineering is definitely a thing. And programmers definitely miss something more often than not, even when over-engineering. Or even: especially when over-engineering.
You must have above average programming peers. It's my experience that architectural decisions that could cut future work by orders of magnitude are quite commonly missed.
Alan Perlis' old motivational-poster slogan "Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it." has a fair bit going for it.
I love that apart from the "Some can avoid it" bit — feels out of place with the rest. Who's "some"? Pithy slogans often come in three parts rather than four; I love the overall message, I think I'd just prefer it with that 25% removed.
I think this is a funny thing on several levels - one, you see the part that doesn't fit, so you remove it, making you the genius - in this way it's a nice motivational-poster-level thought.
But you could be one of the "some". And if you were, there would be no removing the "some" from the statement, because it really would be accurately describing that there are some who can avoid it.
No, it doesn't, and no, it isn't. As much as some people like to pretend it doesn't exist, accidental complexity is a thing, and reducing it is part of our job. Also, there are ways of removing complexity of processes before they are turned into software, but that requires talking with people and agreeing. Not every program has to do everything.
>As much as some people like to pretend it doesn't exist, accidental complexity is a thing
Yes, and it's often the best feature of the software, as far as the sales team is concerned, because it allows them to sell consultancy fees, or because it satisfies the customer by matching 100% of its crappy requirements (which it doesn't really need).
Of course it is, in fact it is pretty much all-pervasive in our field, even in very successful software companies - which makes me question whether removing it is as valuable as people make it out to be.
Yes, and I think you provided a lot of the answer.
You can spend 5 years customizing off the shelf ERP software to do insane things because John in accounting and Jane in HR won't budge from weird processes they built over the decades... Or you can use that opportunity for business transformation, simplify their processes and their program, save money, reduce errors, improve responsiveness and timeliness, and generally make it a win win win.
Not all complexity can be reduced of course. I can imagine self driving cars, rocket science, etc are all about critical edge cases.
But when it comes to business, and particularly back-office processes (where a huge amount of IT goes), a lot of complexity is unnecessary. Each company, each department, each person is certain they are special and unique, and that their requirements are paramount and unchangeable. But if you move around a bit you'll come to believe otherwise.
Of course it's valuable. Complexity makes software expensive to make, so removing it is literally worth a lot of money that can be better spent making features. Nobody buys software because it's accidentally complex; they buy it because of features, quality (which is something that's also affected by complexity!), etc.
The only reason complexity is prevalent is a shortage of geniuses (and also a shortage of "Some" too, if we're going by Alan Perlis' quote).
Most things in life and programming exist on a trade off curve of goodness vs simplicity. Making the thing better means giving up simplicity. Making it simple means removing features people want.
Geniuses invent new ways to get both. Those ideas permanently move the trade off curve outwards.
In programming, some abstractions which have done this are: Operating systems (Abstracting away the hardware), Grace Hopper’s invention of program libraries, high level languages and compilers, HTTP and JSON, tcp/ip (replacing custom transmission protocols), and there’s lots more. Calculus and the Arabic numeral system are examples in mathematics. (It’s insanely difficult to multiply using Roman numerals!)
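On the numerals point, a small Python sketch (names my own) of why positional notation moved the curve: to multiply two Roman numerals in practice, you decode them into positional form first, at which point multiplication becomes the easy digit algorithm everyone learns in school:

```python
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s: str) -> int:
    """Decode a Roman numeral, honoring the subtractive rule (IV = 4)."""
    total = 0
    for i, ch in enumerate(s):
        value = ROMAN[ch]
        # A smaller symbol before a larger one is subtracted (the I in IV).
        if i + 1 < len(s) and ROMAN[s[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

# To multiply XLVII by XIX, the practical route is to leave Roman
# notation entirely - decode to positional form, then multiply:
product = roman_to_int("XLVII") * roman_to_int("XIX")  # 47 * 19 = 893
```

The abstraction (place value) does the heavy lifting; the algorithm on top of it is trivial.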
I once read a book called "Simplicity", by de Bono I think it was...
He made the point (I think) that you do want simplicity, you don't want simplistic. That is: it should be able to do all the things you want to, but in an easy to use manner.
To me, simplicity means that there's clarity of purpose for each component and how it interfaces with other components. A large system can still be simple if the interactions and interfaces between the different components are reasonably well understood and justifiable. I think thoughtful UI design is how you prevent complexity. The UI for developers and software is the API.
When you start adding features and instead of generalizing you simply (heh) add a special case to allow two unrelated components to communicate, and then another, and another, until the two components become dependent on each other such that neither can do anything without the other; then you have a complex system, and the "UI" for the developer is worse because it's full of seemingly random elements in random places that only do one very specific thing.
>Operating systems (Abstracting away the hardware), Grace Hopper’s invention of program libraries, high level languages and compilers, HTTP and JSON, tcp/ip (replacing custom transmission protocols)
How many of us get to work at such lofty problems though?
>Calculus and the Arabic numeral system are examples in mathematics.
Eye roll. Of course reducing complexity is the very essence of mathematics. Let us not pretend that software engineers are mathematicians.
> How many of us get to work at such lofty problems though?
How many of us choose to work on such lofty problems? You can work on whatever you want, whenever you want. But lofty, unproven ideas don’t pay Google salaries. Your career is a choice, not a prison.
And of course most software engineers aren't mathematicians. We're talking about genius - and most of us are a long way from that. Most of us are lucky if we manage to invent a couple of complexity-reducing concepts in our entire careers.
But lots of great computer science ideas have “come down the mountain” from mathematics adjacent work. Functional programming wasn’t invented by C lovers like me. But I love closures, pure functions, and map / filter / reduce. This stuff makes certain problems lovely. And we wouldn’t have any of this without mathematically minded CS.
What? IME removing complexity removes costs. That's a good choice. I can't think of a case where it didn't work out that way. Of course this only works for the removable complexity, which is discovered through analysis (possibly iterative/recursive).
Time spent removing complexity is time not spent developing features. And your users/clients care more about features than complexity (which is completely invisible to them) or even speed. They don't even care if your app runs 10 times slower than it needs to, as long as it runs at an acceptable speed.
That is why software engineers are seldom rewarded at their companies for "removing complexity", only for finishing projects.
You're right that, given a set amount of programming time, working on features versus working on performance is zero-sum.
But that doesn't mean you should always work on features. If anything, it only means that feature-bloat is doubly bad, not only adding logic and binary size, but imposing opportunity cost.
Most users hate features. Nobody has ever sworn at their computer because they had to make a mess in Excel to use VLOOKUP() before XLOOKUP() was invented (even though I love XLOOKUP()). Plenty of swearing happens because the computer just won't respond or it's taking whole minutes to send each email when you're trying to leave work on Friday night.
> Plenty of swearing happens because the computer just won't respond or it's taking whole minutes
Of course, my point was that it is usually not beneficial to worry about performance until that happens. And when it does, improving the performance of the product becomes a project in its own right. Misbehaving parts of the software are diagnosed and fixed, usually by introducing even more complexity.
Sounds like our companies and clients are very different. As for the feature/maintenance trade-off, a reasonable CBA will show the answer. I don't think it's quite as absolute as you've described.
The number 1 assumption is that all text will arrive in the format you're expecting. Be that an encoding (everyone in the US speaks English, but UTF-8 is dangerously compatible with ASCII), or a character range (do we block input of certain characters?).
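A minimal Python sketch of that trap: pure-ASCII input hides the encoding assumption, and it only surfaces when the first non-ASCII byte shows up (the sample bytes here are just an illustration):

```python
# ASCII bytes decode identically as ASCII, Latin-1, or UTF-8, so code
# tested only on English input "works" under any assumed encoding.
ascii_bytes = b"hello"
assert ascii_bytes.decode("ascii") == ascii_bytes.decode("utf-8") == "hello"

# These are the Latin-1 bytes for "café"; reading them as UTF-8
# blows up at runtime.
latin1_bytes = "café".encode("latin-1")  # b'caf\xe9'
try:
    latin1_bytes.decode("utf-8")
    decoded_ok = True
except UnicodeDecodeError:
    decoded_ok = False  # 0xE9 on its own is not a valid UTF-8 sequence
```

The failure mode is nastier in reverse: decoding UTF-8 bytes as Latin-1 never raises, it just silently produces mojibake.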
Presentation is huge too. How do we render a pasted newline into a single-line block? Where do you put the cursor if someone pastes a block of Arabic text into your multiline input? What do you display if your font doesn't have the character they've copied and pasted from another source?
There's also just the basic assumption that "every keypress/chord equals a new character" - I have an app that I use every day that renders Ctrl+Backspace as a character rather than deleting the previous word.
Then there are input considerations: macOS uses Alt for per-word navigation, Windows uses Ctrl. Do you support the OS input conventions, or a popular editor's bindings (Emacs)? And how do you differentiate between an input to display and an input to treat as navigation? What about mobile? Most mobile keyboards support input gestures and autocomplete suggestions. How do you know whether to modify the current context or append to the input?
Finally, what about non-renderable input? If you're modifying a rich text string, how do you escape from a block, or how do you insert between two separate blocks?
This. My name is Kayodé Lycaon. Note the é... how many places don’t support it? Some people consider names to be sacred and changing the spelling is more than a little offensive. Even UTF-8 can’t represent all characters in use. I believe there are Japanese names that can’t be represented by its character set.
I get it’s technically difficult but so many places treat people who are different as edge cases to be optimized away.
UTF-8 can represent any Unicode code point. If there is a character that it can’t represent, then that’s because that character is not encoded in Unicode; it has nothing to do with UTF-8 itself.
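A quick Python illustration of that distinction (the byte values are standard UTF-8):

```python
# UTF-8 can encode the entire Unicode code space, U+0000 through
# U+10FFFF, using one to four bytes per code point.
assert "é".encode("utf-8") == b"\xc3\xa9"                    # 2 bytes
assert "漢".encode("utf-8") == b"\xe6\xbc\xa2"                # 3 bytes
assert chr(0x10FFFF).encode("utf-8") == b"\xf4\x8f\xbf\xbf"  # 4 bytes; the
                                                             # last code point

# What no encoding can do is conjure characters Unicode never assigned:
# a glyph with no code point has nothing to encode in the first place.
```

So "my name won't round-trip" is a Unicode-repertoire problem (or an application bug), not a UTF-8 limitation.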
Perhaps the assumption by Unicode that we need emoji modifiers instead of just more emojis. All ~2000 valid combinations are enumerated anyway and many require different images so they're effectively different isolated characters.
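To make the mechanism concrete, a short Python check (this is standard Unicode behavior): a skin-tone variant is a base emoji followed by a Fitzpatrick modifier, so one visible glyph is two code points:

```python
thumbs_up = "\U0001F44D"           # 👍 U+1F44D THUMBS UP SIGN
medium_tone = "\U0001F3FD"         # U+1F3FD EMOJI MODIFIER FITZPATRICK TYPE-4
variant = thumbs_up + medium_tone  # renders as a single toned glyph...

assert len(variant) == 2           # ...but it is two code points, not one
# Its UTF-8 form is just the two characters' encodings concatenated:
assert variant.encode("utf-8") == (thumbs_up.encode("utf-8")
                                   + medium_tone.encode("utf-8"))
```

Anything that counts "characters" naively - cursor movement, truncation, length limits - has to know about these sequences or it will split an emoji in half.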
Maybe the assumption that text is a sequence we edit by moving a cursor through it, as opposed to a specialized form of graphics? We've inherited the controls of a typewriter (which provided a very simple graphical model for a narrow range of languages), but maybe they aren't what we need.