zenbowman's comments | Hacker News

100%. I do this all the time, this will be very useful internally.


This was the norm about a decade ago. When I was at Hulu, we built our own analytics platform on top of Hadoop; we almost certainly wouldn't do that today with the amount of off-the-shelf tooling available.

Even things like protobuf or Avro weren't as broadly adopted (>80%) at the time; many, many companies persisted data using JSON or other text formats (which in retrospect was very dumb, but it was very normal for a while).
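
To make the contrast concrete, a minimal Rust sketch (hypothetical event type, with serde_json and bincode 1.x standing in as the text vs. binary encodings):

    use serde::{Deserialize, Serialize};

    // Hypothetical analytics event; any small record shows the same effect.
    #[derive(Serialize, Deserialize)]
    struct PlaybackEvent {
        user_id: u64,
        timestamp_ms: u64,
        action: String,
    }

    fn main() {
        let event = PlaybackEvent {
            user_id: 42,
            timestamp_ms: 1_700_000_000_000,
            action: "play".to_string(),
        };

        // Text encoding repeats field names and spells numbers out as digits.
        let as_json = serde_json::to_vec(&event).unwrap();
        // Compact binary encoding (Avro/protobuf behave similarly in spirit).
        let as_binary = bincode::serialize(&event).unwrap();

        println!("json: {} bytes, binary: {} bytes", as_json.len(), as_binary.len());
    }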


Beyond that, someone as emotionally brittle and allergic to criticism as he is will inevitably create a yes-men culture, which will destroy engineering discipline & lead to declining quality. No way around it.

The one thing you absolutely need to preserve engineering quality is the ability to take criticism.


It isn't clear Tesla ever had quality. His management persona was in place from day one and has very consistently produced decisions that hurt quality, to say nothing of meeting the real challenge of building a car company from scratch. Teslas have pretty consistently placed near the bottom of quality rankings.


What are you talking about? Removing turn signal stalks and adding a yoke with no progressive steering is engineering at its absolute finest. Do you really think Elon would just surround himself with yes men that implement stupid ideas just because he thinks it "looks cool"?


> Do you really think Elon would just surround himself with yes men that implement stupid ideas just because he thinks it "looks cool"?

Yes, I suspect a lot of us do think exactly that.


I think the commenter was being sarcastic via the "Removing turn signal stalks and adding a yoke with no progressive steering is engineering at its absolute finest" remark.


I was definitely being sarcastic!


Sorry for taking that wrong. The fanboy logic can be so ridiculous at times that it's really hard to tell.


I honestly struggle to tell these days.


Me too. I use the sarcasm marker (/s) nowadays when conveying that through text.


I would argue that it made sense for performance-sensitive code until the maturation of Rust, so it made sense even in 2015. But not today.


You pay a performance penalty

and I don't want to learn another language and package management system


I understand wanting to defend your baby, and I think C++ was VASTLY better than most other languages for performance-sensitive code until maybe 5 years ago - but the combination of memory safety and thread safety that Rust offers means that there are very few situations where C++ is the appropriate choice for a new project now.

I have written professionally in C++ for 20 years now, and I would pick Rust for a _new_ project/fresh codebase in a heartbeat. tokio alone is so vastly superior to anything you can do in async C++ that it makes zero sense to select C++ for a _new_ project (obviously if you have an existing codebase the price of interop may not be worth it).
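
For a sense of what I mean, a minimal sketch (hypothetical example, tokio with the full feature enabled) of the kind of thing that is a few lines with tokio and a project of its own in async C++:

    use tokio::time::{sleep, Duration};

    #[tokio::main]
    async fn main() {
        // Spawn a handful of concurrent tasks; the runtime schedules them
        // across a thread pool with no hand-rolled event loop or executor.
        let handles: Vec<_> = (0..4u64)
            .map(|i| {
                tokio::spawn(async move {
                    sleep(Duration::from_millis(10 * i)).await;
                    println!("task {i} finished");
                })
            })
            .collect();

        for handle in handles {
            handle.await.unwrap();
        }
    }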

Bjarne is being deliberately obtuse here.


This is a great writeup. I work on batch/streaming systems at Google, and I'm very excited by some of what I see in the Rust ecosystem, Arroyo included.


I wouldn't say "infinite"; it's still susceptible to read hotspotting, and while fine-grained locking generally enables higher write throughput, you can still get into a situation where interconnected updates end up being pretty slow.

That said, it's way better than anything else I've used in my career.
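
As a rough, purely in-process sketch of that trade-off (hypothetical Rust types, not how the real system works): sharding the locks lets unrelated writes proceed in parallel, but a hot key still serializes on its shard.

    use std::collections::HashMap;
    use std::sync::Mutex;

    // Finer-grained than a single global Mutex<HashMap<..>>: writes to
    // different shards don't contend, but every write to the same hot key
    // still queues up behind that key's shard lock.
    struct ShardedStore {
        shards: Vec<Mutex<HashMap<String, u64>>>,
    }

    impl ShardedStore {
        fn new(n: usize) -> Self {
            Self {
                shards: (0..n).map(|_| Mutex::new(HashMap::new())).collect(),
            }
        }

        fn shard_for(&self, key: &str) -> &Mutex<HashMap<String, u64>> {
            // Toy hash; real systems hash and partition far more carefully.
            let h = key
                .bytes()
                .fold(0usize, |acc, b| acc.wrapping_mul(31).wrapping_add(b as usize));
            &self.shards[h % self.shards.len()]
        }

        fn put(&self, key: &str, value: u64) {
            self.shard_for(key).lock().unwrap().insert(key.to_string(), value);
        }
    }

    fn main() {
        let store = ShardedStore::new(16);
        store.put("user:42", 1); // lands on some shard
        store.put("user:97", 1); // likely a different shard: no contention
        store.put("user:42", 2); // same hot key: waits behind the first write
    }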


Exactly. I used Unity starting from almost the first release, and attended Unite in 2007.

Unity was seen as a hobby engine then; we used it for the web, but for desktop we used Unreal. Godot today is way ahead of where Unity was circa 2007.


Same. Slowing down traffic is an inherent good, whether or not you have any intention of crossing.


Things like large functions or code duplication are not necessarily bad in the first place. A far bigger problem, which I encounter regularly, is the invention of extreme layers of abstraction, in the name of DRY, to avoid a small amount of copy-pasting and editing.
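
A contrived Rust sketch of what I mean (all names invented): one repeated line of formatting gets buried under a trait, two impls, and a factory, and the machinery ends up longer than the duplication it was meant to remove.

    // The "DRY" version: a trait, two impls, and a factory to avoid one repeated line.
    trait GreetingStrategy {
        fn greet(&self, name: &str) -> String;
    }

    struct FormalGreeting;
    struct CasualGreeting;

    impl GreetingStrategy for FormalGreeting {
        fn greet(&self, name: &str) -> String {
            format!("Dear {name},")
        }
    }

    impl GreetingStrategy for CasualGreeting {
        fn greet(&self, name: &str) -> String {
            format!("Hey {name}!")
        }
    }

    fn make_strategy(formal: bool) -> Box<dyn GreetingStrategy> {
        if formal {
            Box::new(FormalGreeting)
        } else {
            Box::new(CasualGreeting)
        }
    }

    // The "WET" version: two tiny functions you can read at a glance.
    fn formal_greeting(name: &str) -> String {
        format!("Dear {name},")
    }

    fn casual_greeting(name: &str) -> String {
        format!("Hey {name}!")
    }

    fn main() {
        println!("{}", make_strategy(true).greet("Ada"));
        println!("{}", formal_greeting("Ada"));
        println!("{}", casual_greeting("Ada"));
    }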

But an even bigger problem is lack of understanding of the problem domain and a lack of documentation on how you plan to fix the problem.


I have to admit: I am terrified of WET code. I do stop short of introducing abstraction monstrosities, but I usually do create what others would call unnecessary abstractions, to stay DRY.

Why? Because I tend to write all my code such that a complete stranger should be able to drop in and understand it. I constantly imagine that stranger looking over my shoulder while coding. I imagine the code should be maintainable and speak for itself without me there at all (I do write comments).

So, such a person SHOULD be able to change some value or logic somewhere, and rely on not having to do that anywhere else. That is the magic of local reasoning, as brought about by structured programming, after eradicating goto statements. WET code erodes that. I find it a very important principle though and value it highly.

An example where this falls apart is config files. For example, a port number might be repeated in different places. Comments are indispensable then, but they rot. So if possible, I encode it using actual language constructs.
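
For the port example, "actual language constructs" means roughly this (a hypothetical sketch; the names are made up):

    // One definition; the compiler, not a comment, keeps every use in sync.
    pub const METRICS_PORT: u16 = 9090;

    fn listen_address() -> String {
        format!("0.0.0.0:{}", METRICS_PORT)
    }

    fn health_check_url() -> String {
        format!("http://localhost:{}/healthz", METRICS_PORT)
    }

    fn main() {
        println!("{}", listen_address());
        println!("{}", health_check_url());
    }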

In summary, I do err on the side of DRY rather aggressively, but don’t follow it all of the time.


I've never understood this mentality; the magic of local reasoning is completely and utterly destroyed by abstractions. If I'm looking at your code, it's not because I'm doing literary analysis, it's because something is wrong or because I need to change something. The abstraction only increases the number of locations I need to look at to fully understand what's really happening. There is no clever naming of functions and methods that explains what's really happening to me, the reader, better than the code that's actually doing the real work. And worse, it bites you in the ass when, 5 layers down the call stack, you need to change the behavior of something, only to realize that doing so breaks a bunch of unrelated places in the code that need the current behavior. So much for locality.

If your code isn't abstracted and is WET, I actually only have to look at the code currently in front of me on my screen to know fully what's happening, and I can be absolutely sure that changing it won't affect anything else. True locality of thinking. Needing to use :vimgrep to update code in multiple places is smooth-brain, completely mechanical work compared to the hell of having to re-WET the code to split off and isolate the (potentially long) codepath that needs to change. And devs rarely put in the effort for that; more likely they'll plumb a flag all the way down through the call stack to change the behavior of an unrelated function via spooky action at a distance. Good luck figuring out that dependency later when you're starting from the lower function.
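
That flag-plumbing failure mode, as a toy Rust sketch (all names hypothetical):

    // The flag exists only because one distant caller wants different behavior
    // several layers down; every intermediate function has to carry it.
    fn handle_request(input: &str, skip_validation: bool) -> String {
        parse(input, skip_validation)
    }

    fn parse(input: &str, skip_validation: bool) -> String {
        normalize(input, skip_validation)
    }

    fn normalize(input: &str, skip_validation: bool) -> String {
        validate_and_store(input, skip_validation)
    }

    fn validate_and_store(input: &str, skip_validation: bool) -> String {
        // Spooky action at a distance: behavior here is decided by a caller
        // several frames up that this function knows nothing about.
        if skip_validation {
            format!("stored unchecked: {input}")
        } else {
            format!("stored validated: {input}")
        }
    }

    fn main() {
        println!("{}", handle_request("abc", false));
        println!("{}", handle_request("abc", true)); // the one special caller
    }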

My motto has always been: software is like pottery, once it DRYs it's much harder to change.


I agree with you that DRY purely for the sake of not repeating yourself is not good. But your local approach is flawed.

You still have to do the global analysis. You have to, because the local code you are fixing might be a piece of business logic that has been dripped all over the codebase by a WET programmer. Now you've fixed the logic in one place, but all the other places are still wrong.

The correct way to do it is to stay DRY when the reasons for changing a piece of code are going to be the same. The hypothetical business logic is an example: if the code doesn't just look the same but genuinely needs to be the same in all 15 places it's applied, then stay DRY. Other obvious examples are things like sorting algorithms; we banned hand-rolling those and put them in libraries for a reason.
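
A small invented example of the distinction: the discount rule below has exactly one reason to change, so it lives in one function that checkout and invoicing both share.

    // Business rule that must stay the same everywhere for the same reason:
    // if it changes, it changes for checkout and invoicing together.
    fn loyalty_discount(order_total_cents: u64, years_as_customer: u32) -> u64 {
        if years_as_customer >= 5 {
            order_total_cents / 10 // 10% off for long-time customers
        } else {
            0
        }
    }

    fn checkout_total(order_total_cents: u64, years_as_customer: u32) -> u64 {
        order_total_cents - loyalty_discount(order_total_cents, years_as_customer)
    }

    fn invoice_total(order_total_cents: u64, years_as_customer: u32) -> u64 {
        order_total_cents - loyalty_discount(order_total_cents, years_as_customer)
    }

    fn main() {
        assert_eq!(checkout_total(10_000, 6), 9_000);
        assert_eq!(invoice_total(10_000, 2), 10_000);
        println!("discount rule applied consistently");
    }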


I think this is the right mindset. As programmers, we need to be aware of and able to distinguish between when two things look the same and when two things are the same. Only one of those benefits from an abstraction.


Based on your name, I'd expect you'd be quite comfortable with producing WET work.


> Because I tend to write all my code such that a complete stranger should be able to drop in and understand it

This isn't an achievable goal for most complex systems. Even very well-written and well-documented codebases (e.g., tcmalloc, bigtable) require a good deal of background reading to develop a baseline understanding of what is going on.


> I am terrified of WET code.

and

> I usually do create what others would call unnecessary abstractions, to stay DRY.

Seem completely incompatible with

> Because I tend to write all my code such that a complete stranger should be able to drop in and understand it.

Now, I don't know the codebase you're in. It could be that your abstractions are perfectly fine, but DRY code != maintainable. You're sacrificing a ton of locality of behavior (LoB) to get that DRYness, and introducing potential spooky action at a distance. Not to mention the cognitive overhead of the abstractions.

I'm not saying never to abstract, but abstractions really only supply a benefit when the things involved are guaranteed to vary together, usually via some sort of physical process. If it's just business logic that happens to make them vary together today, eventually some dictate comes down from management to change one of them but not the other.

When this happens, you introduce weird bugs on the other side of the system. That's how a platform gets the reputation of being unmaintainable. In the bad old days, it used to be that it was globals being referenced by many different functions as well as unrestricted gotos being used to jump into the middle of a function, but I've seen it happen quite often with abstractions, even ones that seem like a good idea at the time.

The code we're talking about was actually pretty DRY. You need something that does the same thing as the last half of this function? Just push a different return address onto the stack and jump into it to re-use the code. Why repeat yourself? But it had terrible LoB, changing one function could break a completely unrelated function halfway across the project (and one that didn't even obviously call the function if you're using some sort of computed goto).

You've also identified that structured programming brought about an end to the worst of these abuses, but I think you've got the reason wrong. It's not about reducing the number of places where you need to change something. You can write perfectly structured code (actually, it's hard to write unstructured code these days) and still need to change logic in 5/10/20 places. And local reasoning is still preserved in that case, since you consider each of those places locally, by itself, assuming the logic is all in different functions/modules/etc. Structured programming changed so much because forcing functions to have defined entry/exit points allows for easier preservation of invariants. You can't have meaningful invariant checks if someone can just jump into your function right after those checks. It's also much easier to see what in the project depends on the code you're changing.
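
A tiny sketch of that last point (hypothetical example): because the function has a single entry point, the checks at its top hold for everything below them.

    // Callers can only enter through this one entry point, so the invariants
    // established by the early returns hold for the rest of the body.
    fn withdraw(balance_cents: u64, amount_cents: u64) -> Result<u64, String> {
        if amount_cents == 0 {
            return Err("amount must be positive".to_string());
        }
        if amount_cents > balance_cents {
            return Err("insufficient funds".to_string());
        }
        // From here on, 0 < amount_cents <= balance_cents is guaranteed;
        // nothing can jump past the checks above.
        Ok(balance_cents - amount_cents)
    }

    fn main() {
        assert_eq!(withdraw(1_000, 250), Ok(750));
        assert!(withdraw(1_000, 5_000).is_err());
        println!("invariants hold");
    }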


There's more than one way to implement DRY. Lots of times there is no superclass that captures the commonality, but there are functions that only need to be written once. Organizing a set of complex algorithm steps that share some commonality and have some differences is just hard sometimes.


Yeah, it feels like we cargo-culted too many "principles" like DRY without understanding what they actually mean. I see it all the time at my job (I review 5-10 PRs a day).

