Hacker News | silisili's comments

> In practice what I see fail most often is not premature optimization but premature abstraction

This matches my experience as well.

Someone here commented once that abstractions should be emergent, not speculative, and I loved that line so much I use it with my team all the time now when I see the craziness starting.


Similar to when someone said that the rule of thumb should not be DRY (don't repeat yourself) but WET (write everything twice) - that is, be happy to repeat similar code once or twice, and wait for the need for abstraction to become evident.

It's advice I like, as I'm prone to falling into design paralysis while trying to think of the One True Abstract Entity.
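As a toy Python sketch of how "write everything twice" plays out (all names here are hypothetical, purely for illustration):

```python
# First use: just write it inline.
def report_users(users):
    lines = [f"{u['name']}: {u['email']}" for u in users]
    return "\n".join(lines)

# Second use: repeat yourself and resist abstracting; the duplication is cheap.
def report_orders(orders):
    lines = [f"{o['id']}: {o['total']}" for o in orders]
    return "\n".join(lines)

# Third use: the shape is now evident, so extract it.
def report(items, fmt):
    """The emergent abstraction, extracted only after the pattern repeated."""
    return "\n".join(fmt(item) for item in items)

users = [{"name": "Ada", "email": "ada@example.com"}]
print(report(users, lambda u: f"{u['name']}: {u['email']}"))
```

The point of the sketch: `report` is obvious once you have two concrete copies in front of you, and nearly impossible to guess correctly before you do.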


Some abstractions are obvious though. There is value in reusing them across projects and not reinventing the wheel each time. I don’t think this is counter to what you propose necessarily, just adding the nuance that abstractions serve another purpose: communicating intent to the readers of the code. Sometimes a concrete type is unnecessary detail.

Absolutely. I don't take that quote to mean we should all start from ground zero every time... if so, experience wouldn't have much value.

I think the point is more that the abstractions now obvious to you emerged at some earlier point, and you now recognize them, so applying them makes sense. Speculative implies predicting, and if you've been around a domain long enough, you likely aren't guessing.

It's probably something that applies more to younger devs than older, but not exclusively of course.


I completely agree with you, and that is an amazing quote.

> abstractions should be emergent, not speculative

Dang, I love that line, too. It is 100% correct, and the opposite of what OOP teaches.


But they should emerge right when they are needed, not some time later via a painful refactor.

I think every dev lives in fear of the situation where an abstraction hasn't been introduced when it really should have been. Maybe some junior dev came in after you and shoehorned a bunch of features into your "simple" solution. Now you have to live with that but, guess what, there's no time to refactor it; in fact the business just wants even more features.

As usual, these rules work best if everyone on the team understands them in the same way. If you work with people who can only see what is right in front of them (i.e. the current feature), then it'll never work. You can always fit "one more feature" into the perfect codebase without thinking about the abstractions.


In a perfect world, sure. In my personal experience, refactoring to introduce an abstraction is far less painful than refactoring to remove or fix one.

Usually because intent is far clearer in the former case.


“Abstractions should be written in blood”

As much as I see these 'prediction markets' as thinly veiled gambling, I agree. What both have in common is a direct buyer and seller: a seller who thinks the item will become worth less, and a buyer who thinks the opposite. Kalshi just skims the transaction fees.

If you view it in those terms, it's really not much different than the stock market/broker relationship. Surely someone will say 'well at least stocks are ownership', let me introduce you to derivatives.

The real question I guess is how we come to terms with house gambling/prediction markets/stock markets being three sides of the same coin and how to regulate that.


I find derivatives on stocks extremely weird. I get the idea of a primary market in goods; that can be useful, and even there a secondary market might be reasonable. But once it goes beyond use by the primaries, maybe it should be banned. You should hold, or at least have a clear plan to produce, the goods you buy or sell, so that at least one side of the trade has a real stake in the game. And everything should be delivered, not just settled.

Having puts on stuff you do not have, or buying calls from someone who does not have it, just feels like pure speculation, thus gambling rather than hedging.


Derivatives are a de facto prediction market. It feels strange to regulate them any differently than Kalshi et al.

This is why Kalshi will win in the end. There's way too much money at play for this to turn into a slippery slope for Wall Street.


Oh this is clever. In writing my own short feelings about LinkedIn, it turned 'circle jerk' into 'synergize through mutual validation.'

Am I hopeless if LinkedIn Speak is what made me truly understand (as in, properly categorize people from my past) the meaning of 'circle jerk'?

I'm not sure the notion I keep seeing of "it's ok, we still architect, it just writes the code"(paraphrased) sits well with me.

I've not tested it with architecting a full system, but assuming it isn't good at it today... it's only a matter of time. Then what is our use?


Others have already partially answered this, but here's my 20 cents. Software development really is similar to architecture. The end result is an infrastructure of unique modules with different types of connectors (roads, grids, or APIs). Until now, in SW dev, the grunt work was mostly done by the same people who did the planning, decided on the types of connectors, etc. Building architects also use a bunch of software tools to aid them, but there must be a human being at the end of the chain who understands human needs, who understands (after years of studying and practicing) how the whole building and its infrastructure will behave at large, and who is ultimately responsible for the end result (and hopefully rewarded in proportion to the complexity and quality of that result). So yes, we will not need as many SW engineers, but those who remain will work on complex, rewarding problems and will push the frontier further.

The "grunt work" is in many cases just that. As long as it's readable and works it's fine.

But there are a substantial number of cases where this isn't true. The nitty-gritty is then the important part, and it's impossible to make the whole thing work well without being intimate with the code.

So I never fully bought into the clean separation of development, engineering and architecture.


Since I worked as an architect, some comments.

Architecture is fine for big, complex projects. Having everything planned out beforehand keeps costs down and ensures the customer will not come in with late changes. But if costs are expected to be low, and there's no customer, architecture is overkill. It's like making a movie without following the script line by line (watch Godard in Nouvelle Vague), or like building a house yourself or through a non-architect: 2x faster, 10x cheaper. You can immediately spot an inflexible, over-architected project.

You can do fine by restricting the agent with proper docs, proper tests and linters.


> Then what is our use?

You will have to find new economic utility. That's the reality of technological progress; it's just that the tech and white-collar industries didn't think it could come for them!

A skill that becomes obsolete is useless, obviously. There's still room for artisanal/handcrafted wares today, amidst industrial-scale production, so I would assume similar levels for coding.


Assuming the 'artisanal' niche will support anything close to the same number of devs is wishful thinking. If you want to stay in this field, you either get good at moving up a level (stitching model output together, checking it against the repo and the DB, and debugging the weird garbage LLMs make up), or you get comfortable charging a premium for the software equivalent of hand-thrown pottery that only a handful of collectors buy.

LLMs can build anything. The real question is what is worth building, and how it’s delivered. That is what is still human. LLMs, by nature of not being human, cannot understand humans as well as other humans can. (See every attempt at using an LLM as a therapist)

In short: LLMs will eventually be able to architect software. But it’s still just a tool


> LLMs can build anything.

This is only possibly true if one of two things are true:

1. All new software can be made up of preexisting patterns of software that can be composed, i.e. there is no such thing as "novel" software; it's all just composition of existing software.

2. LLMs are capable of emergent intelligence, allowing them to express patterns that they were not trained on.

I am extremely skeptical that either of these is true.


Fair enough; I can see the exaggeration.

It is not impossible, however, that an LLM could run enough “random” tests to find new ways of doing something, but I hear you.

Let me restate that to “An LLM can build most anything…” and I stand by the rest of my comment.


I think that makes sense, the rest of your comment is definitely true.

What is the use of software eng/architect at that point? It's a tool, but one that product or C levels can use directly as I see it?

Yes, for building something

But for building the right thing? Doubtful.

Most of a great engineer’s work isn’t writing code, but interrogating what people think their problems are, to find what the actual problems are.

In short: problem solving, not writing code.


Where does this recent delusion come from that great engineers didn't write code?

What a load of crap.

All you're doing is describing a different job role.

What you're talking about is BA (business analyst) work, and a subset of engineers are great at it, but most are just OK.

You're claiming a part of the job that was secondary, and not required, is now the whole job.


I never said great engineers didn’t write code. But writing the code was never the point.

The point has always been delivering the product to the customer, in any industry. Code is rarely the deliverable.

That’s my point.


And a horse breeder was important to transportation until the 1920s, but it doesn't mean their job was transportation.

They didn't magically become great truck drivers.

Programmers do not deliver products, they deliver code to make products.

If the code is no longer needed, nor is the job. A different job will replace it with different skills required.


> And a horse breeder was important to transportation until the 1920s, but it doesn't mean their job was transportation. They didn't magically become great truck drivers.

Again: unrelated and pointless analogy. The horse breeder would be analogous to chipmakers or companies that make computers. Turns out they have more of a job than ever. They don’t need to “become truck drivers.”

> Programmers do not deliver products, they deliver code to make products.

That’s not even a little bit true. Programmers deliver product every day: see every single startup on the planet, and most companies.

Moreover, you said programmer. I didn’t.

I said software engineer/architect, as that was what the parent comment asked.

I chose my words intentionally. I am referring to people who engage in the act of engineering or architecting software, which is definitely not limited to writing code.

Yes, a pure programmer (aka a researcher or a junior programmer) may not fare as well, for the reasons you mentioned.

But that was never who we were discussing.

If you still think the code is the point, I’m not sure we’re going to see eye to eye, and I’m going to just agree to disagree. And if that’s the case, then you’re right: you may be left behind, keyboard in hand.


> But writing the code was never the point.

Is that why most prestigious jobs grilled you like a devil on algos/system design?

> The point has always been delivering the product to the customer, in any industry. Code is rarely the deliverable.

That’s just nonsense. It’s like saying “delivering product was always the most important thing, not drinking water”.


It's well understood that programming interviews are a pretty shitty tool. They're a proxy for understanding if you have basic skills required to understand a computer. Notably, most companies don't rely on these alone, they have behavioral questions, architecture questions, etc. Have you ever done an interview at these companies you're talking about? They're 8 hours lol maybe 1 is spent programming.

But it's just very obvious to any software engineer worth anything that code is just one part of the job, and it's usually somewhere in the middle of a process. Understanding customer requirements, making technical decisions, maintaining the codebase, reviewing code changes/ providing feedback, responding on incidents, deciding what work to do or not to do, deciding when a constraint has to be broken, etc. There are a billion things that aren't "typing code" that an engineer does every day. To deny this is absurd to anyone who lives every day doing those things.


Good luck passing Leetcode before you even get to all of these. Whether they're shitty or not is irrelevant; they're here, and that's a fact.

And what do you derive from that fact? The position is that coding is only one portion of the job. "But there's a coding interview" was used to rebut this position. I have pointed out that the coding interview is a fraction of the process, once again indicating that the job involves much more than coding.

So you saying "but there's a coding interview" again... who cares? Why is that relevant?


Everybody who works for a salary cares. You can lament that coding is just 1% of the work, but it's irrelevant what percentage is "real" coding when you can't pass that coding round and don't get hired.

I have literally no clue what point you're trying to string together. I tried to refocus things to the topic at hand but you're just saying completely irrelevant things. What is your point?

Yeah, this is precisely what I meant.

I'm genuinely blown away at the recent attitude that developers spend their time programming / that our primary value is code. I guess because we tend to be organizationally isolated, people just have no idea? But like... it's so absurd to anyone who does the job. It's like thinking that a PM's primary role is assigning tickets: just so obviously false.

I think there's some resentment. I've seen repeatedly now people essentially celebrating that "tech bros" are finally going to see their salaries crash or whatever, it's pretty sick but I've noticed this quite a lot.


> Is that why most prestigious jobs grilled you like a devil on algos/system design?

No. That’s because interviews have always sucked, and have always been terrible predictors of how you do on the job. We just never had a better way of deciding except paying for a project.

> That’s just nonsense. It’s like saying “delivering product was always the most important thing, not drinking water”.

That’s… not an argument? It’s not even a strawman, it’s just unrelated.

The thing a customer pays for has always been the end product. Not the code. This is trivially easy to see, since a customer has never asked to read the code.


> No. That’s because interviews have always sucked, and have always been terrible predictors of how you do on the job. We just never had a better way of deciding except paying for a project.

Who cares? They’re here, and they will stay here for foreseeable future.

> That’s… not an argument? It’s not even a strawman, it’s just unrelated. The thing a customer has always paid for was the end product. Not the code. This is absolutely trivial to see, since a customer has never asked to read the code.

Yeah, and they didn’t pay for the water that you drank. Without which, you know, you’ll fucking die. Code is part of the package, just like you eating and shitting in the process.


A software engineer will be a person who inspects the AI's work, same as a building inspector today. A software architect will co-sign on someone's printed-up AI plans, same as a building architect today. Some will be in-house, some will do contract work, and some will be artists trying to create something special, same as today. The brute labor is automated away, and the creativity (and liability) is captured by humans.

> It's a tool, but one that product or C levels can use directly as I see it?

Wait, I thought product and C level people are so busy all the time that they can’t fart without a calendar invite, but now you say they have time to completely replace whole org of engineers?


FWIW I find LLMs to be excellent therapists.

The commercial solutions probably don't work because they don't use the best SOTA models and/or sully the context with all kinds of guardrails and role-playing nonsense, but if you just open a new chat window in your LLM of choice (set to the highest thinking paid-tier model), it gives you truly excellent therapist advice.

In fact in many ways the LLM therapist is actually better than the human, because e.g. you can dump a huge, detailed rant in the chat and it will actually listen to (read) every word you said.


Please, please, please don’t make this mistake. It is not a therapist. At best, it might be a facsimile of a life coach, but it does not have your best interests in mind.

It is easy to convince and trivial to make obsequious.

That is not what a therapist does. There’s a reason they spend thousands of hours in training; that is not an exaggeration.

Humans are complex. An LLM cannot parse that level of complexity.


You seem to think therapists are only for those in dire straits. Yes, if you're at that point, definitely speak to a human. But there are many ordinary things for which "drop-in" therapist advice is also useful. For me: mild road rage, social anxiety, processing embarrassment from past events, etc.

The tools and reframing that LLMs have given me (Gemini 3.0/3.1 Pro) have been extremely effective and have genuinely improved my life. These things don't even cross the threshold to be worth the effort to find and speak to an actual therapist.


Which professional therapist does your Gemini 3.0/3.1 Pro model see?

Do you think I could use an AI therapist to become a more effective and much improved serial killer?


I never said therapists were only for those in crisis; that is a misreading of my argument entirely.

An LLM cannot parse the complexity of your situation. Period. It is literally incapable of doing that, because it does not have any idea what it is like to be human.

Therapy is not an objective science; it is, in many ways, subjective, and the therapeutic relationship is by far the most important part.

I am not saying LLMs are not useful for helping people parse their emotions or understand themselves better. But that is not therapy, in the same way that using an app built for CBT is not, in and of itself, therapy. It is one tool in a therapist’s toolbox, and will not be the right tool for all patients.

That doesn’t mean it isn’t helpful.

But an LLM is not a therapist. The fact that you can trivially convince it to believe things that are absolutely untrue is precisely why, for one simple example.


As you said earlier, therapists are (thoroughly) trained on how to best handle situations. Just 'being human' (and thus empathizing) may not be such a big part of the job as you seem to believe.

Training LLMs we can do.

Though it might be important for the patient to believe that the therapist is empathizing, so that may give AI therapy an inherent disadvantage (depending on the patient's view of AI).


Socialization with other humans has so many benefits for happiness, mental health, and longevity. Conversely, interaction with LLMs often leads to AI psychosis and harms mental health. IMO, this is pretty strong evidence that interaction with LLMs is not similar to socialization with real humans, and a pretty good indicator that LLM “therapy” is significantly less helpful or even harmful than human-driven therapy.

Precisely.

> Just 'being human' (and thus empathizing) may not be such a big part of the job as you seem to believe.

The word “just” is not in my comment anywhere. Being human is necessary, but not sufficient.

And no, you cannot train an LLM to be human.

An LLM is not a therapist. Please do not confuse the two.

You cannot train an LLM on how to be human.


While I agree with you, I also find that an LLM can help organize my thoughts and lead me to realizations I just didn't get to on my own, because I hadn't explained verbally what I am thinking and feeling. It is definitely not a substitute for human interaction and relationships, which can be fulfilling in many, many ways LLMs are not, but LLMs can still be helpful as long as you exercise your critical-thinking skills. My preference remains to always talk to a friend, though.

EDIT: seems like you made the same point in a child comment.


Yeah, I agree with all of that. A friend built an “emotion aware” coach, and it is extremely useful to both of us.

But he still sees a therapist, regularly, because they are not the same and do not serve the same purpose. :)


Not OP, and nothing makes it impossible, but from my experience in big companies, being visible is way more useful than being productive.

I got further faster by just answering emails right away than by churning out code. I got constant kudos, which got me promoted, and invited to more meetings, which led to less actual work. All because I just started replying to emails sent to our group. In retrospect it feels pretty perverse.

In lean companies and startups...perhaps not so much.


Having worked in a very large company for the past two decades now, one of the best pieces of career advice I ever got is about how you measure whether you are a "good employee".

It is very simple: you are a good employee if your boss(es) think you are.

That’s it. Nothing else matters in terms of career advancement or retainment.


I'd argue it's more workload dependent, and everything is a tradeoff.

In my own testing of compressing internal generic json blobs, I found brotli a clear winner when comparing space and time.

If I want higher compatibility and fast speeds, I'd probably just reach for gzip.

zstd is good for many use cases, too, perhaps even most...but I think just telling everyone to always use it isn't necessarily the best advice.


> If I want higher compatibility and fast speeds, I'd probably just reach for gzip.

It’s slower and compresses less than zstd. gzip should only be reached for as a compatibility option, that’s the only place it wins, it’s everywhere.

EDIT: If you must use it, use the modern implementation, https://www.zlib.net/pigz/


Any claims about compressing programs are extremely data-dependent so any general claims will be false for certain test cases.

I do a lot of data compression and decompression, and I would have liked a lot to find a magic algorithm that works better than all others, to simplify my work.

After extensive tests I have found no such algorithm. Depending on the input files and depending on the compromise between compression ratio and execution times that is desired, I must use various algorithms, including zstd and xz, but also bzip2, bzip3 and even gzip.

I quite frequently use gzip (executed after lrzip preprocessing) for some very large files, where it provides better compression at a given execution time, or faster execution at a given compression ratio, than zstd with any options.

Of course for other kinds of files, zstd wins, but all the claims that zstd should be used for ALL applications are extremely wrong.

Whenever you must compress or decompress frequently, you must test all available algorithms with various parameters to determine what works best for you. For big files, something like lrzip preprocessing should also be tested, as it can change the performance of a compression algorithm a lot.
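A minimal sketch of that kind of bake-off, using only Python's standard-library codecs (gzip, bzip2, xz; zstd and brotli would need third-party bindings, and the sample data here is a placeholder you'd swap for your real files):

```python
import bz2
import gzip
import lzma
import time

def benchmark(name, compress, data):
    """Compress `data` once and report size and wall-clock time."""
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(out)} bytes in {elapsed:.3f}s")
    return len(out)

# Replace with a representative sample of your real data;
# results are extremely data-dependent.
data = b"some repetitive payload " * 10_000

codecs = {
    "gzip":  lambda d: gzip.compress(d, compresslevel=6),
    "bzip2": lambda d: bz2.compress(d, compresslevel=9),
    "xz":    lambda d: lzma.compress(d, preset=6),
}

sizes = {name: fn and benchmark(name, fn, data) for name, fn in codecs.items()}

# On repetitive input, every codec should beat the raw size.
assert all(size < len(data) for size in sizes.values())
```

The ranking you get from a toy payload like this says nothing about your workload; the only honest approach is to rerun it over your actual files and the parameter ranges you'd really use.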


The whole problem is trying to be a catchall where people with zero knowledge or skills can hang out. Twitter/X and Reddit especially suffer from it.

Topical forums tend to have a much higher SNR. My favorite forum of all time, johnbridge, had none of those issues. Sadly it died this year all the same, but many others still exist. When you have a forum dedicated to something that requires a minimum barrier to entry, the more useless folks get shunned away pretty early and easily.


> After a couple million dollar lawsuits the city or state will learn to be more careful with their methods

You'd think, but watching how many millions my local police department and city paid out every single year leads me to believe they just don't care.


How many, exactly? Anyone can wave vagueness around. Do you have numbers or no?

I haven't lived there in years, nor do I have exact numbers, but they make national news enough for the same problem nearly every year. I'll drop you some links if you care.

1 - 38 million between 2017 and 2022.

2 - 29 million in 2023.

3 - 12 million in settlements in 2025.

Dare I keep going?

[1]https://www.wdrb.com/in-depth/louisville-payouts-for-police-...

[2]https://www.aol.com/louisville-paid-least-29m-settle-1030450...

[3]https://www.courier-journal.com/story/news/local/2026/02/04/...


The region's GDP is 100 billion dollars, so these are tiny amounts, although they may seem large to some.

And the first article you link proves that people are already worried about it. You think they can safely 10x that?


LMPD's budget is $250m; if they lost ~10% of it every year, they'd surely notice.

> The region's GDP is 100 billion dollars, so these are tiny amounts, although they may seem large to some.

It's a fair point, and easy to handwave away as "it's only $100 per resident." But it's still a lot of money. And yet that city is shutting down schools and selling off school properties to make budget this year. I bet they'd love to have those wasted millions back.

> You think they can safely 10x that?

I have no idea the reason for this question. The OP said cities learn after a couple of million-dollar suits. I'm showing that no, they do not. If anything, the suits are increasing.


> I have no idea the reason for this question

Well it does make sense, in the full context of the thread. I'll let future readers decide.


NYC has, over the last decade or so, averaged $1M/week in judgments against NYPD for abuses of authority.

And yet this article itself has all the hallmarks of AI slop. Fitting.

Yeah, I wrote it myself, but since I'm not a native English speaker, I do use AI to "fix" and "polish". I think AI made me a worse writer, in a way, lol.

It's my bad; I should have been more careful about keeping the content the way I wrote it, without so much of the fine-tuning GPT did.


In a way, you did what you preached. You used judgement to determine what to write about and then had AI touch it up for you.

I don't view it as a bad thing.


That's true! That makes me feel better, lol. The internet is so sensitive to AI slop, it's hard to know the right balance of usage when writing.

I would have preferred your real writing over the AI-ified slop.

Whatever points this author was trying to make were completely obliterated by the LLM it was run through or used to generate it.

A shame, because it seems to have interesting points, but it was too wordy and LLM-ified to hold attention. Stop telling me what it's not every other sentence, and just say what you mean. I wish folks would just use their own words.

