
As somebody who "learned" C++ (Borland C++... the aggressively blue memories...) first at a very young age, I heartily agree.

Rust just feels natural now. Possibly because I was exposed to this harsh universe of problems early. Most of the stupid traps that I fell into are clearly marked and easy to avoid.

It's just so easy to write C++ that seems like it works until it doesn't...


> the options are to build more software or to hire fewer engineers.

To be cheeky, there are at least three possibilities you are writing off here: we build _less_ software, we hire _more_ engineers, or things just kinda stay the same.

More on all of these later.

> I am not convinced that software has a growing market

Analysis of market dynamics in response to major technological shocks is reading tea leaves. These are chaotic systems with significant nonlinearities.

The rise of the ATM is a classic example. An obvious but naive predicted result would be fewer employed bank tellers. After all, they're automated _teller_ machines.

However, the opposite happened. ATMs drastically reduced the cost of running a bank branch (which previously required manually counting lots of cash). More branches, fewer tellers per branch... but the net result was _more_ tellers employed thirty years later. [1]

They are, of course, now doing very different things.

Let's now spitball some of those other scenarios above:

- Less "software" gets written. LLMs fundamentally change how people interact with computers. More people just create bespoke programs to do what they want instead of turning to traditional software vendors.

- More engineers get hired. The business of writing software by hand is mostly automated. Engineers shift focus to quality or other newly prioritized business goals, possibly enabled by LLM-driven automation instead of e.g. traditional end-to-end tests.

- Employment- and software-wise, things stay mostly the same. If software engineers are still ultimately needed to check the output of these things, the net effect could just be that they spend a bit less time typing raw code. They might work a bit less; attempts to turn everyone into an "LLM tech lead" managing multiple concurrent LLMs could go poorly. Engineers might mostly take the efficiency gains for themselves as recovered free-ish (HN / Reddit, for example) time.

Or, let's be real, the technology could just mostly be a bust. The odds of that are not zero.

And finally, let's consider the scenario you dismiss ("more software"). It's entirely possible that making something cheaper drastically increases the demand for it. The bar for "quality software" could rise dramatically due to competition between increasingly LLM-enhanced firms.

I won't represent any of these scenarios as _likely_, but they all seem plausible to me. There are too many moving parts in the software economy to make any serious prediction on how this will all pan out.

1. https://www.economist.com/democracy-in-america/2011/06/15/ar... (while researching this, I noticed a recent twist to this classic story. Teller employment actually _has_ been declining in the 2020s, as has the total number of ATMs. I can't find any research into this, but a likely culprit is yet another technological shock: the rise of mobile banking and payment apps)


The most critical skill in the coming era, assuming that AI follows its current trajectory and there are no research breakthroughs in e.g. continual learning, is going to be delegation.

The art of knowing what work to keep, what work to toss to the bot, and how to verify it has actually completed the task to a satisfactory level.

It'll be different from delegating to a human; as the technology currently stands, there is no point in giving out "learning tasks". I also imagine it'll be a good idea to keep enough tasks for yourself to keep your own skills sharp, so if anything it's kinda the reverse.


> Sometimes after a night’s sleep, we wake up with an insight on a topic or a solution to a problem we encountered the day before.

The current crop of models do not "sleep" in any way. The associated limitations on long term task adaptation are obvious barriers to their general utility.

> When conversing with LLMs, I never get the feeling that they have a solid grasp on the conversation. When you dig into topics, there is always a little too much vagueness, a slight but clear lack of coherence, continuity and awareness, a prevalence of cookie-cutter verbiage. It feels like a mind that isn’t fully “there” — and maybe not at all.

One of the key functions of REM sleep seems to be the ability to generalize concepts and make connections between "distant" ideas in latent space [1].

I would argue that the current crop of LLMs are overfit on recall ability, particularly on their training corpus. The inherent trade-off is that they are underfit on "conceptual" intelligence: the ability to make connections between these ideas.

As a result, you often get "thinking shaped objects", to paraphrase Janelle Shane [2]. It does feel like the primordial ooze of intelligence, but it is clear we are still several transformer-shaped breakthroughs away from actual (human-comparable) intelligence.

1. https://en.wikipedia.org/wiki/Why_We_Sleep 2. https://www.aiweirdness.com/


Not really, no. The founders were not omniscient, but many of them publicly wrote about the problematic rise of political "factions" contrary to the general interest: https://en.wikipedia.org/wiki/Federalist_No._10


> One thing that's been really off putting about the technology industry is how fake-it-till-you-make-it has become so pervasive.

It feels accidental, but it's definitely amusing that the models themselves are aping this ethos.


The grid actually already has a fair number of (non-software) circular dependencies. This is why they have black start [1] procedures and run drills of those procedures. Or should, at least; there have been high profile outages recently that have exposed holes in these plans [2].

1. https://en.wikipedia.org/wiki/Black_start 2. https://en.wikipedia.org/wiki/2025_Iberian_Peninsula_blackou...


An analogy is the difference between vector and bitmap graphics.

CAD programs aren't just a different set of operations on the same data, they use an entirely different representation (b-rep [1] vs Blender's points, vertices, and polygons).

These representations are much more powerful but also much more complex to work with. You typically need a geometric kernel [2] to perform useful operations and even get renderable solids out of them.
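To make the gap concrete, here's a rough sketch (made-up Python data structures, not any real kernel's or Blender's API) of the same cylinder stored both ways:

    from dataclasses import dataclass
    from math import cos, sin, pi

    # Mesh style (Blender): explicit vertices and faces at a fixed resolution.
    # The curvature has already been baked away into flat quads.
    def cylinder_mesh(radius, height, segments=16):
        verts = []
        for i in range(segments):
            a = 2 * pi * i / segments
            verts.append((radius * cos(a), radius * sin(a), 0.0))
            verts.append((radius * cos(a), radius * sin(a), height))
        faces = [(2 * i, 2 * i + 1,
                  2 * ((i + 1) % segments) + 1,
                  2 * ((i + 1) % segments)) for i in range(segments)]
        return verts, faces

    # B-rep style: the side wall is an exact analytic surface plus its bounding
    # edges; a kernel evaluates or tessellates it on demand, at any precision.
    @dataclass
    class CylindricalFace:
        radius: float
        height: float

        def point(self, u, v):
            # exact point on the surface for any parameters (u, v)
            return (self.radius * cos(u), self.radius * sin(u), v * self.height)

The mesh is just the output of one particular tessellation; the b-rep keeps enough information to re-tessellate, offset, fillet, or intersect exactly.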

So sure, I suppose you could build all of that into Blender. But it's the equivalent of building an entire new complex program into an existing one. It also raises major interoperation issues. These two representations do not easily convert back and forth.

So at that point, you basically have two very different programs in a trenchcoat. So far the ecosystem has evolved towards instead building two different tools that are masters of their respective domains. Perhaps because of the very different complexities inherent in each, perhaps because it makes the handover / conversion from one domain to the other explicit.

1. https://en.m.wikipedia.org/wiki/Boundary_representation

2. https://en.m.wikipedia.org/wiki/Geometric_modeling_kernel


> CAD programs aren't just a different set of operations on the same data, they use an entirely different representation (b-rep [1] vs Blender's points, vertices, and polygons).

So with that in mind, there should be something that is possible to build in CAD but impossible to then build in Blender?

I know the differences between the two, I understand they're fundamentally different, yet I seem to be able to produce similar results to others using CAD, so I'm curious what results I wouldn't be able to reproduce in Blender.

Any concrete examples I could try out?


Sure. Create a diamond polygon and revolve it around a point.

Blender has methods and tools to _approximate_ doing this. It has a revolve tool... where the key parameter is the number of steps.

This is not a revolution, it's an approximation of a revolution with a bunch of planar parts.
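You can see how the "steps" parameter trades accuracy for polygon count with a little back-of-the-envelope Python (illustrative numbers, not Blender's internals):

    from math import cos, pi

    radius = 10.0  # mm from the revolve axis to the profile
    for steps in (8, 32, 128, 512):
        # each segment is a chord; its midpoint falls short of the true
        # circle by the sagitta r * (1 - cos(pi / steps))
        deviation = radius * (1 - cos(pi / steps))
        print(f"{steps:4d} steps -> max deviation ~ {deviation:.6f} mm")

No finite step count drives that error to zero; a b-rep kernel instead records "surface of revolution of this profile" and evaluates it exactly.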

BREP as I understand it allows you to describe the surfaces of this operation precisely and operate further on them (e.g. add a fillet to the top edge).

Ditto for things like circular holes in objects. With Blender, you're fundamentally operating on a bunch of triangles. Fundamental and important solid operations must be approximated within that model.

BREP has a much richer set of primitives. This dramatically increases complexity but allows it to precisely model a much larger universe of solids.

(You can kinda rebuild the functionality that geometric kernels have with geometry nodes in Blender now. This is a lot of work and is not a great user interface compared to CAD programs.)


This doesn't seem right to me. From the article I believe you are referencing ("What if AI made the world’s economic growth explode?"):

> If investors thought all this was likely, asset prices would already be shifting accordingly. Yet, despite the sky-high valuations of tech firms, markets are very far from pricing in explosive growth. “Markets are not forecasting it with high probability,” says Basil Halperin of Stanford, one of Mr Chow’s co-authors. A draft paper released on July 15th by Isaiah Andrews and Maryam Farboodi of MIT finds that bond yields have on average declined around the release of new AI models by the likes of OpenAI and DeepSeek, rather than rising.

It absolutely (beyond being clearly titled "what if") presented real counterarguments to its core premise.

There are plenty of other scenarios that they have explored since then, including the totally contrary "What if the AI stock market blows up?" article.

This is pretty typical for them IME. They definitely have a bias, but they do try to explore multiple sides of the same idea in earnest.


I think any productivity improvements AI brings will also create uncertainty and disruption in employment; maybe the latter is greater than the former, and investors see that.


> Understanding twos complement representation is an essential programming skill

The field of programming has become so broad that I would argue the opposite. The vast majority of developers will never need to think about, let alone understand, two's complement as a numerical representation.
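For the curious, the whole idea fits in a few lines of Python (a toy illustration, not tied to any particular hardware): a negative number is stored as the bit pattern you get by inverting the bits and adding one, so in 8 bits -1 and 255 are the same bits.

    def twos_complement(value, bits=8):
        # unsigned bit pattern that represents `value` in `bits` bits
        return value & ((1 << bits) - 1)

    assert twos_complement(-1) == 0b1111_1111      # -1 and 255 share a bit pattern
    assert twos_complement(-128) == 0b1000_0000    # the most negative 8-bit value
    assert (~5 + 1) & 0xFF == twos_complement(-5)  # negate = invert bits, add one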

