I am actually surprised that the LLM came so close; I doubt its training set had examples for these numbers. This goes to the heart of "know-how". The LLM should have said "I am not sure", but instead it resorts to rhetoric to justify itself. It actually mimics human motivated reasoning. In organizations, management is impressed with this overconfident motivated reasoner because it mirrors themselves. To hell with the facts and the truth; persuasion is all that matters.
> It would be great if those scientists who use AI without disclosing it get fucked for life.
There need to be disincentives for sloppy work. There is a tension between quality and quantity in almost every product. Unfortunately, academia has become a numbers game driven by paper mills.
> If LLMs are covering a gap here maybe there's an opportunity for better, local, lower-tech tooling that doesn't require such a huge tech stack (and subscriptions/rent) to solve simple, tractable problems?
I see this with every new technology stack. Way back, we had folks putting out browser "applets" to do the same things that could be done in Excel. Then we had the same apps built in the cloud, on mobile, on iOS/Android, in React, on a Raspberry Pi, on a GPU, and so on: simple apps reinvented with each new tooling. It is almost the equivalent of `printf("hello world")` when you are learning a new language. This is not to undermine the OP's efforts; I see it in the spirit of learning rather than of solving a hard problem.
> Understanding (not necessarily reading) always was the real work.
Great comment. Understanding is mis-"understood" by almost everyone. :)
Understanding a thing equates to building a causal model of the thing. And I still do not see AI as having a causal model of my code, even though I use it every day. Seen differently, code is a proof of some statement, and verifying the correctness of a proof is what a code review is.
There is an analogue of Brandolini's bullshit asymmetry principle here: understanding code is 10 times harder than reading code.
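To make the "code is a proof" framing concrete, here is a minimal Haskell sketch of the Curry-Howard view; the function name and example values are mine, purely illustrative:

```haskell
-- Curry-Howard in miniature: a type is a proposition, and a total
-- program of that type is a proof of it.

-- Proposition: "if A implies B, and A holds, then B holds" (modus ponens).
-- Essentially the only total implementation is function application,
-- so the type checker does the proof review for us.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- A loose type like `Int -> Int` admits countless wrong programs;
-- reviewing such code means rebuilding its causal model by hand,
-- which is the harder "understanding" step described above.
main :: IO ()
main = print (modusPonens (+ 1) (41 :: Int))  -- prints 42
```

The weaker the types, the more of the proof the reviewer has to reconstruct in their head; that is where the 10x cost shows up.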
IMO, the OP has bad AI-assisted takes on almost every single "critical question". This makes me doubt whether he has breadth of experience in the craft. For example:
> Narrow specialists risk finding their niche automated or obsolete
Exactly the opposite. Those with expertise will oversee the tool. Those without expertise will take orders from it.
> Universities may struggle to keep up with an industry that changes every few months
Those who know the theory of the craft will oversee the machine; those who don't will take orders from it. Universities will continue to teach the theory of the discipline.
I think this is a fair take (despite the characteristic HN negativity/contrarianism), and succinctly summarizes a point that I was finding hard to articulate while reading the article.
My similar (but more verbose) take is that seniors will often be able to wield LLMs productively: good-faith LLM attempts will be the first step, but will frequently be discarded when they fail to produce the intended results. Personally, I find myself swearing at the LLMs when they produce trite garbage: output that immediately gets `gco .`-ed (reverted with `git checkout .`), or LLM MRs/PRs that get closed in favor of doing the prompted task manually.
Conversely, juniors will often wield LLMs counterproductively, unknowingly accepting tech debt that neither the junior nor the LLM will be able to correct past a given complexity.
I am not sure why the OP is painting this as us-vs-them, pro-AI or anti-AI. AI is a tool. Use it if it helps.
I would draw an analogy here between building software and building a home.
When building a home, we have a user providing the requirements, the architect/structural engineer providing the blueprint to satisfy those requirements, the civil engineer overseeing the construction, and the mason laying the bricks. Some projects may have a project manager coordinating these activities.
Building software is similar in many respects to building a structure. If developers think of themselves as masons, they limit their perspective. If AI can help lay the bricks, use it! If it can help with the blueprint or the design, use it. It is a fantastic tool in the tool belt of the profession. I think of it as a power tool and want to keep its batteries charged so I can use it at any time.
> This substantially reduces the incentive for the creation of new IP
As a result of this, the models will start consuming their own output for training. This will create new incentives to promote human-generated code.
> I can confirm that they are completely useless for real programming
Can you elaborate on "real programming"?
I am assuming you mean problems that are genuinely hard, since the value of the work is measured in those terms. Easy problems have boilerplate solutions and have been solved numerous times in the past; LLMs excel here.
Hard problems require intricately woven layers of logic and abstraction, and LLMs still struggle with them since they do not have causal models. The value, however, lies in solving these kinds of problems, since the easy ones are assumed to be solved already.