I don’t think that’s the real dichotomy here. You can either produce 2-5x more good, maintainable code, or 10-50x more dogshit code that works 80-90% of the time and will be a maintenance nightmare.
Management has decided that the latter is preferable for short-term gains.
> You can either produce 2-5x good maintainable code, or 10-50x more dogshit code that works 80-90% of the time, and that will be a maintenance nightmare.
It's actually worse than that, because really the first case is "produce 1x good code". The hard part was never typing the code, it was understanding and making sure the code works. And with LLMs as unreliable as they are, you have to carefully review every line they produce - at which point you didn't save any time over doing it yourself.
Look at the pretty pictures AI generates. That's where we are with code now. Except you have ComfyUI instead of ChatGPT. You can work with precision.
I'm a 500k TC senior SWE. I write six-nines, active-active, billion-dollar-a-day systems. I'm no stranger to writing thirty-page design documents. These systems can work in my domain just fine.
> Look at the pretty pictures AI generates. That's where we are with code now.
Oh, that is a great analogy. Yes, those pictures are pretty! Until you look closer. Any experienced artist or designer will tell you that they are dogshit and don't have value. Look no further than Ubisoft and their Anno 117 game for proof.
Yep, that's where we are with code now. Pretty - until you look close. Dogshit - if you care to notice details.
Not to mention how hard it is to actually get what you want out of it. The image might be pretty, and kinda sorta what you asked for. But if you need something specific, trying to get AI to generate it is like pulling teeth.
Since we’re apparently measuring capability and knowledge via comp, I made 617k last year. With that silly anecdote out of the way, in my very recent experience (last week), SOTA AI is incapable of writing shell scripts that don’t have glaring errors, and also struggles mightily with RDBMS index design.
Can they produce working code? Of course. Will you need to review it with much more scrutiny to catch errors? Also yes, which makes me question the supposed productivity boost.
The problem is not that it can’t produce good code if you’re steering. The problem is that:
There are multiple people on each team; you cannot know how closely each teammate monitored their AI.
Somebody who does not care will vastly outperform you in output, by orders of magnitude. With the current unicorn-chasing trends, that approach tends to be more rewarded.
This produces an incentive to not actually care about quality. Which will cause issues down the road.
I quite like using AI. I do monitor what it’s doing when I’m building something that should work for a long time. I also do totally blind vibe-coded scripts when they will never see production.
But for large programs that will require maintenance for years, these things can be dangerous.
One thing LLMs are really good at is translation. I haven’t tried porting projects from one language to another, but it wouldn’t surprise me if they were particularly good at that too.
As someone who has done that in a professional setting, it really does work well, at least for straightforward things like data classes/initializers and average business logic with if/else statements. Things like code annotations and other more opaque constructs can get less reliable, though, because there are fewer 1-1 representations. It would be interesting to train an LLM on each newly encountered pattern and slowly build up a reliable conversion workflow.
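To illustrate the kind of near 1-1 mapping the comment above means, here is a hypothetical sketch (the `Invoice` class and its fields are made up for illustration): a Python dataclass shown as a comment, with the TypeScript translation an LLM would typically produce reliably.

```typescript
// Hypothetical source (Python):
//   @dataclass
//   class Invoice:
//       id: int
//       total: float = 0.0
//       def apply_discount(self, pct: float) -> float:
//           return self.total * (1 - pct / 100)

// Near 1-1 TypeScript translation -- the structure maps directly,
// which is why LLMs handle this kind of port well.
class Invoice {
  constructor(
    public id: number,
    public total: number = 0.0,
  ) {}

  applyDiscount(pct: number): number {
    return this.total * (1 - pct / 100);
  }
}
```

The opaque cases the comment mentions (annotations, decorators, framework magic) have no equivalent one-line counterpart, which is where the translation gets shaky.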
This highly depends on your current skill level and amount of motivation. AI is not a private tutor: it will not actually verify that you have learned anything unless you prompt it to. Which means you must not only know exactly what to search for (arguably already an advanced skill in CS) but also know how tutoring works.
When the iPhone came out, the sentiment was clearly the opposite. The “sweet solution” was ridiculed and workarounds were found. When the web caught up, it was plagued with self-inflicted performance issues. And eventually Apple decided not to invest in good PWA support.
I was an app advocate for a long time; now I’ve made a PWA and it’s maybe 90% there. But you still get behaviors that you cannot fix.
IMO the worst, however, is products that have a fully functional website but refuse to let you use it (e.g. Instagram).
Yes. It's improved now, but the mobile web was bad for a long time. The early days of Android experienced a "web-first" ecosystem by force, as lazy businesses just threw a webview around their site, and it was awful.
The main argument artists use isn’t that it is taking their job. The problem is that it was trained on their work without their consent and without compensation. This is fundamentally different from a WordPress or Squarespace, and arguably different from models trained on open source software only.
A result of a prompt you can’t copyright. I believe you can’t trace over a copyrighted work and claim it as your own either, so I’d say tracing over an AI-generated image wouldn’t fly either. But IANAL, so the details remain to be fleshed out. This would probably also break down if one uses a model that is not trained on any copyrighted data.
AI-generated images themselves can't be copyrighted, but if you modify them they can be considered copyrightable. That's the current landscape, though it's a pretty new legal standard, so we'll see how it plays out.
I know people love to make UIs stateless and functional. But they just aren’t. IMO UIs are fundamentally a bunch of state, graphically represented. So naturally all of the functional frameworks are full of escape hatches.
I’d rather have an honest framework than a chimera.
I have not followed SwiftUI recently, but when it was introduced I quite liked having the main composition in SwiftUI and writing more complex components in pure UIKit. Each could be used for what it is best suited to. But trying to shoehorn good interactivity into a SwiftUI component always ended in horrible code.
What about Elm? I think most people could grasp the elm architecture in an afternoon. To me this MVU style is pretty much perfect for UI.
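To show how small the Elm architecture really is, here is a minimal model-view-update (MVU) loop sketched in TypeScript. This is my own illustration, not Elm's actual API; the counter model, the message names, and the string-rendering `view` are all invented for the example.

```typescript
// The three pieces of MVU: a model, messages, and a pure update function.
type Model = { count: number };
type Msg = { kind: "increment" } | { kind: "decrement" };

// update is the ONLY place state changes: a pure function of (msg, model).
function update(msg: Msg, model: Model): Model {
  switch (msg.kind) {
    case "increment":
      return { count: model.count + 1 };
    case "decrement":
      return { count: model.count - 1 };
  }
}

// view is a pure function of the model (rendering a string instead of HTML).
function view(model: Model): string {
  return `count = ${model.count}`;
}

// The runtime simply folds incoming messages over the model.
const msgs: Msg[] = [
  { kind: "increment" },
  { kind: "increment" },
  { kind: "decrement" },
];
let model: Model = { count: 0 };
for (const msg of msgs) {
  model = update(msg, model);
}
```

That's the whole architecture: state lives in one place, changes only through `update`, and the view is derived, which is exactly what makes it graspable in an afternoon.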
I think a lot of the time React appears complex and hacky is because we tried to solve world hunger with one component. I've worked on plenty of React projects that were very easy to scale, modify and iterate because they focused so heavily on small independent stateless components.
Elm is awesome until you try to use it in an actual app. The amount of pain we went through trying to make a basic web app with a sidebar and a few pages... I don't remember the specifics, it was a few years ago, but I don't think Elm has changed much since then (it was 0.18).
> I know people love to make UIs stateless and functional. But they just aren’t. IMO UIs are fundamentally a bunch of state, graphically represented. So naturally all of the functional frameworks are full of escape hatches.
Functional does not mean no state, just constraining state to inputs and outputs. Breaking that is a choice, and not good design.
Elm, for example, provides all of that with one escape hatch: ports. It is really well-defined and does not fall into any of the impossibilities you mention.