One thing that worries me about AI-generated code is that if there's an obscure bug that pops up later, there's no engineer to think "ok hang on, I remember something strange happening when I first wrote that code... let me have a look." Instead, there are only engineers who reviewed the code, which is of course a lot different from writing it.
> One thing that worries me about AI-generated code is that if there's an obscure bug that pops up later, there's no engineer to think "ok hang on, I remember something strange happening when I first wrote that code... let me have a look." Instead, there are only engineers who reviewed the code, which is of course a lot different from writing it.
I feel this is a very weak argument against AI. Professional software development rarely values crafting good code. You get it shipped to meet a deadline and keep technologically clueless management happy. Even orgs that value good code lose people to the job-hopping merry-go-round of taking a new role every couple of years for a decent pay raise. One of the reasons open source surpasses most closed source software despite a lack of funding is that you get a variety of individuals with different goals who are focused on making a maintainable and usable solution.
> Professional software development rarely values crafting good code. You get it shipped to meet a deadline and keep technologically clueless management happy.
While this is very common, you also have professional software developers with a deep sense of ownership over a system: they animated it, so when it’s being quirky in particular ways, they have an almost supernatural sense of which branches it’s following. You don’t really internalise the logic of a program by reading it; that’s a byproduct of having to come up with it. When part of that thinking is outsourced, some of that internalisation is lost.
How common is this situation, though? It's nice to be able to look at the history of some line of code and contact the person who wrote it, but in a company with a lot of turnover or internal promotion, that person often isn't available, or doesn't want to help you.
Nearly a quarter of HN submissions lately are just spammy ads for the AI sector. I seldom open them anymore, but I’ve noticed most of them get hundreds of comments. I wish people spent their time on something useful.
It sounds like a boastful lie told to pump the stock, but if not, the AI overlords may decide to write algorithms that effectively create a social credit service and not tell any humans.
Because saying "more than a quarter of all new code at Google is generated by free crowdsourcing from internet scraping" doesn't roll off the tongue as easily ;)
Google has two billion lines of proprietary code, conformant to their style guides and internal requirements. I can't imagine they'd poison their model with non-conformant third-party source.
Eh, I would imagine that more than a quarter of new code at any FAANG is boilerplate, which is ideal for current AI systems. I'm pretty anti-current-AI, but I can't say I'm not impressed at how well it handles boilerplate code.