
> Now I just assume they're taking my feedback and feeding it right back to the LLM.

This is especially annoying when you get back a response in a PR "Yes, you're right. I have pushed the fixes you suggested."

Part of the challenge (and I don't have an answer either) is there are some juniors who use AI to assist... and some who use it to delegate all of their work to.

It is especially frustrating that the second group doesn't become much more than a proxy for an LLM.

New juniors can still progress in software engineering, but only through disciplined use of AI: making sure they're learning the material rather than delegating all their work to it. And delegating is very tempting, especially if that's what they did in college.





I must ask once again why we run these 5+ round interview cycles yet still can't filter for the qualities the work actually requires. What are all those rounds for if the engineers coming out of the pipeline aren't what the team needs?

> I must ask once again why we run these 5+ round interview cycles yet still can't filter for the qualities the work actually requires.

Hiring well is hard, especially if compensation isn't competitive enough to attract talented individuals who have a choice. It's also hard to change institutional hiring practices. Nobody ever got fired for buying IBM, and nobody gets fired for following the same hiring practices that were in place in 2016.

> What are all those rounds for if the engineers coming out of the pipeline aren't what the team needs?

Software development is a multidisciplinary field. It involves multiple non-overlapping skill sets, both hard and soft. You also need multiple people vetting a candidate to guard against corruption and to help weed out candidates who outright clash with company culture. You need to understand that hiring someone is a disruptive activity: it affects not only which skill sets are available in your organization but also the current team dynamics. If you read around, you'll stumble upon stories of people who switch roles in reaction to new arrivals. It's important to get this sort of stuff right.


>It's important to get this sort of stuff right.

Well, I'm still waiting. Your second paragraph seems to contradict your first, which perfectly encapsulates the issue with hiring: too afraid to try new things, so instead we add bureaucracy to diffuse accountability.


> Well, I'm still waiting. Your second paragraph seems to contradict your first, which perfectly encapsulates the issue with hiring: too afraid to try new things, so instead we add bureaucracy to diffuse accountability.

I don't think you've spent much time thinking about the issue. Changing hiring practices does not mean they improve; it only means they changed. You are still faced with the task of hiring adequate talent, but if you change processes, you no longer have baselines and past experience to guide you. If you keep your hiring practices, you stick with something proven to work, albeit with debatable optimality, and you mitigate risk because your experience with the process helps you spot red flags. The worst-case scenario is that you repeat old errors, but those will be systematic errors, downplayed by the fact that your whole organization is proof that your hiring practices are effective.


>Changing hiring practices does not mean they improve.

No, but I'd like to at least see conversation on how to improve the process. We aren't even at that point. We're just barely past acknowledging that it's even an issue.

>but if you change processes, you no longer have baselines and past experience to guide you.

I argue we're already at this point. The reason we got past the above point of "acknowledging the problem" (a decade too late, arguably) is that the baselines are failing against new technology, which is increasing false positives.

You have a point, but why does tech pick this moment to finally decide not to "move fast and break things"? Not when it comes to law and ethics, but when acquiring new talent (which, meanwhile, is already disrupting their teams with AI slop)?

>those will be systematic errors, downplayed by the fact that your whole organization is proof that your hiring practices are effective.

okay, so back to step zero then. Do we have a hiring problem? The thesis of this article says yes.

"it worked before" seems to be the antipattern the tech industry tried to fight back against for decades.


> No, but I'd like to at least see conversation on how to improve the process. We aren't even at that point. We're just barely past acknowledging that it's even an issue.

The current hiring practices are the result of acknowledging that what came before didn't work. And they work well enough that companies don't want to change them; the only ones who want change are the engineers, not the companies.


Nit (not directed at you): I don't appreciate being flagged for pointing out the exact issue from the article, only for someone to dismiss it with "well, companies are making money, clearly it's not a crisis".

This goes beyond destructive thinking. Again, I hope the companies reap what they sow.


>What dumpster fire?

If you're not going to even acknowledge the issue in the article, there's no point in discussing the issue in a forum. Good day.


There's no fix for this problem in hiring upfront. Anyone can cram and fake if they expect a gravy train on the other end. If you want people to work after they're hired, you have to be able to give direct negative feedback, and if that doesn't work, fire quickly and easily.

>Anyone can cram and fake if they expect a gravy train on the other end.

If you're still asking trivia, yes. Maybe it's time to move past the old filter and update the process?

If you can see on the job that a 30-minute PR is the problem, then maybe replace that third leetcode round with 30 minutes of pair programming. It's hard to ChatGPT in real time without arousing suspicion.


That approach to interviewing will cause a lot of false negatives. Many developers, especially juniors, get anxious when thrown into a pair programming task with someone they don't know and will perform badly regardless of their actual skills.

I understand that, and I had some hard anxiety myself back then. Even these days I may be a bit shaky when live coding in an interview setting.

But is the false negative for a nervous pair programmer worse than the false positive for a leetcode question? Ideally a good interviewer would be able to separate the anxiety from the actual thinking and see that this person can reason through the problem, but that's another skill undervalued in the industry.


I don't know why people are so hesitant to just fire bad people. It's pretty obvious, once someone starts actually working, whether they're going to be a net positive. On the order of weeks, not months.

Given how much these orgs pay, both directly to headhunters and indirectly in interview time, they might as well probationally hire whoever passes the initial sniff test.

That also lets you evaluate longer term habits like punctuality, irritability, and overall not-being-a-jerkness.


Not so fast. I've "saved" guys from being fired by asking for more patience with them. The last one wasn't on my team, as I had moved on to lead another team. It turned out the guy had displeased an influential team member, who then complained about him. What I saw instead was a quiet young guy who had been given boring work and was longing for something more interesting. A bit later he took ownership of a neglected project, completed it, and made a name for himself.

It takes considerably more effort and skill to treat colleagues as humans rather than "outputs" or ticket processing nodes.

Most (middle) management is an exercise in ass-covering rather than in creating healthy teams. They get easily scared when "Jira isn't green" and look for someone else to blame for their own failure to manage properly.


Sunk cost. You've spent... 20 to 100 hours on interviews. Maybe more. Doing it again is another expense.

Onboarding. Even with good employees, it can take a few months to get the flow of the organization, understand the code base, and understand the domain. Maybe absorb a bit of a technology shift, too. Firing a person who doesn't appear to be performing in the first week or two or three would be churning through people too fast.

Provisional hiring with "maybe we'll hire you after you move here and work for us for a month" is a non-starter for many candidates.

At my current job and the one before it, it took two or three weeks to get things fully set up: equipment, provisioned permissions, accounts, training. The retail company I worked at from '10 to '14 sent every new hire out to a retail store to learn how the store runs, to get a better idea of how to build things for the stores and support their processes.

... and not every company pays Big Tech compensation. Sometimes it's "this is the only person who didn't say «I've got an offer with someone else that pays 50% more»". Sometimes a warm body that you can delegate QA testing and pager duty to (rather than software development tasks) is still a warm body.


It's really not easy to calculate the output of any employee, even with years of data, and it's way harder for a software engineer or any other job with that many facets. If you've found a proven, reliable way to evaluate someone in the first two weeks, you've just solved one of the biggest HR problems ever.

What if, and hear me out, we asked the people a new employee has been onboarding with? I know, trusting people to make a fair judgment lacks the ass-covering desired by most legal departments but actually listening to the people who have to work with a new hire is an idea so crazy it might just work.

> I don’t know why people are so hesitant to just fire bad people.

"Bad" is vague, subjective moralist judgement. It's also easily manipulated and distorted to justify firing competent people who did no wrong.

> It's pretty obvious, once someone starts actually working, whether they're going to be a net positive. On the order of weeks, not months.

I feel your opinion is rather simplistic and ungrounded. Only the most egregious cases become apparent in a few weeks' worth of work. In software engineering positions, you don't get the chance to let your talents shine through in the span of a few weeks. The cases where incompetence does become obvious that quickly actually spell gross failure in the whole hiring process, which failed to verify that the candidate even met the hiring bar.

> (...) might as well probationally hire whoever passes the initial sniff test.

This is a colossal mistake, and one which disrupts a company's operations and candidates' lives. Moreover, it has a chilling effect on the whole workforce, because no one wants to work for a company run by sociopaths who toy with people's lives and livelihoods as if they were nothing.


> manipulated and distorted to justify firing competent people

If you have that kind of office politics going on, that's the issue to be solved.

>toy with people's lives and livelihoods as if they were nothing.

If the employee lies about their skills, it is on them.


Every style of interview will cause anxiety, that's just a common denominator for interviews.

The same could be said for leetcode. Except leetcode doesn't test actual skills in 2025.

The bar for “junior” has quietly turned into “mid-level with 3 years of production experience, a couple of open-source contributions, and perfect LeetCode” while still paying junior money. Companies list “0-2 years” but then grill candidates on system design, distributed tracing, and k8s internals like they’re hiring for staff roles. No wonder the pipeline looks broken. I’ve interviewed dozens of actual juniors in the last six months. Most can ship features, write clean code, and learn fast, but they get rejected for not knowing the exact failure modes of Raft or how to tune JVM garbage collection on day one. The same companies then complain they “can’t find talent” and keep raising the bar instead of actually training people.

Real junior hiring used to mean taking someone raw, pairing them heavily for six months, and turning them into a solid mid. Now the default is “we’ll only hire someone who needs zero ramp-up” and then wonder why the market feels empty.


It's the cargo cult kayfabe of it all. People do it because Google used to do it, now it's just spread like a folk religion. But nobody wants guilds or licensure, so we have to make everyone do a week-long take-home and then FizzBuzz in front of a very awkward committee. Might as well just read chicken bones, at least that would be less humiliating.

And who would write the guild membership or licensure criteria? How much should those focus on ReactJS versus validation criteria for cruise missile flight control software?

Guild members? Who else?

You’re asking these rhetorical questions as if we haven’t had centuries of precedent here, both bad and good. How does the AMA balance between neurosurgeons and optometrists? Bar associations between corporate litigators and family estate lawyers? Professional engineering associations between civil engineers and chemical engineers?


> Professional engineering associations between civil engineers and chemical engineers?

One takes the FE exam ( https://ncees.org/exams/fe-exam/ ). You will note at the bottom of the page "FE Chemical" and "FE Civil" which are two different exams.

Then you have an apprenticeship for four years as an Engineer in Training (EIT).

Following, that, you take the PE exam. https://ncees.org/exams/pe-exam/ You will note that the PE exams are even more specialized to the field.

Licensing also depends on the state (states tend to have reciprocal licensing, but not always, and not necessarily for all fields). For example, if you were licensed in Washington, you would need to pass another exam specific to California to work for a California firm.

Furthermore, there are continuing education requirements (which differ by state). https://www.pdhengineer.com/pe-continuing-education-requirem...

You have to take 30 hours of certified study in your field every two years. This isn't a lot, but people tend to fuss about "why do CS people keep being expected to learn on our own?" Well, if we were Professional Engineers it wouldn't just be an expectation - it would be a requirement to maintain the license. You will again note that the domain of the professional development differs, so civil and mechanical engineers aren't necessarily taking the same types of classes.

These requirements are set by state licensure boards as part of legislative processes.


So what you’re saying is that it’s a solved problem. If we can figure out how to safely certify both bridge builders and chemical engineers working with explosives, we can figure out a way to certify both React developers and those working on cruise missile flight control software.

I'm saying the idea that you can do one test for software engineering and never have to study again or be tested on a different domain in the future isn't something that professional engineering licensure solves.

Furthermore, licensure requires state level legislation and makes it harder for employees (especially the EIT) to change jobs or move to other states for work there.

Licensure, in the way people often point to it as a fix for the credentials-versus-interviews problem, isn't going to solve the problems people think it would.

Furthermore, it only matters if there is a reason to require it. If there is no reason to have a licensed engineer signing off on designs and code, there is no reason for a company to hire one.

Why should a company pay more for someone with a license to design their website when they could hire someone cheaper who doesn't have one? What penalties would a company face for having a secretary do some VBScript in Excel, or a manager use Access, rather than hiring a licensed developer?


You seem to be confused. The AMA doesn't control physician licensing. That's done by state medical boards.

But are you suggesting we have separate licenses for every different type of developer? We have new types coming up every few years.

The whole idea of guilds for developers is just stupid and impractical. It could never work on any long term or large scale basis.


Good catch on the AMA. I should have said medical licensing boards.

> But are you suggesting we have separate licenses for every different type of developer? We have new types coming up every few years.

I didn’t suggest that at all and I honestly can’t figure out how you came to that interpretation unless you are hallucinating.

> The whole idea of guilds for developers is just stupid and impractical. It could never work on any long term or large scale basis.

What a convincing argument! You should get a cabinet post.


Guilds and licensure perform gatekeeping, by definition, and the more useful they are at providing a good hiring signal, the more people get filtered out by the gatekeeping. So there's no support for it because everyone is afraid that effective guilds or licensing would leave them out in the cold.

Yeah, I'd be more than fine with licensing if I didn't have to keep going through 5 rounds of trivia only to be ghosted. Let me do that once and show I can code my way out of a paper bag.

I can understand such a process for a fresh graduate, but for an industry veteran with 10+ years of experience and recommendations from multiple senior managers?

And yet welcome to leetcode grind.


Yeah, I was told I'd get less of this as I got real experience. More additions to the pile of lies and misconceptions.

If you need to FizzBuzz me, fine. But why am I still building a word-search solver in my free time as if I'm applying for a college internship?
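(For context, the FizzBuzz screen mentioned above is roughly this much code - a minimal Python sketch of the classic question, not anyone's official grading rubric:)

```python
# FizzBuzz: the classic "can you code at all?" screening question.
# Multiples of 3 -> "Fizz", of 5 -> "Buzz", of both -> "FizzBuzz".
def fizzbuzz(n: int) -> list[str]:
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```

If a candidate can write and explain that much, a fourth trivia round probably isn't telling you anything new.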


I’ve started using ChatGPT for their take home projects, with only minor edits or refactors myself. If they’re upset I saved a couple hours of tedium, they’re the wrong employer for me.

And I’m being an accelerationist hoping the whole thing collapses under its own ridiculousness.


Also, they explicitly say not to use AI assistance for such assignments.

Recruitment is broken even more than before chatgpt.


> there are some juniors who use AI to assist... and some who use it to delegate all of their work to.

Hmmm. Is there any way to distinguish between these two categories? Because I agree, if someone is delegating all their work to an LLM or similar tool, cut out the middleman. Same as if someone just copy/pasted from Stackoverflow 5 years ago.

I think it is also important to think about incentives. What incentive does the newer developer have to understand the LLM output? There's the long term incentive, but is there a short term one?


Dealing with an intern at work who I suspect is doing exactly this, I discussed it with a colleague. One way seems to be to organize a face-to-face meeting where you test their problem-solving skills without AI; another may be to question them about their thought process as you review a PR.

Unfortunately, the use of LLMs has brought a lot of mistrust into the workplace. Earlier, you'd simply assume that a junior making mistakes is part of being a junior and can be coached; nowadays said junior may not be willing to take your advice, as they see it as sermonizing when an "easy" process to get "acceptable" results exists.


The intern is not producing code that is up to the standard you expect, and will not change it?

I saw a situation like this many years ago. The newly hired midlevel engineer thought he was smarter than the supervisor. Kept on arguing about code style, system design etc. He was fired after 6 months.

But I was friendly with him, so we kept in touch. He ended up working at MSFT for 3 times the salary.


    > Earlier, you'd simply assume that a junior making mistakes is part of being a junior and can be coached; nowadays said junior may not be willing to take your advice
Hot take: This reads like an old person looking down upon young people. Can you explain why it isn't? Else, this reads like: "When I was young, we worked hard and listened to our elders. These days, young people ignore our advice." Every time I see inter-generational commentary like this (which is inevitably from personal experience), I am immediately suspicious. I can assure you that when I was young, I did not listen to older people's advice and I tried to do everything my own way. Why would this be any different in the current generation? In my experience, it isn't.

On a positive note: I remember mentoring some young people and watching them comb through blogs to learn about programming. I am so old that my shelf is/was full of O'Reilly books; by the time I was mentoring them, few people under 25 were reading those. It opened my eyes to the fact that how people learn changes more than what they learn. Example: someone is trying to learn about access control modifiers for classes/methods in a programming language. Old days: get the O'Reilly book for that language and look up access modifiers in the index. 10 years ago: Google for a blog with an intro to the language; there will be a tip about what access modifiers can do. Today: ask ChatGPT. In my (somewhat contrived) example, the how is changing, but not the what.
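(To make the example concrete - and this is just an illustration in Python, not the language from the story - the "what" that stays constant across book, blog, and chatbot looks something like this:)

```python
# Python's access control is by naming convention, not keyword.
class Account:
    def __init__(self):
        self.balance = 0       # public: part of the class's interface
        self._ledger = []      # one underscore: internal by convention
        self.__pin = "1234"    # two underscores: name-mangled to _Account__pin

acct = Account()
print(acct.balance)            # fine: public attribute
print(acct._ledger)            # allowed, but signals "internal, hands off"
# print(acct.__pin)            # AttributeError: mangling hides the name here
print(acct._Account__pin)      # the mangled name is still reachable
```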


> Old days: get the O'Reilly book for that language and look up access modifiers in the index. 10 years ago: Google for a blog with an intro to the language; there will be a tip about what access modifiers can do. Today: ask ChatGPT.

The answer to this (throughout the ages) should be the same: read the authoritative source of information. The official API docs, the official language specification, the man page, the textbook, the published paper, and so on.

Maybe I am showing my age, but one of the more frustrating parts of being a senior mentoring a junior is when they come with a question or problem, and when I ask: “what does the official documentation say?” I get a blank stare. We have moved from consulting the primary source of information to using secondary sources (like O’Reilly, blogs and tutorials), now to tertiary sources like LLMs.


[Disclaimer: I'm a Gen Xer. Insert meme of Grandpa Simpson shouting at clouds.]

I think this is undoubtedly true from my observations. Recently, I got together over drinks with a group of young devs (most around half my age) from another country I was visiting.

One of the things I said, very casually, was, "Hey, don't sleep on good programming books. O'Reilly. Wiley. Addison-Wesley. MIT Press. No Starch Press. Stuff like that."

Well, you should've seen the looks on their faces. It was obvious that advice went over very poorly. "Ha, read books? That's hard. We'd rather just watch a YouTube video about how to make a JS dropdown menu."

So yeah, I get that "showing my age" remark. The discipline in this industry used to be that you shouldn't ask a senior a question before you'd read the documentation. If you had read the documentation and man pages, googled, etc., and still couldn't come up with an answer, then you could legitimately ask for a senior mentor's time. Otherwise, the answer from the greybeards would have been "Get out of my face, kid. Go RTFM."

That system that used to exist is totally broken now. When reading and understanding technical documentation is viewed as "old school", then you know we have a big problem.


I like your sentiment about "first principles" of documentation - go to the root source. But for most young technologists (myself included, long long ago), the official docs (man pages for POSIX, MSDN for Win32, etc.) are way too complex. For years, when I was in university, I tried to grasp GUI programming by writing C against the Win32 API. It was insane, and I did little more than type in code from a "big book of Win32 programming". Only when I finally tried Qt with C++ did the door of understanding open. Why? The number of simple examples the Qt docs provided; they really helped me understand GUI (event-driven) programming. Another 10 years went by before I knew enough about Win32 to write small but useful GUIs in pure C against the Win32 API. That's the very reason StackOverflow was so popular: people read the official docs and still don't understand... so they ask a question. The best questions include a snippet of code and ask about it.

To this day, I normally search on Google first, then try an LLM... the last place that I look is the official docs if my question is about POSIX or Win32. They are just too complex and require too much base knowledge about the ecosystem. As an interesting aside, when I first learned Python, Java, and C#, I thought their docs were as approachable as Qt. It was very easy to get started with "console" programming and later expand to GUI programming.


No. Just no.

If I have a problem with a USB datastream, the last place I'm going to look is the official USB spec. I'll be buried for weeks. The information may be there, but it will take me so long to find it that it might as well not.

The first place to look is a high quality source that has digested the official spec and regurgitated it into something more comprehensible.

[shudder] the amount of life that I've wasted discussing the meaning of some random phrase in IEC-62304 is time I will never get back!


> I can assure you that when I was young, I did not listen to older people's advice and I tried to do everything my own way.

Hot take: This reads like a person who was difficult to work with.

Senior people have responsibility, therefore in a business situation they have authority. Junior people who think they know it all don't like this. If there's a disagreement between a senior person and a junior person about something, they should, of course, listen to each other respectfully. If that's not happening, then one of them is not being a good employee. But if they are, then the supervisor makes the final call.


> Old days: get the O'Reilly book for that language and look up access modifiers in the index. 10 years ago: Google for a blog with an intro to the language; there will be a tip about what access modifiers can do. Today: ask ChatGPT. In my (somewhat contrived) example, the how is changing, but not the what.

The tangent to that is that how much one internalizes about the problem domain, and can apply later, is also changing. Hard-fought knowledge from the old days still shapes how I design systems today.

However, people who reach for ChatGPT to solve a problem today tend to make the same mistakes again the next time, since the information is so easy to access. It also makes the larger things more difficult: "how do you architect this larger system" is something you learn by building the smaller systems, so that their advantages and disadvantages become an inherent part of how you conceive of the system as a whole. Being able to have ChatGPT do it means people often don't think about the larger problem or how it fits together.

I believe it is harder for a junior who uses ChatGPT to advance to mid-level or senior than it was for a junior in the old days, because of that lack of retention of the problems and their solutions.


They’re going to get promoted anyway. The “senior” title will simply (continue to) lose meaning to inflation.

Yeah, I've got to agree with this hot take. Put yourself in the junior's shoes: if they weren't there, you'd be pulling it out of Claude Code yourself until you were satisfied enough with the output to start adding your "senior" touches. The fact is the way code is written has changed fundamentally, especially for kids straight out of college, and the answer is to embrace that everyone is using it, not all this shaming. If you're so senior, why not show the kid how to use the LLM right, so the work product is right from the start? Part of the problem seems to be that dinosaurs are suspicious of the tech, and so don't know how to mentor for it.

That being said, I'm a machine learning engineer, not a developer, and these LLMs have been a godsend. Assuming I do it correctly, there's just no way I could write a whole 10,000-line pipeline in under a week without them. While coding from outputs and error-driven iteration is the wrong way for software juniors, it's fine for my AI work. It comes down to knowing when there's a silent error if you haven't been through everything line by line. I've been caught before - I'm not immune, and it's embarrassing - but ever since GPT was in preview I have made it my business to master it.

I have a friend who is a dev, a very senior one at that, who spins up 4 Claudes at once and does the whole enterprise's work. He's a "Senior AI Director" with nobody beneath him, not a single direct report, and no knowledge of AI or ML, to my chagrin.

So now I'm whining too...


This isn’t a question of the senior teaching the junior how to use the LLM correctly.

Once you’re a senior you can exercise judgement on when/how to use LLMs.

When you’re a junior you haven’t developed that judgement yet. That judgement comes from consulting documentation, actually writing code by hand, seeing how you can write a small program just fine, but noticing that some things need to change when the code gets a lot bigger.

A junior without judgement isn’t very valuable unless he/she is working hard to develop that judgement. Passing assignments through to the LLM does not build judgement, so it’s not a winning strategy.


There are some definite signs of over-reliance on AI: emojis in comments, updates completely unrelated to the task at hand, and, when you ask "why did you make this change?", typically no answer.

I don't mind if AI is used as a tool, but the output needs to be vetted.
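(A toy illustration of how cheap the first tell is to check - a hypothetical script, not a real linter; the function name and the emoji codepoint ranges are my own choices:)

```python
# Toy pre-review heuristic: flag added diff lines containing emoji,
# a common (if imperfect) tell of unvetted LLM output.
def flag_emoji_lines(diff_text: str) -> list[str]:
    # Rough emoji codepoint ranges (pictographs + misc symbols).
    emoji_ranges = [(0x1F300, 0x1FAFF), (0x2600, 0x27BF)]
    def has_emoji(line: str) -> bool:
        return any(lo <= ord(ch) <= hi
                   for ch in line
                   for lo, hi in emoji_ranges)
    return [line for line in diff_text.splitlines()
            if line.startswith("+") and has_emoji(line)]

diff = "+ # All tests pass! 🚀\n- old_code()\n+ new_code()"
print(flag_emoji_lines(diff))  # only the rocket line is flagged
```

Of course this catches only one symptom; the real vetting is still asking "why did you make this change?" and expecting an answer.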


What is wrong with emojis in comments? I see no issue with it. Do I do it myself? No. Would I pushback if a young person added emojis to comments? No. I am looking at "the content, not the colour".

I think GP may be thinking that emojis in PR comments (plus the other red flags they mentioned) are the result of copy/paste from LLM output, which might imply that the person who does mindless copy/pasting is not adding anything and could be replaced by LLM automation.

The point is that heavy emoji use means AI was likely used to produce a changeset, not that emojis are inherently bad.

The emojis are not a problem themselves. They're a warning sign: slop is (probably) present, look deeper.

Exactly. Use LLMs as a tutor, a tool, and make sure you understand the output.

My favorite prompt is "your goal is to retire yourself"

Just like anything else, anyone who did the work themselves should be able to speak intelligently about it and the decisions behind its idiosyncrasies.

For software, I can imagine a process where junior developers create a PR and then walk through it with another engineer side by side. The short-term incentive would be that they have to be able to do that, or else they'd get exposed.


Is/was copy/pasting from Stackoverflow considered harmful? You have a problem, you do a web search and you find someone who asked the same question on SO, and there's often a solution.

You might be specifically talking about people who copy/paste without understanding, but I think it's still OK-ish to do that, since you can't make an entire [whatever you're coding up] by copy/pasting snippets from SO like you're cutting words out of a magazine for a ransom note. There's still thought involved, so it's more like training wheels that you eventually outgrow as you get more understanding.


> Is/was copy/pasting from Stackoverflow considered harmful?

It at least forces you to tinker with whatever you copied over.


Pair programming! Get hands-on with your junior engineers and their development process. Push them to think through things and not just ask the LLM everything.

I've seen some overly excessive pair programming initiatives out there, but it does baffle me that more people who struggle with this don't do it. Take even just 30 minutes to pair program on a problem and see their process, and you can reveal so much.

But I suppose my question is rhetorical. We're laying off hundreds of thousands of engineers and making existing ones do the work of 3-4 engineers. Not much time to help the juniors.


Having dealt with a few people who just copy/pasted from Stackoverflow, I really feel that using an LLM is an improvement.

That is, at least for the people who don't understand what they're doing: the LLM tends to produce something I can at least turn into something useful.

It might be reversed, though, for people who know what they're doing. If they know what they're doing, they might theoretically be able to put together some Stackoverflow results that make sense and build something up from that better than what gets generated by an LLM. (I am not asserting this would happen, just thinking it might be the case.)

However, I don't know, as I've never known anyone who knew what they were doing who also just copy/pasted from Stackoverflow or delegated to an LLM significantly.


> Is there any way to distinguish between these two categories?

Yes, it should be obvious. At least at the current state of LLMs.

> There's the long term incentive, but is there a short term one?

The short term incentive is keeping their job.


> This is especially annoying when you get back a response in a PR "Yes, you're right. I have pushed the fixes you suggested."

I've learnt that saying this exact phrase does wonders when it comes to advancing your career. I used to argue against stupid ideas but not only did I achieve nothing, but I was also labelled uncooperative and technically incompetent. Then I became a "yes-man" and all problems went away.


I was attempting to mock Claude's "You are absolutely right" style of response when corrected.

I have seen responses to PRs that appear to be a copy and paste of my feedback into it and a copy and paste of the response and fixes into the PR.

It may be that the developer is incorporating the mannerisms of Claude into their own speech... that would be something to delve into (that was intentional). However, more often than not in today's world of software development, such responses are more likely to indicate a copy and paste of LLM-generated content.


> However, more often than not in today's world of software development such responses are more likely to indicate a copy and paste of LLM generated content.

This is nothing new. People rarely have independent thoughts; usually they just parrot whatever they've been told to parrot. LLMs created a common world-wide standard for this parroting, which makes the phenomenon more evident, but it doesn't change the fact that it existed before LLMs.

Have you ever had a conversation with an intelligent person and thought "wow that's refreshing"? Yeah. There's a reason why it feels so good.


This. May you have great success! The PR comments that I get are so dumb. I can put the most obvious bugs in my code, but people are focused on the colour of the bike shed. I am happy to repaint the bike shed whatever colour they need it to be!

> Part of the challenge (and I don't have an answer either) is there are some juniors who use AI to assist... and some who use it to delegate all of their work to.

This is not limited to junior devs. I had the displeasure of working with a guy who was hired as a senior dev and heavily delegated any work he did. He failed to do even the faintest review of what the coding agent produced, and of course did zero testing. At one point these stunts resulted in a major incident, where one of these glorious PRs pushed code that completely inverted a key business rule and resulted in paying customers being denied access to a paid product.

Sometimes people are slackers with little to no ownership or pride in their craftsmanship, and just stumbled upon a career path they are not very good at. They start as juniors, but they can idle long enough to waddle their way to senior positions. This is not an LLM problem, nor caused by one.


> This is especially annoying when you get back a response in a PR "Yes, you're right. I have pushed the fixes you suggested."

And then in the next PR, you have to request the exact same changes.




