My sheer productivity boost from these models is miraculous. It's like upgrading from a text editor to a powerful IDE. I've saved a mountain of hours just by removing tedious time sinks -- one-off language syntax, remembering patterns for some framework, migrating code, etc. And this boost applies to nearly all of my knowledge work.
Then I see contrarians claiming that LLMs are literally never useful for anyone, and I get "don't believe your lying eyes" vibes. At this point, such sentiments feel either willfully ignorant, or said in bad faith. It's wild.
> At this point, such sentiments feel either willfully ignorant, or said in bad faith.
I feel exactly the same, but in the opposite direction.
As someone who’s been programming for 17 years and working professionally for 10, I’m unable to get any huge productivity boosts from AI tools.
They’re better than Google + Stack Overflow for asking random questions within a specific context, and they’re good for repetitive, but not identical, syntax. That’s about where the gains end for me.
Maybe at this point I’m just that fast at looking up documentation. Maybe the languages/problems I’m facing aren’t well represented in the training data. Either way, I just don’t see this amazing advancement.
I’d really love to see, live, someone programming who really gets these big productivity gains.
Right, in my experience the time it takes to verify that the code it wrote for you is correct is more than the time it would take to just write it in the first place. A big exception is if you're working in a new domain (e.g., a new language or framework). Then it's obviously much faster, and I do derive value from it. But I don't spend a very large % of my time doing that.
I would speculate it's a productivity boost for programmers specifically working in areas that they are new to (or haven't really mastered yet). One question I have is whether overly relying on LLMs will reduce the ability to master a domain, and thus hurt your long-term skill. It might seem silly, like complaining that no one knows assembly anymore because of compilers, but I think it's different than just another layer of abstraction.
I have it, tried it for a while. I have it turned mostly off now except for rare boilerplate-heavy cases.
It kept generating annoyingly wrong code: things with subtly misleading names, missing edge cases, ignoring immediate same-file context, etc. I found that it slowed me down, so I turned it off.
Same experience, but with TypeScript and Go. They gave me a 60-day trial (IIRC); I used it for two days, disabled it for the next 58, and after that removed it from the editor.
I get really good results with TypeScript and Python. Like it knows exactly what I want to do, I feel like I think exactly as Copilot does. Maybe I am the statistical average...
Makes me wonder if people who don't like Copilot output will not like my natural output as well.
The projects I do are mostly frontend in React and backend with TypeScript/Node.js.
I have around 10+ years of professional experience, although I did on/off hobby coding before that, starting about 15 years ago.
It's mostly API endpoints, calling a database, third party APIs, data transformation, aggregation type of things.
Then either UI according to what designers provide or whatever I want to do for my side projects.
Of course, it's a wildly bigger productivity multiplier for side projects, since there it's mostly about typing things out: you know exactly what you want to do, and being a little off doesn't matter.
I don't want to share any of my actual code right now, but one example is a React component that needs to fetch some sort of data, e.g. using @tanstack/react-query. It does the loading-handling and error-handling boilerplate for me, some of which I change to what I specifically need for that situation, but I need very few keystrokes to get the initial boilerplate out, which I then edit, and during those edits it also gives me decent suggestions. It will even create the component prop types based on the args I pass to the component. A rough sketch is below.
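To give a concrete feel for it (this is a made-up sketch, not my real code; `UserList`, `fetchUsers`, and the endpoint are all invented names), the boilerplate I mean looks roughly like this:

```tsx
// Hypothetical sketch of the react-query boilerplate Copilot tends to
// autocomplete; component and endpoint names are invented.
import { useQuery } from "@tanstack/react-query";

type User = { id: string; name: string };

async function fetchUsers(): Promise<User[]> {
  const res = await fetch("/api/users");
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

export function UserList() {
  const { data, isLoading, error } = useQuery({
    queryKey: ["users"],
    queryFn: fetchUsers,
  });

  // The loading/error branches are the boilerplate I usually tweak per case.
  if (isLoading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong.</p>;

  return (
    <ul>
      {data?.map((user) => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  );
}
```

Nearly all of that comes out with a few keystrokes, and then I edit the parts that are specific to the situation.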
Then with the backend, it's really good at data transformations, e.g. combining different datasets, reducing, and so on; roughly the kind of thing sketched below.
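For instance (again a made-up sketch, not real project code; `Order`, `Customer`, and `totalsByCustomer` are invented), joining two datasets and aggregating with a reduce is exactly the kind of thing it autocompletes well:

```typescript
// Hypothetical sketch: join orders to customers and sum totals per customer.
type Order = { customerId: string; total: number };
type Customer = { id: string; name: string };

function totalsByCustomer(orders: Order[], customers: Customer[]) {
  // Index customer names by id for the join.
  const names = new Map(customers.map((c) => [c.id, c.name] as const));
  // Aggregate order totals per customer name.
  return orders.reduce<Record<string, number>>((acc, order) => {
    const name = names.get(order.customerId) ?? "unknown";
    acc[name] = (acc[name] ?? 0) + order.total;
    return acc;
  }, {});
}
```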
How well it picks the correct libraries and patterns depends on the project and, I think, on how much I've navigated around. I'm not fully sure how exactly the context is passed, so usually I'll feel it out and adapt code where necessary.
Yes, I find Copilot is nice for things like TanStack Query. It’s like better snippets.
At my job we have this pretty clean SOA-type architecture backed by MongoDB.
Copilot has trouble building the more complicated, domain-specific queries on its own, I’ve found.
I do occasionally ask ChatGPT how to write a certain query in a general case and apply that to what I’m writing, something like the sketch below. I also don’t really like mongosh’s docs.
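As an example of the general-case pattern I mean (a hypothetical sketch, not our actual schema; the `shop` database, `orders` collection, and field names are all invented), a grouped aggregation via the official Node.js driver:

```typescript
// Hypothetical sketch using the official MongoDB Node.js driver:
// total order value per customer over the last 30 days.
import { MongoClient } from "mongodb";

async function recentTotalsByCustomer(uri: string) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const orders = client.db("shop").collection("orders");
    const since = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);
    return await orders
      .aggregate([
        { $match: { createdAt: { $gte: since } } },
        { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
        { $sort: { total: -1 } },
      ])
      .toArray();
  } finally {
    await client.close();
  }
}
```

The general shape like this is easy to get from ChatGPT; mapping it onto our actual domain models is the part Copilot struggles with.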
Hi there - I'm a PM at MongoDB who works on the MongoDB Shell. I'm curious to hear your thoughts on the issues you're currently facing with the mongosh docs and how we could make them better for you. Thanks for taking the time to leave feedback!
I tried it for a while and thought it was helping a lot. Then I happened to use an IDE without it and realized it was increasing my rate of syntax tokens per hour but reducing the rate of features implemented per hour. In particular I was constantly rewriting boilerplate instead of ever writing helper functions.
> I see contrarians claiming that LLMs are literally never useful for anyone
While I don't doubt that there's at least one person that has said this, what you're saying doesn't conflict with the things I and many others in the "skeptic" camp have said. LLMs are useful for a very specific set of tasks. The tasks you've listed are a tiny sliver of all the tasks that AI could potentially be doing. Would it be a good idea to consult an LLM if your mother is passed out on the floor? Probably not. The problem I have is with extrapolating from the current successes to conclude that many more tasks will be done by AI in five years.
Thing is, I'm used to hearing a very similar sentiment on how e.g. using vim keybindings is so literally going to make me a 10x 100x whatever rockstar developer - and it's like what, enabling me to edit text a bit faster? And it's always anecdotes that yeah, from-qualia you feel so fast. But from-qualia I run like a marathon runner and sound like a radio host.
I personally did find some use cases for it and it does a decent job of cutting out minor gruntwork for me. But the experience itself screams to me that whatever gains I'm feeling I'm getting are all in my head.
> using vim keybindings is so literally going to make me a 10x 100x whatever rockstar developer - and it's like what, enabling me to edit text a bit faster?
Yes, to me LLMs are exactly like this: going from nano to vim.
I don't think basic vim usage (which is all I know, really) makes anyone super efficient. I don't think typing/editing speed is generally an important factor in programmer productivity or 'coding speed'.
It's just that every time I use nano it's (a) unintentional, as it's opened via EDITOR; (b) sort-of coerced, because most distros that install it by default also think it's somehow too much to install Vim or Emacs alongside it; and (c) extremely, painfully awkward, because all the other editors I use, I've invested at least a couple years of practice into.
If I spent a year using nano every day, and if I evolved a config file and read the manual during that time, I might eventually reach a place where using nano didn't feel cumbersome and irritating, but why would I do that if I already use Emacs and Vim every day? If I learn a 'new' editor it's going to be something extensible that I could see myself programming in every day: Emacs without evil; or one of the newer modal editors with a reversed sentence order, like kakoune and Helix; or, hell, VSCode.
So nano is likely doomed to remain forever cumbersome and irritating for me, somewhere on the level of typing on a touchscreen instead of a real keyboard.
I'm a contrarian who believes your anecdote, and could even imagine that 5% of LLM users feel the same way, but thinks (a) these systems are about half as good as they're ever going to get, (b) we're past the point of diminishing returns, and (c) what we do have isn't worth the energy costs of running it, let alone creating it in the first place.
I think there may be a set of people who have figured out (1) how to interact with LLMs and (2) what in their lives is improved by interacting with LLMs. I am in the group that has not found the best use case for my own life, and I have never needed them for improving anything I need to get done. Always looking for suggestions, though!