Typing is not fun. It robs me of my craft of holding my pencil and feeling it press against the paper with my hand... LLMs are merely a tool to achieve a similar end result. The different aspects of software development are an art. But even with LLMs, I critique and care about the code just as much as if I were writing it line by line myself. I have had more FUN being able to get all of my ideas on paper with LLMs than I have had over years of banging my head against a keyboard going down the rabbit hole on production bugs.
It's not about typing, it's about writing. You don't type, you write. That's the paradigm. You can write with a pen or you can type on a keyboard. Different ways, same goal. You write.
Yesterday I had a semi-coherent idea for an essay. I told it to an LLM and asked for a list of authors and writings where similar thoughts have been expressed - and it provided a fantastic bibliography. To me, this is extremely fun. And reading similar works to help articulate an idea is absolutely part of writing.
"LLMs" are like "screens" or "recording technology". They are not good or bad by themselves - they facilitate or inhibit certain behaviors and outcomes. They are good for some things, and they ruin some things. We, as their users, need to be deliberate and thoughtful about where we use them. Unfortunately, it's difficult to gain wisdom like this a priori.
As someone said, "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."
Sadly all the AI is owned by companies that want to do all your art and writing so that they can keep you as a slave doing their laundry and dishes. Maybe we'll eventually see powerful LLMs running locally so that you don't have to beg some cloud service for permission to use it in the ways you want, but at this point most people will be priced out of the hardware they'd need to run it anyway.
However you feel about LLMs or AI right now, there are a lot of people with way more money and power than you have who are primarily interested in further enriching and empowering themselves and that means bad news for you. They're already looking into how to best leverage the technology against you, and the last thing they care about is what you want.
As a former artist, I can tell you that you will never have good or sufficient ideas for your art or writing if you don’t do your laundry and dishes.
A good proxy for understanding this reality is that wealthy people who pay people to do all of these things for them have almost uniformly terrible ideas. This is even true for artists themselves. Have you ever noticed how the albums all tend to get worse the more successful the musicians become?
It’s mundanity and tedium that forces your mind to reach out for more creative things and when you subtract that completely from your life, you’re generally left with self-indulgence instead of hunger.
Only if you are already wealthy or fine with finding a new job.
If I were still employed, I would also not want my employer to tolerate peers of mine rejecting the use of agents in their work out of personal preference. If colleagues were allowed to produce less work for equal compensation, I would want to be allowed to take compensated time off work by getting my own work done in faster ways - but that never flies with salaried positions, and getting work done faster is greeted with more work to do sooner. So it would be demoralizing to work alongside and be required to collaborate with folks who are allowed to take the slow and scenic route if it pleases them.
In other words, expect your peers to lobby against your right to deny agent use, as much as your employer.
If what you really want is more autonomy and ownership over your work, rejecting tool modernity won't get you that. It requires organizing. We learned this lesson already from how the Luddite movement and Jacobin reaction played out.
You’re assuming implicitly that the tool use in question always results in greater productivity. That’s not true across the board for coding agents. Let me put this another way: 99% of the time, the bottleneck is not writing code.
Why limit this to AI? There have been lots of programming tools which have not been universally adopted, despite offering productivity gains.
For example, it seems reasonable that using a good programming editor like Emacs or vi would offer a 2x (or more) productivity boost over using Notepad or Nano. Why hasn't Nano been banned, forbidden from professional use?
Maybe, but probably not. For me, an early goal of writing is to get my thoughts in order. A later goal is to discuss the writing with people, which can only happen in a high-quality way if my thoughts are in order. Achieving goals is fun.
Whether the LLM could do a better job than me at writing the essay is a separate question...I suspect it probably could. But it wouldn't be as fun.
I write what I want the LLM to do. Generating a satisfactory prompt is sometimes as much work as writing the code myself - it just separates the ideation from the implementation. LLMs are the realization of the decades-long search for natural language programming, dating at least as far back as COBOL. I personally think they are great - not 100% of the time, just as a tool.
A director is the most important person to the creation of a film. The director delegates most work (cameras, sets, acting, costumes, makeup, lighting, etc.), but can dive in and take low-level/direct control of any part if they choose.
Have you actually done some projects with e.g. Claude Code?
Completely greenfield, entirely up to yourself?
Because in my experience, you're completely wrong.
I mean, I get where you're coming from if you imagine it like the literal vibe coding this all started with, but that's just a party trick and falls off quickly as the project gets more complex.
To be clear, simple features in an existing project can often be done simply - with a single prompt making changes across multiple files - but that only works under _some circumstances_, and bigger features / more in-depth architecture is still necessary to get the project to work according to your ideas.
And that part needs you to tell the LLM how it should do it - because otherwise you're rolling the dice on whether it's gonna be a clusterfuck after the next 5 changes.
LLMs are generative and do not have a fixed output in the way past autocompletes have. I know when I accept "intellisense" or whatever editor tools are provided to me, it's using a known-set of completions that are valid. LLMs often hallucinate and you have to double-check everything they output.
I don't know what autocomplete you're using, but mine often suggests outright invalid words given the context. I work around this by simply not accepting them.
The high failure rate of LLM-based autocompletes has had me avoid those kinds of features altogether, as they waste my time and break my focus to double-check someone else's work. I was efficient before they were forced into every facet of our lives three years ago, and I'll be just as efficient now.
Personally, I configure autocomplete so that LSP completions rank higher than LLM completions. I like it because it starts with known/accurate completions and then gracefully degrades to hallucinations.
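Concretely, the ordering logic is something like this toy sketch (the Candidate type and source tags are made up for illustration, not any real editor's API):

```python
# Toy sketch of the ranking idea: known-valid LSP completions always sort
# ahead of speculative LLM ones, which only fill in below them.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    source: str   # "lsp" (known-valid) or "llm" (speculative)
    score: float  # source-local relevance score

def rank(candidates: list[Candidate]) -> list[Candidate]:
    # Sort by source priority first, then by relevance within each source.
    priority = {"lsp": 0, "llm": 1}
    return sorted(candidates, key=lambda c: (priority[c.source], -c.score))

# The LSP completion wins despite its lower raw score:
print(rank([Candidate("do_stuf_maybe()", "llm", 0.9),
            Candidate("do_stuff()", "lsp", 0.7)]))
```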
Who'd want an autocomplete that randomly invents words and spellings while presenting them as real? It's annoying enough when autocomplete screws up every other ducking message I send by choosing actual words inappropriately. I don't need one that produces convincing looking word salad by shoving in lies too.
Autocomplete annoys me, derails my train of thought, and slows me down. I'm happy that nobody forces me to use it. Likewise, I would greatly resent being forced to use LLMs.
Completely different context though - you have to feed through your own data for autocomplete and even then it’s based on your own voice as a writer. When you no longer have to write - nor think about those things you’re writing - then your voice and millions of others will be drowned out by LLM trash.
100% this. I've had more fun using Claude Code because I get to spend more of my time doing the fun parts (design, architecture, problem solving, etc.) and less of it typing, fixing small compilation errors, and looking up API docs to figure out that query parameters use camelCase instead of underscores.
I'd rather spend my time designing and writing code than spending it debugging and reformatting whatever an LLM cobbled together from Stack Overflow and GitHub. 'Design, architecture, problem solving, etc.' all take a backseat when the LLM barfs out all the code and you have to either spend your time convincing it to output what you could have written yourself anyway or play QA fixing its slop all day long.
Back when I would ask ChatGPT to write code, I would agree with you, but using Claude Code's planning mode is a night and day difference. You write out a list of specs, Claude writes up a plan (that for writing backend APIs has always been just about perfect for me if my spec is solid), and then Claude executes that plan to almost perfection, with small nudges along the way.
If you're doing anything UI-based, it hasn't performed well for me, but for certain areas of software development, it's been an absolute dream.
This is why I exclusively write C89 when handling untrusted user input. I simply never make mistakes and so I don't need to worry about off-by-ones or overflows or memory safety or use after frees.
Garbage collection and managed types are for idiots who don't know what the hell they're doing; I'm leet af. You don't need to worry about accidentally writing heartbleed if you simply don't make mistakes in the first place.
I had one of those shower epiphanies a couple mornings ago... And I fed it into a couple LLMs while I was playing a video game (taking some time over the holidays to do that), and by the afternoon I had that idea as working code: ~4500 LOC with that many more in tests.
People keep saying "I want LLMs to do the laundry so I can do art, not to do the laundry while LLMs do art." This is an example of LLMs doing the coding so I can rekindle a joy of gaming, which feels like it's leaning in the right direction.
For me I can use LLMs to go from "hmm, I wonder if..." to a working MVP while I take the dogs for a walk.
Either I launch a task before I go, or start one with Claude Code Web on my phone.
Today's project was a DVD/Blu-ray library project I've been thinking of since the app I used before went from buy-once to subscription-based.
5-10 minutes of writing the initial prompt, and now I have a self-hosted web application that lets me take pics of the front and back covers of a DVD on my shelf; it feeds them to an LLM to detect which movie it is, then uses an existing (also LLM-engineered) project of mine to enrich the data from TMDB/OMDB.
About an hour total and now I just need to put on a podcast, sit next to my DVD collection and grab pics of each for processing.
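For the curious, the pipeline is roughly shaped like the sketch below; the model name, prompt, and API details are stand-ins for illustration, not the actual project code:

```python
# Hypothetical sketch of the cover-photo pipeline: two photos go to a
# vision-capable LLM to identify the title, which is then enriched via TMDB.
import base64
import requests

def identify_movie(front_jpg: str, back_jpg: str, api_key: str) -> str:
    images = [base64.b64encode(open(p, "rb").read()).decode()
              for p in (front_jpg, back_jpg)]
    content = [{"type": "text",
                "text": "Reply with only the title of the movie on this DVD."}]
    content += [{"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b}"}}
                for b in images]
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": "gpt-4o-mini",  # stand-in for whatever model you use
              "messages": [{"role": "user", "content": content}]},
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"].strip()

def enrich_from_tmdb(title: str, tmdb_key: str) -> dict:
    # TMDB's public movie-search endpoint; take the top match's metadata.
    r = requests.get("https://api.themoviedb.org/3/search/movie",
                     params={"api_key": tmdb_key, "query": title}, timeout=30)
    return r.json()["results"][0]
```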
Unironically this: isn't writing on paper more fun than typing? Isn't painting with real paint and canvas more satisfying than with a stylus and an iPad? Isn't it more fun to make a home-cooked meal for your family than ordering out? Who stomps into the holiday celebration and tells mom that it'd be a lot more efficient to just get catering?
Isn't there something good about being embodied and understanding a medium of expression rather than attempting to translate ideas directly into results as quickly as possible?
Yes, exactly: I'm not saying everyone loves to paint or cook or whatever, but that a lot of people do, and it's weird and bad for the response to this kind of article, in which someone shares that they are losing something they enjoyed, to be some form of "well, not everyone enjoys that."
To some people this is a gain, to some people this is a loss. Objectively it is changing things, and I can agree on having empathy for those whom it affects negatively.
As an aside from just this piece, I feel like we are in a period of low empathy, understanding, and caring for others.
If you get your enjoyment from the process of cooking, by all means cook. But if you enjoy being with people and just eating food, catering is better.
If your goal is to get your thoughts into a medium as fast as possible, use a stylus or a keyboard. If you enjoy the process of writing your thoughts down, use a fountain pen.
Or the easiest comparison: coffee. Do you want your fix of caffeine as fast as possible? Grab some gas station slop on the go for 0.99€. But if you're more about relaxing and slowly enjoying the process of selecting the optimal beans for this particular day, grinding them to perfection and brewing them just right with a pour-over technique or a fancy Italian espresso machine you refurbished yourself - then do that.
Same with code. I want to solve a problem I have or a client has. I get enjoyment from solving the problem. Having to tell the computer how to do that with a programming language is just a boring intermediate step on the way.
Radical change in the available technology is going to require radical shifts in perspective. People don't like change, especially if it involves degrading their craft. If they pivot and find the joy in the new process, they'll be happy, but people far more often prefer to be "right" and miserable.
I have some sympathy for them, but AI is here to stay, and it's getting better, faster, and there's no stopping it. Adapt and embrace change and find joy in the process where you can, or you're just going to be "right" and miserable.
The sad truth is that nobody is entitled to a perpetual advantage in the skills they've developed and sacrificed for. Expertise and craft and specialized knowledge can become irrelevant in a heartbeat, so your meaning and joy and purpose should be in higher principles.
AI is going to eat everything - there will be no domain in which it is better for humans to perform work than it will be to have AI do it. I'd even argue that for any given task, we're pretty much already there. Pick any single task that humans do and train a multibillion dollar state of the art AI on that task, and the AI is going to be better than any human for that specific task. Most tasks aren't worth the billions of dollars, but when the cost drops down to a few hundred dollars, or pennies? When the labs figure out the generalization of problem categories such that the entire frontier of model capabilities exceeds that of all humans, no matter how competent or intelligent?
AI will be better, cheaper, and faster in any and every metric of any task any human is capable of performing. We need to figure out a better measure of human worth than the work they perform, and it has to happen fast, or things will get really grim. For individuals, that means figuring out your principles and perspective, decoupling from "job" as meaning and purpose in life, and doing your best to surf the wave.
> Expertise and craft and specialized knowledge can become irrelevant in a heartbeat, so your meaning and joy and purpose should be in higher principles.
My meaning could be in higher purposes; however, I still need a job to enable/pursue those things. If AI takes the meaning out of your craft, it also takes away most people's ability to use that craft to pursue higher-order principles, especially if you aren't in the US/big-tech scene with significant equity to "make hay while the sun is still shining".
I have so many personal projects that I've started over the years, and then left to wither on the vine. I've been able to complete a dozen or so over the last 2 years, and work on a handful consistently over that same period, using AI heavily, and it's a lot of fun. I can work on the high-level ideas, create projects, spitball with various characters and simulations, and it's like having a team of digital minions and henchmen. There is fun to be had, and you can use AI well or poorly, so you can develop your own skills while playing with the systems.
There's still just something magical about speaking with a machine - "put the man's face from the first picture onto the cookie tin in the second picture, make sure he still looks like Santa!" You can have a vague idea or inkling about a thing, throw it at the AI, and you've got a sounding board to refine your thoughts and chase down intuitions. I totally understand the frustration people are having, but at some point, you gotta put down the old tools and learn to use the new. You're only hurting yourself if you stay angry and frustrated with the new status quo.
Yeah, but about personal projects we're probably different. They don't always involve a computer, and my joy is in the making, not in the completing. Withering on the vine is fine for me.
Now back to computing, since I've been doing this for 25 years as my main job and it's probably what you thought I had in mind:
> at some point, you gotta put down the old tools and learn to use the new
I have the habit of learning new tools out of curiosity and only keeping the ones that actually solve problems I have. Over time I have kept some (example: DVCS) and ditched others I was told were the best thing since sliced bread (example: containers). So far, conversational AI has been very good at replacing Google/Stack Overflow. But that's about it.
I'm sure I'll use more of this stuff as time goes by, but there is really no need to rush things. I'll let early adopters adopt and I'll harvest mature solutions in due time.
> LLMs are merely a tool to achieve a similar end result.
Advanced tools are never "merely tools".
Tools that are pushed onto people, come to be expected to even participate in social/professional life, and take over knowledge-based tasks and creative aspects, are even less "merely tools".
We are not talking of a hammer or a pencil here. An LLM user doesn't outsource typing, they outsource thinking.
I just had Claude Code fine-tune a reranker model to improve it significantly across a large set of evals. I chose the model to fine-tune, chose the loss function, created the underlying training dataset for the re-ranking task, and designed the evals. What thinking did I outsource, exactly?
I guess I did not waste time learning the failure-prone arcana of how to schedule training jobs on HuggingFace, but that also seems to me like a net benefit.
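For anyone curious what those human decisions look like in practice, here is a minimal sketch using sentence-transformers' classic CrossEncoder.fit API; the base model, training pairs, and hyperparameters below are illustrative placeholders, not my actual setup:

```python
# Minimal reranker fine-tuning sketch (classic sentence-transformers API).
# Every placeholder here - base model, pairs, loss, data - is a human decision.
from torch.utils.data import DataLoader
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder import CrossEncoder

# Base model choice: a small off-the-shelf cross-encoder reranker.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2", num_labels=1)

# Curated training data: (query, passage) pairs labeled relevant/irrelevant.
train_examples = [
    InputExample(texts=["what is a reranker",
                        "A reranker scores query-passage pairs."], label=1.0),
    InputExample(texts=["what is a reranker",
                        "Bananas are rich in potassium."], label=0.0),
    # ... the rest of your curated positives and negatives
]

# With num_labels=1 the default loss is binary cross-entropy;
# pass loss_fct=... to override that choice.
model.fit(
    train_dataloader=DataLoader(train_examples, shuffle=True, batch_size=16),
    epochs=2,
    warmup_steps=100,
)
model.save("reranker-finetuned")
```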
I was about to write something really emotional and clearly lacking any kind of self-reflection; then I read you again, and I admit that a lot of this is true.
I feel like it may be something inherently wrong in the interface more than in the actual expression of the tool. I'm pretty sure we are in some painful era where LLMs, quite frankly, help a ton with an absurd amount of stuff, underlining "ton" and "stuff" because it really is about "everything".
But it also generates a lot of frustration; I'm not convinced by the conversational status quo, for example, and I could easily see something inspired directly by what you said about drawing. There is something here about the experience, and it's really difficult to work on because it's inherently personal and may require actually spending time and accumulating frustration to finally be able to express it through something else.
Speaking as someone who despises writing freehand, and loves typing... what? I understand what you're trying to say, but you lost me very quickly I'm afraid. Whatever tool I use to write I'm still making every choice along the way, and that's true if I'm dictating, using a stylus to press into a clay tablet, or any other medium. An LLM is writing for me based on prompts, it's more analogous to hiring a very stupid person to write for you, and has very little to do with pens or keyboards.