Hacker News | new | past | comments | ask | show | jobs | submit | login

It's not about typing, it's about writing. You don't type, you write. That's the paradigm. You can write with a pen or you can type on a keyboard. Different ways, same goal. You write.

LLMs code for you. They write for you.





Yesterday I had a semi-coherent idea for an essay. I told it to an LLM and asked for a list of authors and writings where similar thoughts have been expressed - and it provided a fantastic bibliography. To me, this is extremely fun. And reading similar works to help articulate an idea is absolutely part of writing.

"LLMs" are like "screens" or "recording technology". They are not good or bad by themselves - they facilitate or inhibit certain behaviors and outcomes. They are good for some things, and they ruin some things. We, as their users, need to be deliberate and thoughtful about where we use them. Unfortunately, it's difficult to gain wisdom like this a priori.


As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".

Sadly all the AI is owned by companies that want to do all your art and writing so that they can keep you as a slave doing their laundry and dishes. Maybe we'll eventually see powerful LLMs running locally so that you don't have to beg some cloud service for permission to use it in the ways you want, but at this point most people will be priced out of the hardware they'd need to run it anyway.

However you feel about LLMs or AI right now, there are a lot of people with way more money and power than you have who are primarily interested in further enriching and empowering themselves and that means bad news for you. They're already looking into how to best leverage the technology against you, and the last thing they care about is what you want.


As a former artist, I can tell you that you will never have good or sufficient ideas for your art or writing if you don’t do your laundry and dishes.

A good proxy for understanding this reality is that wealthy people who pay others to do all of these things for them have almost uniformly terrible ideas. This is even true for artists themselves. Have you ever noticed how albums tend to get worse the more successful the musicians become?

It’s mundanity and tedium that forces your mind to reach out for more creative things and when you subtract that completely from your life, you’re generally left with self-indulgence instead of hunger.


Well put.

And dishes and laundry can be enjoyable zen moments. One only suffers by perceiving them as chores.

Some people want all yang without any yin.


You don't have to use them.

Only if you are already wealthy or fine with finding a new job

If I were still employed, I would also not want my employer to tolerate peers of mine rejecting the use of agents in their work out of personal preference. If colleagues were allowed to produce less work for equal compensation, I would want to be allowed to take compensated time off by getting my own work done faster - but that never flies with salaried positions, where getting work done faster is just greeted with more work to do sooner. So it would be demoralizing to work alongside, and be required to collaborate with, folks who are allowed to take the slow and scenic route if it pleases them.

In other words, expect your peers to lobby against your right to deny agent use, as much as your employer.

If what you really want is more autonomy and ownership over your work, rejecting tool modernity won't get you that. It requires organizing. We learned this lesson already from how the Luddite movement and Jacobin reaction played out.


You’re assuming implicitly that the tool use in question always results in greater productivity. That’s not true across the board for coding agents. Let me put this another way: 99% of the time, the bottleneck is not writing code.

Why limit this to AI? There have been lots of programming tools which have not been universally adopted, despite offering productivity gains.

For example, it seems reasonable that using a good programming editor like Emacs or vi would offer a 2x (or more) productivity boost over using Notepad or Nano. Why hasn't Nano been banned, forbidden from professional use?


Very well put

You're wrong in saying so. Many companies are quite literally mandating their use; do a quick search on HN.

That's not how technology works in a society.

When I do dishes by hand I think all kinds of interesting thoughts.

Anyway, we've had machines that do our dishes and laundry for a long while now.


We have machines that only do some parts of these tasks.

yet some people still do them by hand…

>"LLMs" are like "screens" or "recording technology". They are not good or bad by themselves

Screens are absolutely not neutral and are bad by themselves. Might be a bad we've become used to, but they are a bad.


So finding out information was fun for you. Would it also be fun if said LLM wrote your essay for you based on your semi-coherent idea?

Maybe, but probably not. For me, an early goal of writing is to get my thoughts in order. A later goal is to discuss the writing with people, which can only happen in a high-quality way if my thoughts are in order. Achieving goals is fun.

Whether the LLM could do a better job than me at writing the essay is a separate question...I suspect it probably could. But it wouldn't be as fun.


I write what I want the LLM to do. Generating a satisfactory prompt is sometimes as much work as writing the code myself - it just separates the ideation from the implementation. LLMs are the realization of the decades-long search for natural language programming, dating at least as far back as COBOL. I personally think they are great - not 100% of the time, just as a tool.

> LLMs code for you. They write for you.

A director is the most important person to the creation of a film. The director delegates most work (cameras, sets, acting, costumes, makeup, lighting, etc.), but can dive in and take low-level/direct control of any part if they choose.


To get the LLM to code for me, I need to write.

have you actually done some projects with e.g. Claude Code? completely greenfield, entirely up to yourself?

because in my experience, you're completely wrong.

I mean, I get where you're coming from if you imagine it like the literal vibe coding this started as, but that's just a party trick and falls off quickly as the project gets more complex.

to be clear, simple features in an existing project can often be done simply - with a single prompt making changes across multiple files - but that only works under _some circumstances_, and for bigger features / more in-depth architecture, your involvement is still necessary to get the project to work according to your ideas

And that part needs you to tell the LLM how it should do it - because otherwise you're rolling the dice on whether it's gonna be a clusterfuck after the next 5 changes


So does autocomplete. Why not treat LLMs as the next iteration of autocomplete?

LLMs are generative and do not have a fixed output the way past autocompletes have. I know when I accept "intellisense" or whatever editor tools are provided to me, it's using a known set of completions that are valid. LLMs often hallucinate, and you have to double-check everything they output.

I don't know what autocomplete you're using but mine often suggests outright invalid words given the context. I work around this by simply not accepting them

The high failure rate of LLM-based autocompletes has led me to avoid those kinds of features altogether, as they waste my time and break my focus to double-check someone else's work. I was efficient before they were forced into every facet of our lives three years ago, and I'll be just as efficient now.

Personally, I configure autocomplete so that LSP completions rank higher than LLM completions. I like it because it starts with known/accurate completions and then gracefully degrades to hallucinations.
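For illustration, that ranking idea is just a stable sort on source priority. Here's a minimal sketch in Python - the `Candidate` type and the priority table are made up for this example, not any particular editor's API; real editors expose similar knobs in their own config formats:

```python
# Sketch: rank completion candidates by source, LSP before LLM.
# Candidate and SOURCE_PRIORITY are hypothetical, for illustration only.
from dataclasses import dataclass

SOURCE_PRIORITY = {"lsp": 0, "buffer": 1, "llm": 2}  # lower ranks first

@dataclass
class Candidate:
    text: str
    source: str  # "lsp", "buffer", or "llm"

def rank(candidates):
    # Stable sort: candidates from the same source keep their original order,
    # and unknown sources sink to the bottom.
    return sorted(candidates, key=lambda c: SOURCE_PRIORITY.get(c.source, 99))

suggestions = [
    Candidate("do_the_thing_maybe", "llm"),
    Candidate("do_the_thing", "lsp"),
    Candidate("done", "buffer"),
]
print([c.text for c in rank(suggestions)])
# -> ['do_the_thing', 'done', 'do_the_thing_maybe']
```

The point of the stable sort is the "graceful degradation": known-valid LSP completions always appear first, and the generative suggestions only surface once those run out.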

Because they are not. Autocomplete only completes the thing you already thought. You solve the problem, the machine writes. Mechanical.

LLMs define paths, propose ideas, choose routes, analyze, and so on. They don't just autocomplete. They create the entire poem.


Sometimes. Usually the LLM does exactly what I ask it to. It's not like there are a million ways to do it - usually 4-10.

Who'd want an autocomplete that randomly invents words and spellings while presenting them as real? It's annoying enough when autocomplete screws up every other ducking message I send by choosing actual words inappropriately. I don't need one that produces convincing looking word salad by shoving in lies too.

I wonder why people have such completely different experiences with LLMs.

You could build one like that, but most implementations I've seen cross the line for me.

Hard to define but feels similar to the "I know it when I see it" or "if it walks like a duck and quacks like a duck" definitions.


Autocomplete annoys me, derails my train of thought, and slows me down. I'm happy that nobody forces me to use it. Likewise, I would greatly resent being forced to use LLMs.

Completely different context though - you have to feed through your own data for autocomplete and even then it’s based on your own voice as a writer. When you no longer have to write - nor think about those things you’re writing - then your voice and millions of others will be drowned out by LLM trash.


