I thought a big part of hacker culture involved taking an interest in new technology, exploring the edges of it (and beyond those edges), and figuring out what works, what breaks, and how to break it.
I don't understand why many software engineers are so resistant to exploring AI. It's fascinating!
I've explored AI, and will continue to do so. I deliberately overcame my resistance to it because I'm an old-school hacker and I do enjoy tinkering with technology. But regardless of how cool it is and whether it works (sometimes it does, sometimes it doesn't), it doesn't make me feel good. It's like the brain stupor I get from watching a vapid movie, or from eating too much sugar and bouncing off the walls.
Relatedly, this blog quote[0] really resonated with me:
> reaching the end state of a task as fast as possible has never been my primary motivation. I code to explore ideas and problem spaces...
Using AI to code is like mountain biking, except you're riding in a sidecar, looking at a map, while a golem drives the motorbike. It's amazing that the golem can drive at all, to be sure, and yes, I'm wearing a helmet, so the crashes aren't too bad, but...what are we doing here? I like riding my bike! It might be more physical work for me (though with the golem I often have to get out and push the motorbike anyway, so it's not clear), but when I'm biking, I'm connected to the ground and I can get into a flow state. I'm also getting exercise, getting better at biking, and learning the trails, and it's easy to hop off and explore some useless cave that's not accessible by bike, just because it looks interesting. I know the AI-proponent answer is "you still can!" but when I'm in the sidecar, my modality shifts. I'm no longer independent; I'm exercising a different kind of agency, one that's map- and destination-focused, and I'm not paying attention to the unfolding world around me.
So I understand why some people are excited about AI, and I don't think it's necessarily bad (though it does seem insidious in some pretty obvious ways, ways that even its proponents are aware of and wary of). But why are many of those people, like yourself, seemingly unwilling or unable to understand why others of us are bouncing off it?
I feel like people keep explaining this, often in direct replies to your comments like this one, some of which you specifically even respond to. So if you still don't understand, maybe you're reading but not actually listening? Or maybe long-term memory deteriorates as one merges with the AI?
I'm fine with people deciding that AI programming isn't for them, especially if they've given it a fair shake first and didn't drop it the second it made an obvious mistake.
What frustrates me is when people 1. claim it's entirely useless and that anyone who thinks it's useful is deceiving themselves (still very common, albeit maybe less so now than it was six months ago) or 2. claim that spending time writing about and understanding it "has turned a lot of them into drooling fanboys."
Hence my snappy response to the above comment. I took it a bit personally.
That's fair. I also made a snappy response, because I get frustrated on the other side, when proponents say 1) "you're using it wrong" or 2) "it'll get better" or 3) "I don't understand why people are resistant". In that last case, there are two interpretations of resistance: one in which someone never gets past their initial knee-jerk response, and another in which someone develops a more warranted resistance after exploring it. The first kind is potentially conservative, or ideological, or fearful, or lazy, which I think is what you take issue with. The second kind is more balanced and reasonable. Do we need a better descriptor to differentiate the two?
Honestly at this point you could build a full periodic table of AI hesitancy/resistance/criticism and have it be a useful document!
I'm staunchly opposed to the whole "model welfare" thing, deeply skeptical of most of the AGI conversation, and I continue to think that outsourcing any critical decisions to an AI system is a catastrophically bad idea in a world where we haven't solved prompt injection (or indeed made much progress toward solving it) - so carve me out a bunch of squares for those.
Maybe there's room here for one of those political compass style quizzes.
That's because a lot of commenters here are not hackers in any real sense; rather, they're software engineers. Perhaps this hasn't always been the case.
It’s embarrassing. Don’t rely on AI, guys. Have pride in yourselves.