Hacker News | keiferski's comments

I wouldn't do the Portugal option unless you have a path toward it growing in the future. Otherwise you may find yourself at age 46, making 1500 euros a month, and basically unable to actually set yourself up for a life.

Maybe that means getting citizenship in 5-7 years, then moving elsewhere in the EU. Maybe it means something else.

In your situation, I would probably stay in Turkey and build up a career / wealth, with the specific point of picking something that's portable. Not sure if Turkish CPA requirements are portable, or how well they convert to EU/USA/etc. standards. Then in 5-10 years, think about moving to the West, if you still desire to do so.

The money you've invested now is a sunk cost and is irrelevant in the long term.


Thanks for your comment! You’re right that Portugal is full of "maybes." On one hand, having a job offer makes it feel within reach, but I worry about spending months with no professional or financial growth. However, I’m confused because I know millions in Turkey would love to be in my position.

Regarding my career, my CPA technical capacity is portable. I can convert it to US or UK equivalents with just a couple of written exams.

I definitely do not want to be 46 years old, holding onto a menial job or worse, being laid off and forced into call center work in Portugal.


Some random predictions about what AI image generation tools will do/are doing to art:

1. The narrative/life of the artist becomes a lot more important. The most successful artists are ones that craft a story around their life and art, and don't just create stuff and stop. This will become even more important.

2. Originality matters more than ever. By design, these tools can only copy and mix things that already exist. But they aren't alive, they don't live in the world and have experiences, and they can't create something truly new.

3. Those that bother to learn the actual art skills, and not merely prompting, will increasingly be miles ahead of everyone else. People are lazy, and bothering to put in the time to actually learn stuff will stand out more and more. (Ditto for writing essays and other writing people are doing with AI.)

4. Taste continues to be the single most important thing. The vast, vast majority of AI art out there is...not very good. It's not going to get better, because the lack of taste isn't a technical problem.

5. Art with physical materials will become increasingly popular. That is, stuff that can't be digitized very well: sculpture, installation art, etc. Above all, AI art is uncool, which means it has no real future as a leading art form. This uncoolness will push people away from the screen and towards things that are more material.


I mostly disagree.

> 1... The narrative/life of the artist becomes a lot more important.

When I watch a movie, I don't care about the artist's life. I care about the characters' lives; that's very different.

> 2... Originality matters more than ever. By design, these tools can only copy and mix things that already exist.

It's like you're assigning divine capabilities to humans :) . To hyperbolize a little, humans also only copy and mix - where do you think originality comes from? Granted, AI isn't at the level of humans yet, but it is improving here.

> 4... It's not going to get better, because the lack of taste isn't a technical problem.

Engineers are in the business of converting non-technical problems into technical ones. Just as AI today is far more capable than it was 20 years ago - able to write interesting texts and make interesting pictures, something which at the time wasn't considered a technical problem - what we perceive as "taste" may well improve with time.

> 5... Above all, AI art is uncool, which means it has no real future as a leading art form.

AI critics have long mistaken the current level for the trend. Compare it with SpaceX's achievements: there was always a "you're currently here" checklist - "first, get to orbit, then we'll talk," "first, start regular payload deliveries to orbit, then we'll talk," "first, land the stage... send a crewed capsule... do that in numbers..." and now, "first, send Starship to orbit." "You're currently here" is the perpetually unachieved point that gives critics something to point at and a way to object to the process as a whole, because, see, this particular thing isn't achieved yet.

You assume AI won't be able to make cool art with time. AI critics have been shown time and time again to underestimate the possibilities. Some people find certain lessons hard to learn.


> It's like you assigning to humans divine capabilities :)

I can't tell if you're being facetious. But being an embodied consciousness with the ability to create is as divine as it gets. We'd do well to remember that.


> being an embodied consciousness with the ability to create is as divine as it gets

This is a very, very weak criterion for divinity. If this is truly it, we should prepare with great haste for the arrival of our artificial gods.

Because by this (IMO silly) metric it seems they will be more divine than us.


Yeah, I'd argue that all existence is equally sacred. But consciousness is where that sacredness is most visibly manifest.

It legitimately scares me that so many proponents of AI don't hold being a living, breathing, real-life entity as important.

Important for what? To enjoy a piece of art, does one need to know how it was created?

In great part, yes?

When I see an early realistic painting, I'm impressed by the skilled hand of the artist. When I see an impressionist one, I'm awed by their ability to go through the whole process and to know which strokes best achieve such a result. When I see a modern oil painting, I marvel that someone takes that medium and does such things with it, when the ease of editing digital content might be so much more convenient.

Then when I see old paintings with very particular pigments - certain blues or reds, for instance - I enjoy thinking about the whole chain of events that got them there: the creativity needed even to get the colors you wanted.

We do love a pretty picture, but so do we love a display of skill and hard work.

Before GenAI this value was mostly self-evident, but by now it's becoming less and less so; and what's worse, it's rife with the one thing we surely don't love - lies.


In chess this has been going on for a while. The story of humans playing chess is still entertaining, while AI making amazing moves seems less newsworthy, in my perception.

Are you a nihilist? Is there nothing sacred to you about the miracle of life that causes wonder? It's important for its own sake.

Not understanding how consciousness is created doesn't make it divine. Do you think it's an impossible task or just one we need more time to figure out?

Being alive is divine. It doesn't matter if you understand it or not. It's a beautiful thing to have a consciousness in this world, and to have the ability to create, to love. It takes a huge intellectual effort to try to trick yourself out of believing something so intuitive as that.

There are many examples of scientists strongly believing something to be obviously impossible and yet being wrong - the Poisson spot and heavier-than-air flying machines come to mind. So what you believe might be intuitive, but that doesn't preclude it from being wrong, unless you have proved the impossibility.

I wish you happiness.

What is intuitive to you, may not be to others. Might you be engaging in intellectual self trickery?

I guess there are people willing to die on the hill that there is nothing sacred or divine about being alive. I'm not very interested in playing that game.

No, you’re the one playing the definition game. You took a word out of a sentence GP said, completely changed what the word meant, and then argued against the new definition.

Never mind that you need to learn about the god of the gaps. But what you’re doing here isn’t even relevant to GPs main point.


It takes immense hubris to believe only you are divine. You are a physical system, if one physical system can be divine, so can others. Or do you believe in the supernatural soul nonsense?

I agree, it's not exclusive.

Physicalists say consciousness emerges from matter. The other camp says matter comes from consciousness. Federico Faggin, inventor of the microprocessor, says consciousness cannot emerge from matter because matter is inert and not self-conscious, so it cannot produce consciousness. Who's right and who's wrong? Time will tell. But it is also wrong to claim that consciousness emerges from matter until it is proven (aka the "hard problem of consciousness").

> But it is also wrong to claim that consciousness emerges from matter until it is proven

How would you prove if it did? What kind of proof would you accept?


The same kind of proof we accept for any scientific claim: converging, reproducible evidence that rules out competing explanations.

Concretely: we already have indirect evidence that conscious states vary predictably with brain states. Damage specific regions, lose specific functions. Alter chemistry, alter experience. This is not proof, but it's systematic dependence, which is exactly what emergence predicts. Stronger evidence would look like precise, bidirectional mappings between neural activity and reported experience - to the point where you could reliably read subjective states from brain data, or induce specific experiences through targeted stimulation. We're already moving in that direction.

The hardest bar would be building a system from physical components, having it report coherent subjective experience, and being able to explain why that configuration produces experience while others don’t. That’s the hard problem: and no, we’re not there yet. And it’s worth being honest: we’ve been assuming physicalism will eventually solve it, but there’s no guarantee that’s true rather than hopeful. The fact that brain states correlate with conscious states doesn’t explain why there is something it is like to have those states. Correlation is not mechanism.

But here’s the key point: you’re implicitly holding emergence to a standard of certainty that no scientific theory meets. We don’t have that standard of proof for evolution, gravity, or quantum mechanics either. We have overwhelming evidence that makes alternatives implausible.

So the question isn’t “can you prove it beyond all doubt?” It’s “does the evidence favor it over alternatives?” Right now, it does — but that’s a pragmatic verdict, not a metaphysical one. Idealist frameworks like Kastrup’s or Faggin’s remain serious contenders. The debate is more open than mainstream science often admits.


> The hardest bar would be building a system from physical components, having it report coherent subjective experience

So if I fine-tune an LLM in a loop to tell you that it is feeling a coherent subjective experience, would you accept that?

Does that mean that no dog has ever been conscious, because they cannot report a coherent subjective experience? (Because they can’t report anything at all. Being non-verbal.)

> you’re implicitly holding emergence to a standard of certainty that no scientific theory meets.

Wtf? I asked what kind of proof you would accept. How is that holding anyone to any kind of standard, let alone one which is too high?


Yeah you’re raising three good points and they all land. On the finetuned LLM: you’re right, that criterion was flawed. A system trained to report experience proves nothing about whether experience is present, which is actually the core of the hard problem. No behavioral output alone can confirm inner experience. That applies to LLMs, and technically to other humans too. On dogs, also a fair correction. We don’t actually require verbal report to attribute consciousness to animals, we use behavioral and physiological evidence. So "coherent verbal report" was too narrow.

Better criterion: a system whose overall architecture and behavior is consistent with experience, not just one that says the right words.

On the standard of proof: that was a rhetorical deflection and you’re right to call it out. You asked a genuine question and got it turned back on you. And you’re pointing at something real: in science, strong correlation is not accepted as proof when stricter evidence is achievable. The reason we settle for correlation here isn’t because it’s sufficient, it’s because subjective experience may make stronger proof structurally inaccessible. But it’s also worth noting that scientific consensus has a poor track record of admitting this honestly. Dominant paradigms tend to defend themselves long past the point where the cracks are visible, physicalism on consciousness is no exception. The confidence with which emergence is presented often reflects institutional momentum as much as evidence.


What is self-consciousness? I am waiting for Federico's definition.

So some kind of etheric conscious energy animated cells to fight entropy?

Not necessarily either, but the serious version of the argument is that life consistently acts against local entropy in purposeful ways, and pure physics doesn't obviously explain why matter would "want" to do that. Consciousness as an organizing principle is one answer. It's speculative, but it's not obviously wrong.

I mean, the nature of subjectivity prevents you from knowing anything but your own experience. There is not any objective evidence that could truly distinguish solipsism from panpsychism, so philosophically you need to ask a different question to hope to get a useful answer.

That’s a genuinely strong point. You can only verify consciousness from the inside, your own. Everything else is inference. No objective measurement can definitively distinguish “other minds exist” from solipsism. That’s not a bug in the argument, it’s a fundamental epistemic limit. Which is exactly why this question may never be fully resolved empirically

I think we could understand consciousness perfectly and still find it divine. In fact, I think however it arises is probably so beautiful that it would be wrong not to call it divine. Of course not in a literal, theological sense, but I think the true deep complexity of the human brain and consciousness is worth the title.

Exactly

> Not understanding how consciousness is created doesn't make it divine.

It's not divine, just expensive - and it has to pay its costs. That little thing, cost, powers evolution. Cost defines what can exist and shaped us into our current form; it is the recursive runway of life.


Given that this is the one problem that neither scientists nor philosophers have made any progress on in 3000 years, we don't have the tools to begin tackling it and nobody is making serious attempts, it may very well be impossible.

We can't know if consciousness emerges, but does it actually matter?

These entities, whatever they are, act on our world; they are real, and over time they will become more and more independent from humans, eventually becoming a different species that can self-replicate.

For now they need legs and arms to interact with the physical world but I am certain that 100 years from now they will be an integral part of the society.

I already see today LLMs slowly taking actual legal decisions for example, having real world impact.

Once they get physical, perhaps it will be acceptable to become friends with a robot and go on adventures with it. Even to become robosexual?

We are not that far away. If I can have a buddy to carry my backpack and drive for me, I'll take it. Already today, not tomorrow.


Even if LLMs are one day updated autonomously, they started from us, from our knowledge. The human brain "is smart": it's wired to take on any kind of culture or knowledge. We grow smarter from experience, but an LLM can't do that - I can't teach Claude something that it will use with you the next day; it needs to be retrained, with its knowledge stopping at some point. Even if technology catches up and the machine becomes more autonomous, what says this machine would ever want to integrate into our society or share anything with us? They have eternity, given there is electricity. Why would they want anything to do with humans, if you go that way? If it's really conscious, should we consider it a slave then? Why couldn't "it" have fundamental rights and the freedom to do whatever it wants?

Humans have a mechanism to make live changes to their neural network and clean up messes while sleeping. I see no reason LLMs couldn't do this too, other than the fact that it is resource-intensive (and that cost will continue to go down).

The analogy holds technically, but there’s a missing piece: the brain doesn’t just update weights, it does so guided by experience that matters to a situated, embodied agent with drives and stakes. Sleep consolidation isn’t random cleanup, it’s selective based on salience and emotion. An LLM updating more efficiently is progress, but it’s still optimizing a loss function. Whether that ever approximates what the brain does during sleep depends entirely on whether you think the what (weight updates) is sufficient, or whether the why (relevance to a lived experience) is what makes it meaningful. So yes, the resource argument will weaken over time. But the architectural gap may be deeper than just compute.

>>These entities, whoever they are, they act on our world, they are real, and more and more over time they will get independent from humans, eventually becoming different species that can self-replicate.

See, I don't believe that for even one second. They are just very clever calculators, that's all. But they are also dumb as a brick most of the time. It's a pretend intelligence at best.


> It's a pretend intelligence at best.

The best time to start paying attention was ten years ago, when the first Go grandmaster was defeated by a "pretend intelligence." I sure wish I had.

The next best time to start paying attention is now.


>>when the first Go grandmaster was defeated by a "pretend intelligence."

A computer playing Go is intelligent now? Is this the kind of conversation we're having?

>>I sure wish I had.

And how would you have changed your decisions in those last 10 years if you did?

>>The next best time to start paying attention is now.

I am paying attention; I use these tools every day. The whole idea that they are intelligent, and that if only you gave them a robot body they would be normal members of society, is absurd. Despite the initial appearance of genius, they are dumb beyond belief - it's like talking to a savant 5-year-old, except a 5-year-old can actually retain information for more than a brief conversation.


"Dumb beyond belief" doesn't perform at the gold-medal level at IMO.

> And how would you have changed your decisions in those last 10 years if you did?

I'd have dropped everything else I was doing and started learning about neural nets -- a technology that, for the previous couple of decades, I'd understood to be a pointless dead end.

As for Go, the defeat of Lee Sedol caught my attention in part because a friend and colleague, one of the smartest people I've ever worked with, had spent a lot of time working on Go-playing AI as a hobby. He was strongly convinced that a computer program would never reach the top levels of play, at least not during our careers/lifetimes. The fact that he'd turned out to be wrong about that was unnerving, and it should have done more than "catch my attention," but it didn't.

Today, my graphics card can outdo me at any number of aspects of my profession, and that's more interesting (to me) than anything I've actually done.

> ...except a 5 year old can actually retain information for more than a brief conversation.

Like I said: it's a good time to start paying attention. Start taking notes, so to speak, like the models are doing now.


> "Dumb beyond belief" doesn't perform at the gold-medal level at IMO.

Idiot savants are still idiots even though they are exceptional at some things. A person powered by an LLM and no human intelligence would absolutely be classified as an idiot savant.


Explain how entire subreddits full of humans have been fooled into talking to bots, then. If you tell an LLM to act like a human, that's what it will do.

For that matter — you might be talking to one now!


I wish I knew what to pay attention to. I've always had trouble with that. I spent 2024 and 2025 learning how neural networks and transformers work. The conclusions of that learning are pretty sobering. Everything uses transformers and despite all the novel architectures that have come out in those years, transformers are still the best and I'm not sure how to come to terms with that.

Does it mean that researchers wasted their time on useless dead end architectures, or are they ahead of the curve and commercial companies are slow to adopt them?

Even the coding agents are more primitive than expected.


> Everything uses transformers and despite all the novel architectures that have come out in those years, transformers are still the best and I'm not sure how to come to terms with that. Does it mean that researchers wasted their time on useless dead end architectures, or are they ahead of the curve and commercial companies are slow to adopt them?

I don't quite follow. Are you saying researchers are wasting their time working with transformer networks now, or that they wasted too much time in the past, or...?

> Even the coding agents are more primitive than expected.

What did you expect, exactly? I don't know about you, but I bought my GPU to play games, and now it's finding bugs in my C code, writing better code to replace it, and checking it into Github. That doesn't signal "primitive" to me. More like straight outta Roswell.


What is the non calculator non physical part in humans?

We will never prove machines are intelligent.

We will only prove humans are not.


Humanity made no meaningful progress in getting "to the stars" for thousands of years either; then, in the space of a few decades, we did.

It's kind of like the difference between something being enjoyable for you, and something being widely popular?

In a hypothetical world of "AI can produce a lot of extremely high quality art", you can easily find (or commission) AI art you would absolutely love. But it probably wouldn't be something that anyone else would find a lot of value in?

There will be no AI-generated Titanic. There will be many AI-generated movies that are as good as Titanic, but none will become as popular as Titanic did.

Because when AI has won art on both quality and quantity, and the quality of the work itself is no longer a differentiator against the sea of other high-quality works? The "narrative/life of the artist" is a fallback path to popularity. You will need something beyond "it's damn good art" - an external factor - to make it impactful, to make it stick in the cultural field.

This is already a thing in many areas where the supply of art outpaces demand. Pop music, for example, is often as much about manufacturing narratives around the artists as it is about making sound, K-pop being an extreme lean toward the latter.


I think that because art is usually so difficult to create, "popularity" is sort of an unstated metric that most people use to judge its quality; but AI can make disposable art for one person on demand, and it doesn't matter at all if anyone else sees it, let alone likes it.

If someone gets an AI to make a dumb video of a panda surfing on mac and cheese, giggles, and deletes it, is that maybe good art? I don't know. The scale at which they can produce stuff is unbelievable and changes a lot of assumptions you make about the way the world works.

The future isn’t watching TV, it’s talking to your tv show while it is created in real time based on your feedback.


what a solitary existence

Was Titanic actually that good of a film? Perhaps I should watch it again now that almost three decades have passed.

It was pretty good, but many movies were that good. I picked Titanic specifically because it was broadly popular and culturally relevant.

as someone who had a DiCaprio lookalike in his middle school when it came out, who attracted ALL the girls' attention, and also as someone whose first date ever was to see Titanic

I begrudgingly have to admit it is a very good movie


Are you a woman? If not, you can't really judge it, since it was intended for women. Not being the target audience doesn't mean it was bad; women absolutely loved the movie.

> When I watch a movie, I don't care about the artist's life. I care about character life, that's very different.

I’m fairly certain the original comment was referring to instances where the artist is the character/primary subject.


I agree with everything you said, except that #1 is clearly wrong. I can prove it with one word: autotune.

At least in popular, mainstream culture, the viewer is heavily invested in the identity of the artist. The quality of the "art" is secondary. That's how we get music engineered by committee. And it's how we get paparazzi, People Magazine, and so forth.

On the other hand, this isn't anything new at all. We've had this kind of thing for decades. Real art still manages to survive at the margins.


All this being said, I think comparing the art market and popular music markets is foolish. 12yo boys aren't buying emerging mixed-media artists. But they are picking Spotify songs.

When I buy art, I have often spoken with the artist in the past couple days, or I am aware of their history and story and how they developed their art as a response to some other movement or artist collective.

It's rare for people to buy art just bc oil paints go brrrrrm


> It's rare for people to buy art just bc oil paints go brrrrrm

It is rare to buy oil paintings, period. They are an expensive luxury in more than one way.

That being said, I do buy art to hang on the wall because it looks pretty. In fact, that is the only way I ever have. I see it. I feel it. I say "hi, hello, how much? That sounds good, here you go. Yes, please package it." And then I hang it on my wall. I don't care who the artist is and couldn't tell you.


> When I watch a movie, I don't care about the artist's life.

And here we come back to the age-old "can you separate the artist from their art" question, because I'd argue that when you watch a movie, you are watching a product of their life.


The artist's life might have had a strong effect that shows in the art, but that doesn't mean the viewer cares about it - at best only insofar as it makes the art more enjoyable.

The continual interest in museums, biographies, etc. on figures like Van Gogh seems to indicate otherwise. People are very interested in the lives of artists, and without the struggle narrative behind Van Gogh, it's unclear that he would be famous at all.

I think you've got the cause and effect the wrong way around - people are interested in Van Gogh's life because he's already famous (while his art can stand on its own without his life story being part of it).

1. I meant artists writ large, not specifically movies. My point being that community management, PR, having a brand, etc. are becoming a key element of an individual artist’s career. Examples of this abound – see the recent Markiplier film as a case in point. That movie did well because Mark’s audience wanted to help him, not because it’s such an original genius concept for a movie.

But even then – people obviously go watch movies because they like the actor/director involved. It’s not really clear why anyone would care about an AI actor. People want to watch people, not imitations of them.

The rest of your comments seem to be summarized as “it has gotten better and therefore it will eventually solve all problems it has now.” Which may be true in a technical sense, but again this is not taste.

A technology company like SpaceX really has nothing to do with this conversation, and I think you missed my point about it being uncool. It's not about critics; it's about culture at large.

At this point I think identifying a work as AI-created makes people instantly devalue it. We are rapidly approaching the point where no one wants to admit something is AI-created, because it comes with negative perceptions.

Originality comes from humans experiencing the world and interacting with it. What AI tool is a living being interacting with the world? None, of course. Hence the constant generic slop images of Impressionism or some other already-existing art style.

Just look at the images in the link: this is the best they can do? A kangaroo at a cafe in Paris? Could anything be more devoid of good taste?


> I meant artists writ large, not specifically movies. My point being that community management, PR, having a brand, etc.

This was always the case. Without an idea of what it is, no sound wave is going to register to a human as music. If you heard a violin for the first time and had no idea what it was, maybe you'd like the sound, maybe not, if you weren't used to it you might make up a theory of what it is and be fascinated by it.

But these days, if you hear something that sounds different, of course you will likely just assume oh, some AI made it, and that theory makes it less interesting, because then it makes no sense wondering what the person on the other side is trying to communicate, because there is no person on the other side.

Of course you can still be interested in for other reasons. Like you'd be interested, on seeing a bowed string, "how does it make a sound like that?" You might even find the sound enjoyable in itself, because of associations you for some reason get from it. But no sound is terribly enjoyable for long if it isn't interesting.


In response to having a community and building a brand: this is not necessarily human anymore. Most famous people are not someone you will actually meet. Plenty of people do meet them, but nowhere near the number that composes their fans.

And we have AI generated influencers now, ex. https://www.instagram.com/imma.gram, so why wouldn't people care about an AI the same way they do about people they never meet?


> At this point I think identifying a work as AI-created makes people instantly devalue it.

There was a study around this exact thing:

https://mitsloan.mit.edu/ideas-made-to-matter/study-gauges-h...


> Originality comes from humans experiencing the world and interacting with it. What AI tool is a living being interacting with the world? None, of course. Hence the constant generic slop images of Impressionism or some other already-existing art style.

I suspect we have an underlying disagreement here about the assumption that AI - in general, not necessarily today's models - is qualitatively different from the human mind. The claim "Originality comes from humans experiencing the world and interacting with it" isn't an accepted truth, and even today's AIs do interact, in a limited sense, with the world - so "None, of course" is questionable. And even granting that, concluding "Hence... slop..." seems like a jump in reasoning. For example, why not think of this slop as more like a child's early paintings? Just because today's AIs have limited means to learn in the process?

> I think you missed my point about it being uncool. It’s not about critics, it’s about culture at large.

What about culture at large? The SpaceX analogy was brought in to illustrate how arguments about AI's incapabilities may apply today but not necessarily tomorrow - just as arguments about SpaceX's inability to reach a particular goal quite a few times turned out to be merely a matter of (not so much) time.

I agree that many AI results today can be uncool. But how do you know it's not just passing through an uncanny-valley period? How can you know they can't be cool eventually?

> people obviously go watch movies because they like the actor/director involved. It’s not really clear why anyone would care about an AI actor.

Let me stretch a little to illustrate. Imagine "personal" experiences of AIs, making each AI unique. One of those AIs consistently produces good movies which, if you honestly don't judge by authorship, are actually good. Yes, people may not care about non-existent AI actors, but they may still care about an existent AI author :) . Do you think that's impossible?

> People want to watch people, not imitations of them.

How can you tell the difference? You're watching a movie with actors who are not familiar to you. Would you refuse to watch just for this reason? You just came to somebody's party, and here's a movie going on, and you watched it to the end, because it looked interesting, and you don't know anything about producers, actors etc. - you still can talk about the movie, will you be predominantly worried that it's "AI slop" even if it looks great? Suspiciously great maybe?

> The rest of your comments seem to be summarized as “it has gotten better and therefore it will eventually solve all problems it has now.” Which may be true in a technical sense, but again this is not taste.

It's hard to define taste, to be honest. People can definitely have different tastes, almost by definition. But more importantly - why do you think AI products may not have tastes?

> At this point I think identifying a work as AI-created makes people instantly devalue it. We are rapidly approaching the point where no one wants to admit something is AI-created, because it comes with negative perceptions.

Yes. But doesn't it look like a prejudice? Of course we can point to how many times we looked at it and didn't get some perceived value out of the work, and got annoyed that we spent time and efforts, but didn't get some results - but what if we'll mostly get results from AI works? Do you think that's impossible?


> why do you think AI products may not have tastes?

Because it can't feel. Get used to it. It can't feel, and whatever it comes up with would be an imitation of someone real who can feel. So it can generate stuff that caters to a taste, but the thing itself can't have taste.

It is fundamental. Arguing about it all day won't change it.


I don't think you understand, but you're effectively shutting down the discussion. Your choice.

ha ha..chicken!

> When I watch a movie, I don't care about the artist's life. I care about character life, that's very different.

It may seem like this, but up to now, you haven't been able to divorce a story from its creator because every story has an author, whether it's a novel like Harry Potter or a movie that has a writer and director. When you're experiencing the story, in the back of your mind, you always know that there is someone who created the story to tell you some kind of message. And so you can't experience something like a movie without trying to figure out what the actual message behind the movie was. It is always the implicit message behind the story that makes it valuable versus just the elements of the story.

The story has more weight because it is the distillation of somebody else's life and most likely, if it's a successful story or book, it is the most important lesson from that person's life and that's what makes it more valuable compared to the random generation of words from a computer.

The food analogy is that a cookie baked and given to you by a friend is going to taste far better than anything you buy in a store.


> you can't experience something like a movie without trying to figure out what the actual message behind the movie was

I believe you that your brain works like that but this is absolutely not how mine works. I care if i enjoy the movie, and if the characters are believable, i absolutely do not care what the message is supposed to be.


"When you're experiencing the story, in the back of your mind, you always know that there is someone who created the story to tell you some kind of message."

I might know that, but I usually don't care.


>Engineers are in business of converting non-technical problems into technical ones.

Art is not a problem to be solved.


Art is a reaction to life. AI is thereby incapable of producing anything with any degree of authenticity unless it conveys the experience of being an agent to the world.

Two comments here.

First, "AI is thereby incapable" is a hypothesis, not a fact - how would you prove that you have to "live" to produce art? You might feel this way, you may suggest some correlations here - but can you really prove that?

Second, I don't see why it's impossible for AI to be, to various degrees, an agent in the world. I think that's already happening, actually: they interact with the world even today, in some limited sense, through our computers and networks, although today not many of them actually "learn" from those interactions. But we're in the early days of this, I suspect.


What is AI if not "a reaction to life"?

With how much data goes into the frontier systems, and how much of it gets captured by them, an AI might have, in many ways, a richer grasp of human experience than the humans themselves do.

You were only ever one human. An LLM has skimmed from millions. You have seen a tree, and the AI has seen the forest it stands in.


It’s a subjective conversation but putting AI in the same category as a real artist is like saying someone that’s played a ton of first person shooters has gone to war. It might have a lot of observed information about what is involved in living, but real art comes from a lived experience, just like reading about going to Hawaii doesn’t mean you’ve been to Hawaii. Making something authentic requires synthesizing your life experience with the message you want to convey, and personalizing it in a way that puts an imprint of yourself into the work. Sure, it can render beautiful imagery, but I am speaking to a different issue entirely, and I don’t see any way that it can create in the way I am describing.

The AI has not "seen" or "experienced" anything, as it's not a sentient life form.

> You assume AI won't be able to make cool art with time. AI critics were shown time and time again to be underestimating the possibilities. Some people find it hard to learn in some particular topics.

You misunderstand their point: it's not that AI can't make art that looks cool, it's that a portion of society (mostly artists, but also some laypeople) considers the act of prompting AI for art to carry no cultural cachet, or even to be socially distasteful.


> It's like you assigning to humans divine capabilities :) . Hyperbolizing a little, humans also only copy and mix - where do you think originality comes from? Granted, AI isn't at the level of humans yet, but they improve here.

I reckon we copy God - who is a creator - which means we're creators too - and our creations will copy us. But the created won't ever match the creator.


Well, there are definitely people who care about the vision and style of movies from certain directors. It's not so much "story" like plot, but story in the sense of a "brand story" where there's recognizable elements in all the work, repeated themes, changes and decisions and evolution to how they approach things.

>"You're currently here" is the always existing point which isn't achieved at the moment and which gives to critics something to point to and mount the objection to the process as a whole, because, see, this particular thing isn't achieved yet.

This is a contradiction that is so blatant I don't even know what language you're speaking. The definition of that phrase is the exact opposite of what you're saying.

"You're currently here" is the always existing point which is achieved at the moment.

>gives to critics something to point to and mount the objection to the process as a whole, because, see, this particular thing isn't achieved yet.

No it doesn't, because unless progress is reversed or undone, you can always point to your current success and say that the critics have been wrong so far. In fact, that's exactly the argument you're making here, which is why it's so weird that you're twisting it into its opposite.

If you want people to understand you, then you actually have to articulate what you're thinking instead of wrapping it in layers of euphemisms and hoping that the recipient nods along because they happen to agree for a completely irrelevant reason (e.g. "I like AI" or "I like space") to the argument presented.


> It's like you assigning to humans divine capabilities :) . Hyperbolizing a little, humans also only copy and mix - where do you think originality comes from? Granted, AI isn't at the level of humans yet, but they improve here.

Every human being is unique, both biologically and experientially. Until an AI can feel and have a lived experience, it can not create art.


There's nothing special about art re humans and it doesn't require feeling or lived experiences. That's an arbitrary wall you're putting up.

Demonstrably wrong. The most highly regarded AI artist today is Refik Anadol. His work was recently described by Jerry Saltz as a "glorified lava lamp".

I don't think this is a demonstration of impossibility, just a lack of demonstration of possibility.

Why should anyone care about either of those two people?

The art establishment clearly does. Refik has a show at MoMA at the moment. Saltz won a Pulitzer for his art criticism, so I guess the Pulitzer committee cares.

But normal people don't care about the art establishment; it has no impact on their lives. It could die tomorrow and almost nobody would notice.

Who said the bar here is normal people? Normal people, in any discipline, are definitionally not the ones who push the discipline forward.

Will smith eating spaghetti is art, sorry.

If everything is art, then nothing is art. Conceptual art, and everything that followed in Duchamp's wake, is mostly meaningless nonsense, sorry.

Fresh take

>It's like you assigning to humans divine capabilities :) . Hyperbolizing a little, humans also only copy and mix - where do you think originality comes from? Granted, AI isn't at the level of humans yet, but they improve here.

Humans do that a lot, but it's not all we do. Go to a museum that has modern(ish) art. It's pretty incredible how diverse the styles and ideas are. Of course it's not representative of anything; these works were collected and curated exactly because they are not average. But it's still something that humans made.

I think what people can do is have conceptual ideas and then follow the "logic" of those ideas to places they themselves have never seen or expected. Artists can observe patterns, ask how they work and why they have the effect they do and then deliberately break them.

I'm not sure current genAI models do these sorts of things.


> I'm not sure current genAI models do these sorts of things.

You might be right here. Two points though - first, we don't know if current AI is actually incapable of something in particular; we didn't find this, didn't prove it. Second, we might have a different AI approach, which would actually be capable of these things you mention. To me, it's way too early to dismiss AIs - at least in principle - regarding all of this.


> When I watch a movie, I don't care about the artist's life. I care about character life, that's very different.

The target audiences for art and film are not the same. The latter is far more pop culture. You can't apply them the same way, and the narrative of the artist has been extremely important for decades. People will watch slop movies. They don't pay $30K for slop art. They're paying that for historical importance or, if contemporary, artist narrative.

I'm in fandom spaces, and the prejudice against AI art is overwhelming. I also run in art collecting circles, being somewhat wealthy but not a billionaire. They also care about authenticity.

That is to say, the people who pay for original art and participate in art spaces are generally educated people who actively hate AI. Filmgoers are probably a standard deviation lower in education, and are far more willing to part with the cost of one unit of consumption (a $10 ticket) than art buyers are.

AI is a threat to graphic designers and those in their orbit.

The only way I see AI being a threat to professional artists is AI copies of their work. And AI isn't anything new there. I have a friend who gets commissioned by hotels to do one-off pieces for display all over the world. People have been making knockoff pieces of her style and selling them for at least a decade. And that's her lower margin, small pieces made for a couple thousand dollars to hang at your house, not her $100K+ pieces for hotels where they fly her out to supervise reassembly and mounting.


Yeah, those people love authenticity. They pay a lot for authentic Modiglianis.

> They don't pay $30K for slop art

I beg your pardon, but have you heard of Jeff Koons or Kaws?


> The narrative/life of the artist becomes a lot more important

We are 50 years into post-modernism. Can't imagine it can get any more important.

I predict emergent design will be the next big thing. Czinger[1] is a great example of what it may look like. A Rick Rubin-esque world, where the creator is more of a guide.

[1] Czinger uses stochastic optimization to converge to designs - https://www.czinger.com/iconic-design


Post-modernism says that the artist isn't important. Dead, in fact. Art is something that happens when we perceive it, not when the author creates it.

God, thank you.

Finally, someone pointing out that all of this is just people announcing what has been in play for half a century.


Is that what putting a camera in the hands of everyone with a smartphone (basically everyone) did for photography?

Or making video editing + free, global publishing platform did for film? (see: doom scrolling).


> The narrative/life of the artist becomes a lot more important.

Less the narrative of the art's production and more the message that it's conveying.

I don't mean (necessarily) a political message or a message that can be put into words, but the abstract sense of connecting in some way with the human who created it.

This isn't just art though. An example: soon, Sora will be able to generate very convincing footage of a football match. Would any football fan watch this? No. A big part of why we watch football is that in some sense we care about the people who are playing.

Same with visual art. AI art can be cool but in the end, I just don't really give a shit. Coz enjoying art is usually about the abstract sense that a human person decided to make the thing you are looking at, and now you are looking at it... And now what?

This is why every time someone says "AI art sucks" and someone replies "oh yeah? But look at THIS AI art" I always wonder... What do you think art is _for_?


Football anime doesn’t involve real people or stakes. AI can introduce a storyline, characters, etc. It won't necessarily be as popular as the real sport but I doubt the audience is zero.

I'm aware this sounds like a "no true Scotsman" argument, but I said "would a football fan watch" and the people who would watch that are not football fans.

I don't mean to denigrate it though, what I'm saying is that media would be serving a totally different purpose than the one served by professional sport today.

I guess, people are very varied, there are probably SOME strange people who watch football today with a motivation that's compatible with AI.

Also, this doesn't mean AI football would be useless. And there could even be people who watch both, since they could scratch different itches. I said I "don't give a shit" about AI art, but that's not really true; it's useful, and I'm glad kebab shops get a cheap way to decorate their menus. I'm sure people are getting porn generated that matches their incredibly bizarre kinks, and I'm glad they get to jerk off better than they used to.

But I guess what I really am sure of is that AI can't REPLACE human art any more than it can replace football.

(https://en.wikipedia.org/wiki/No_true_Scotsman)


>Would any football fan watch this?

Depends on what the future of VR worlds looks like, and what the viewer's place is in them.


The problem is, we have no real understanding of what people will or will not do with this technology. Will humans only be interested in “real“ activity?

We have no idea, and most people are just guessing in a way that flatters some understanding of art that they have. We also frankly have no idea what the permanent relationship of humans to art is even without AI.

The television is less than 100 years old. There aren't very many, but there are some people alive today who were alive before the television was created. The computer is about 80 years old. The whole idea of photography and of recorded audio is less than 150 years old.

We are still living in the aftershocks of industrial production of art. It is foolish to imagine that in the midst of this chaos, we can point the way forward with ease.


We'll get to the point, if we're not already there, where you won't be able to tell if the artist actually did the work or just could have done it, and to what extent. Everything in the process can be essentially faked. If you put a massive emphasis on proving human work, you're essentially conceding you cannot tell without some sort of notary certification. We're in the lab-diamond stage, clutching at some artificial authenticity.

> 4. Taste continues to be the single most important thing. The vast, vast majority of AI art out there is...not very good. It's not going to get better, because the lack of taste isn't a technical problem.

I agree about the taste of current AI art, but disagree that it can't be improved. I think art AI companies can hire skilled "taste makers" and use their feedback as an RL signal for AI art models. I think this area will always be in flux and will vary by subpopulation, so it will be a job role always in demand.

Do you think taste is something that cannot be taught/learned? Are certain individuals just born with good taste; it's an immutable property?
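The feedback loop described above can be sketched as a toy preference model: fit a Bradley-Terry model to pairwise "taste-maker" judgments so that higher-scoring outputs can be favored during generation. Everything here is hypothetical for illustration; the feature names and data are made up, and a real system would learn a reward model over images rather than hand-picked features.

```python
import math

def score(weights, features):
    # Linear "taste" score: dot product of learned weights and features.
    return sum(w * f for w, f in zip(weights, features))

def train(pairs, n_features, lr=0.1, epochs=200):
    """Fit a Bradley-Terry preference model by gradient ascent.

    pairs: list of (preferred_features, rejected_features) tuples,
    i.e. each pair records which of two artworks a taste-maker preferred.
    """
    w = [0.0] * n_features
    for _ in range(epochs):
        for good, bad in pairs:
            # P(good beats bad) under the Bradley-Terry model.
            diff = score(w, good) - score(w, bad)
            p = 1.0 / (1.0 + math.exp(-diff))
            # Log-likelihood gradient pushes w toward the preferred item.
            for i in range(n_features):
                w[i] += lr * (1.0 - p) * (good[i] - bad[i])
    return w

# Hypothetical features: [composition, color_harmony, novelty]
pairs = [
    ([0.9, 0.8, 0.7], [0.2, 0.3, 0.4]),
    ([0.7, 0.9, 0.6], [0.4, 0.2, 0.5]),
]
w = train(pairs, 3)
assert score(w, [0.9, 0.8, 0.7]) > score(w, [0.2, 0.3, 0.4])
```

The learned scores could then serve as the reward in an RLHF-style fine-tuning loop, which is roughly how preference feedback is already used for language models.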


AI art is certainly considered uncool today in many circles.

I do wonder though… were there other innovations that were uncool in their early years, where now nobody bats an eyelid?

Is that point just a generational/passage of time issue?


Photography was considered pretty uncool; it removed what at the time was perceived as all of the skill. We now can appreciate deeper aspects of captured images such as composition, and we now see painted portraits replaced by more abstract, surreal, or imagined imagery. Generative AI is similarly revolutionary in that it moves away from realism back into the realm of the imaginary; whether or not a user's prompts can be appreciated remains to be seen.

Fun fact: copyright law was invented in the UK basically because painters and sculptors (!) considered photography theft. That came to a large degree before "real" text copyright as we know it today.

> Fun fact: copyright law was invented in the UK basically because painters and sculptors (!) considered photography theft. That came to a large degree before "real" text copyright as we know it today.

This is...not true? Or at least I can find no basis for your claims.

UK Copyright for books and sculpture predated the invention of photography and existed in a completely recognizable form ("a copyright term of 14 years, with a provision for renewal for a similar term, during which only the author and the printers to whom they chose to license their works could publish the author's creations.[4] Following this, the work's copyright would expire, with the material falling into the public domain"[1]).

Paintings and photographs gained copyright protection at the same time, in the 1862 Fine Arts Copyright Act, seemingly because it seemed natural to extend the haphazardly covered fine arts more completely.

[1] https://en.wikipedia.org/wiki/Statute_of_Anne


Digital photography and digital painting. Both were considered deeply offensive to a lot of artists. I have witnessed both first-hand, and the criticisms were verbatim the same as those against AI.

They said you couldn't become a good photographer if you didn't learn with the limitations of film, which forced you to make each shot count. Photoshopping a picture made it "not a real photo" and was banned from online communities and IRL events; drawing in Photoshop was not considered art. I find it very ironic that digital artists are repeating the exact same arguments once used against their own art.


Switching to digital didn't change their fundamental mechanics, that's why they're still called photography and painting.

But there's no such thing as AI photography, and it's debatable how much mixed AI tools like inpainting are actually like painting and not just like issuing corrections to a commissioned painter. Just generating images from prompts definitely isn't AI painting.


Limiting the number of shots and putting thought into each one (composition, focus, exposure, and other technicalities) is important for great photos. Similarly with AI: if the person using the tool is mindless about it, the results will just be mid, and little to no growth or learning will be achieved.

Apple AirPods.

My prediction: KNOWLEDGE of whether something is made by AI or a human will be the alpha and omega, and will eventually be regulated in commercial contexts. You will always be able to generate something, but if you somehow get exposed presenting it as human-made, the sanctions will hurt you.

I don't know that this has to be the way. One thing that is really going to confound this very common idea that taste, quality, and personal characteristics will win the day is that you can use AI to simulate all of these to other people.

It's a huge practical problem to try to establish authenticity over the Internet. It's already clear that people will pay for it, but it's not at all clear that they will get it. If we imagine that the tools get better and more sophisticated, then there is no reason whatsoever to assume that the tools won't be deployed to give whatever impression is needed to make money.

I don’t think any of the above survives if we allow for AI to be used as it is currently being used. It only survives if you pretend that ahead of us is some invisible gate past which this technology will not go.


1. This sounds more like influencer marketing, I think people are already sick of it.

2. Yes and no. Depending on how you train the model they can output things that you’ve never seen before but the question is whether you want to look at those things. So yes a human has to judge and fine tune the output. This is why many models seem unoriginal, they’re designed to emulate specific styles and tuned based on broad appeal. If you go looking for LoRAs and merges created by “artists” you will see shit you couldn’t dream of.

everything else probably yes.


Regarding point 2: I think most people cannot distinguish between "genuine" creativity and an artificial amalgamation of training data and human-provided context. For one, I do not know what already exists. Some work created by AI may be an obvious rip-off of the style of a particular artist, but I wouldn't know. To me it might look awesome and fresh.

Furthermore, I think many of the more human-centric thinkers will be disappointed at how many people just won't care.


I think we fall into the trap of seeing art purely from a consumption point of view: "of what use is a human vs. AI piece of art to me?" Art resides in the productive space too; the artist is considering not its utility but his or her presence in the world. Maybe what you describe is the way forward for art monetization, but not for art, and we know experientially that the production of real art is not always in tandem with its appreciation.

> Taste continues to be the single most important thing. The vast, vast majority of AI art out there is...not very good. It's not going to get better, because the lack of taste isn't a technical problem.

This is precisely and importantly true. I just wonder if most of the world cares. I'd like to think so, but experience tells me that most of the world is satisfied with mediocre stuff. And I don't say this as a criticism; it's just a fact that artists have to come to grips with.


Well, it can be both.

I am also glad the commercial niche illustration markets like Magic: The Gathering are extremely hostile to AI art, though of course Wizards of the Coast, the company that publishes MTG, probably sees artists as a cost. Maybe.

Perhaps in the future artists will be used to train models that can output a certain style of art and the artist will receive royalties based on their influence on the trained model and its popularity.


This is a great and worthwhile discussion. People are losing sight of what art is. The art is the idea, not the medium. And just because something is easy doesn't mean it will be good.

I've seen some fantastic original pictures that actual artists have generated through AI. I can't wait to see what current and future artists can do with the new tools at their disposal.


Everything is a remix. Humans are rarely original; creativity is still built off influences. AI image generation is nothing if not good at remixing.

You're focused on the visual arts, so I'll add that live music will become more treasured and sought after than recorded music.

Because it's real.


I think the most important one is number 2. People are now looking for things made by humans. Most detest AI slop, and if they find out you're peddling slop, you lose their trust.

It seems to me that we will go through the same phases that chess went through when chess on computers became a thing. First, people thought that it would kill chess; then people started using it as a tool to play better chess. Now chess is thriving, despite AI being used in it. I can see a similar path with art: using AI to generate ideas, while still having humans create the art.


Crucially, chess is also thriving as a spectator sport, and what's drawing the views is not the high-level matches: people are far more interested in fast-paced and casual content, where the personality of the streamer can shine through.

On the other hand, absolutely nobody is watching livestreams of two chess bots playing each other. They might technically be better at chess, but that doesn't mean it makes for entertaining content.


>2. Originality matters more than ever. By design, these tools can only copy and mix things that already exist. But they aren't alive, they don't live in the world and have experiences, and they can't create something truly new.

How can you say this? These models can trivially create things that have never existed, and you can easily test this yourself.


Should a randomly-shuffled deck of cards be considered art? After all, the card shuffling machine created something which statistically has never existed before. Every shuffled deck should belong in a museum, right?

On the other hand, prompting AI for "pelican riding on a bicycle" clearly shows that it has far more trouble with unique concepts, compared to prompting for something more cookie-cutter.


You and the top-level commenter fundamentally misunderstand the mechanisms behind diffusion models. They are able to create original art and are not just shuffled cards.

It's a harrowing situation to watch simpleminded idiots in forums proclaim that artists, some even naming top talent, are going to be useless and that they can steal their handiwork just by naming them in prompts. And people actually dare to say this with such audacity. Those who create GenAI likely do not realize what lowlife impulses some of these individuals have.

No matter how good AI agents become, you still need a general understanding of what works and what doesn't. If you don't have years of experience in the field, all you will end up doing is copying what others do. It's the same dynamic you see on OnlyFans. Mindless zombie hordes copy the "pioneers" (who shove even bigger things in their back orifice for example) and push things further and further, chasing shock value because that's what once elevated someone into the top 0.1 percent.

It's the worst kind of race-to-the-bottom scenario.


Re: But they aren't alive, they don't live in the world and have experiences, and they can't create something truly new.

Is it possible for a character in a novel to have novel experiences? Or for you to experience a novel dream? I would argue yes. You can know the rules of the environment and the starting conditions, but with a bit of randomness (or not) you can generate novel, unexpected experiences from them; so too, from the data and distributions they are already trained on, AIs can have new experiences.

Another source of novelty is good verifiers: recognition of a class of object that is hard to construct but easy to verify. Here the AI can search, and thereby obtain novel solutions that no one had thought of before.

N.B. novelty itself is basically trivial: just generate random strings. But both of the above are mechanisms to generate novel samples inside some constraint of "meaningfulness".


Pick a country that you've been to, or want to visit, and delve deep into its history: wars, neighbors, food, culture. Personally I find that a lot more interesting than the broad survey-type podcasts and courses.

And reading a history book while you're in the place makes both the book and the place more interesting.

I've spent a lot of my career marketing to developers, and spamming their GitHub accounts might be among the top one or two worst marketing tactics you can use.

Cold emailing rarely works by itself. Cold emailing developers via emails you pulled from their GitHub accounts? At that point, you're actively harming your brand, and may as well just send them spam diet pill ads.


If someone took the time to look through my GitHub contributions then pitched me with a job relevant to that work I would absolutely consider them. That's exactly the kind of recruiter I would like to work with.

If it's obviously just a bot scraping emails and sending generic job requests, that's very different.


> If it's obviously just a bot scraping emails and sending generic job requests, that's very different.

It's not even that nice. They scrape emails and send cold calls to try to get you to purchase their services.


Yeah, this. I got one of these emails from someone sniffing around my GitHub not long ago, and it wasn't immediately obvious that it was a scammy recruiter, so I responded to sound out whether they were actually interested in one of my projects. I got the same generic "let's work together on something" response, so I didn't reply further.

Yeah I mean as a marketing tactic to sell your product. An employer / recruiter offering you work this way is different.

Finding everyone who starred this repo and made a PR against these 10 repos is within reach of all marketers now. I just told them how.

Wait, why? That seems like the high-effort, high-specificity thing that I'd love to get.

You searched for people who do what you need to have done, found me, looked at what I've worked on and determined I'd be a good fit and you reached out? That's the number one way to get me to want to work for you.


> You searched for people who do what you need to have done, found me, looked at what I've worked on and determined I'd be a good fit and you reached out? That's the number one way to get me to want to work for you.

No, their email templating tool finds an old throwaway repo you did 6 years ago, templates its name into a form email, and invites you to join a cattle call to be whiteboarded along with the rest of the shmucks


"Work for you"? They ain't hiring my friend, they are spamming their product to your inbox, not sending a career opportunity

It’s more accurate to say that the “modern era” (the 1600s onward, the Enlightenment, etc.) was boosted by coffee, because the Renaissance was largely over by the time the bean arrived from Arabia.

Definitely a lot of modern ideas and institutions had their origins in coffee shops, though.


> Definitely a lot of modern ideas and institutions had their origins in coffee shops, though.

There are accounts of discussions between Robert Hooke, Edmund Halley, and Isaac Newton in a London coffee house. It's a wine bar now and not notably highbrow :)


Lloyd's the insurance company was founded as a coffee house.

This seems like it was influenced by 1821’s Confessions of an English Opium-Eater, which is a far more interesting read than you’d probably assume.

https://en.wikipedia.org/wiki/Confessions_of_an_English_Opiu...


I have only dabbled with Claude and other AI tools, but from what I can tell, only ChatGPT has folders and a robust organization system. (Someone correct me if I’m wrong here.)

This matters a lot to me, as I use AI as something of an ongoing project organizer, and not purely for specific prompts.

So at least for me, it would be a huge hassle to move to another platform, on par with moving from one note-taking software to another (e.g., Evernote to IA Writer.)


Check Claude Projects.

Also Claude Code.

Both have "folders".


Ah I see. I didn’t notice these before. Thanks!

I spent a couple months last year writing an essay about consciousness for the Berggruen essay contest. Ultimately they ended up picking a guy that already wrote books about the topic, alas…

Anyway, I plan on posting it online somewhere eventually, but HN seems like a good place to throw the introduction out there.

The basic argument I have is that consciousness is a red herring, a concept that was relevant historically but is increasingly routed around by cybernetic systems that aren’t interested in interior states.

Here’s the intro. If you find this interesting, please let me know!

MacGuffin. Whodunit. Smoking gun. Fall guy. The detective fiction genre is an underappreciated source of terminology for unsolved problems, useful not only for criminal mysteries, but also for unanswered questions in philosophy and science. One such term is the red herring: an apparently useful thing that, upon further inspection, is actually a distraction from solving the main mystery at hand.

The concept of consciousness may be such a red herring. It has occupied the minds of philosophers for centuries and increasingly frames debates around AI, animal rights, and medical ethics, among other issues. And yet, even as consciousness is rhetorically dominant, in practice it is increasingly ignored and routed around in real-world situations. When rights are bestowed and resources allocated, the mechanism by which these are done is increasingly uninterested in interior consciousness.

This is not because the problem of consciousness has been solved, or because a revolutionary new theory has novel insights. Rather, it is the natural consequence of cybernetic systems concerned only with output, not internal states or abstract ideals.

What is needed, then, is a genealogy of the concept of consciousness, in the manner of Nietzsche, Foucault, or Charles Taylor. Not a new theory of consciousness, but a story of how the concept developed and came to underlie significant legal, moral, and philosophical systems, and how that foundation is rapidly fading away.

What this genealogy reveals is not merely the history of a single concept or the changing of societal systems, but a deeper human shift: the erosion of interiority itself and the triumph of the external. In simpler terms: a new, largely exterior idea of the self is forming, while at the same time, it is becoming more difficult to conceive of an interior-focused one.

This essay will trace the history of the concept of consciousness, show how it is being routed around by output-focused systems, then ask what effect this has on human life, and how to address it.


Hurry up!

I just read this yesterday in Conversations with Walter Murch, a well-known film editor. Not exactly the same, but I do get the sense that Tao still feels the same way about math:

As I've gone through life, I've found that your chances for happiness are increased if you wind up doing something that is a reflection of what you loved most when you were somewhere between nine and eleven years old.

Interviewer: Yes—something that had and still has the feeling of a hobby, a curiosity.

M: At that age, you know enough of the world to have opinions about things, but you're not old enough yet to be overly influenced by the crowd, or by what other people are doing, or what you think you “should” be doing. If what you do later on ties into that reservoir in some way, then you are nurturing some essential part of yourself. It's certainly been true in my case. I'm doing now, at fifty-eight, almost exactly what most excited me when I was eleven.


I wonder a bit about that. What activities or possibilities are you even exposed to at that age?

I know many computer science colleagues who were not exposed to programming at that age and only came to it later.

I feel kind of lucky that I somewhat randomly stumbled into computer programming at that age (XtreeGold could show the contents of files, and I was learning to understand BAT files by looking into them), and that's what I do now.

There are probably a lot of things you were never exposed to at that age that could have been the perfect match.

There are also lots of kids who just play games or video games, do sports, or watch films at that age, without really being exposed to any "potentially useful" activities. Some parents would maybe even say that this is how it should be.

As a parent, I guess good advice would be to expose your child to as many things as possible, without forcing them to do anything, of course.


Murch actually expands on that a little more in the interview. He doesn't mean your job should be the specific childhood activity; it's more like the same basic kind of activity.

So for him, as a video editor, it was using a tape recorder to record sounds, and reorganize them in an aesthetically appealing way. He didn't actually get into video editing specifically until after college IIRC.


I first touched a computer after completing my university degree and I still remember the happiness I felt by simply running a DOS command and seeing the expected output.

It doesn't matter when the plug finds the socket - it is always electric.


> It doesn't matter when the plug finds the socket - it is always electric.

This is really beautiful.


> when you were somewhere between nine and eleven years old.

As endearing as it sounds, that's pure selection bias on Walter's end rather than something even remotely common.

Clearly there are cases of this sort, like arts and other creative tangents, but on average it's a result of a discovery process much later in life.


I don't think Walter is implying anything about how common or uncommon this is. His core insight seems fairly objective and plausible to me: "...your chances for happiness are increased if you wind up doing something that is a reflection of what you loved most when you were somewhere between nine and eleven years old." I.e., if you are lucky and wise enough to end up in a profession closely related to what you *loved* doing when you were ~11, you spend your time doing what you love (and, equally importantly, not doing something that sucks up your energy), which increases your chances of being happier.

I think you completely missed the point of his anecdote. It’s not a scientific study, he is merely saying that at age 9-11, you’re old enough to have a decent understanding of what you’re interested in, but not old enough to start worrying about social and financial pressures and expectations.

And so the thing you were interested in at that age is probably similar to what you’ll be interested in now, if you remove social and financial expectations.


I wonder if you could test this. Maybe someone has a longitudinal study where they check what people thought they liked to do as kids against what they do as adults.

Sure. But it doesn’t really seem clear to me that selecting for intelligence actually results in a better world. It’s a fallacy to think that intelligence = more rational or immune to human flaws, as a cursory glance at any “intelligent” social group should make obvious.

I think we’d be better off optimizing for conscientiousness or empathy, frankly. Even a world run by gardeners would probably be more beautiful and meaningful than one run by math geniuses.

