IMHO the proliferation of "lucky" does more harm than good. It's factually correct but bad policy to emphasize it if you want the most people to avoid learned helplessness.
Some might be motivated knowing the only thing that separates them from greatness is luck, but more often I saw students feel defeated.
It is not factually correct that free will is an illusion. It is just a difficult subject to discuss.
Why difficult?
Discussion that our culture considers rational descends from argument techniques rooted in Aristotelian ideas about logic. However, as Kurt Gödel showed, formal systems break down under self-reference. Through Turing we can see this kind of thing starting to happen when two Turing Machines need to predict each other's output.
Through John von Neumann and Nash we get a proof that non-deterministic policy functions are optimal in multi-agent decision problems. So we find a concept analogous to free will, and we find it justified in this abstract domain not on the basis of some wishy-washy notion, but on the basis of analogical reasoning (logic) and causal outcome modeling (dynamics functions).
So what do we reject to claim it is an illusion? Logic? Or causality?
It’s often difficult to discuss free will, among other reasons, because it’s ultimately a philosophical concern and people today tend to lack philosophical literacy—they unquestioningly adopt whatever belief is a nail to the hammer of their education and upbringing: the STEM crowd here will by default go with physicalist monism; those with a more religious upbringing will adopt dualism; and in the end all will be easily aggravated if you question how they got there and assert that alternatives are equally possible (and, in the case of STEM, perfectly compatible with the natural sciences, being entirely outside their scope through Gödel’s incompleteness if nothing else).
I think philosophical literacy is probably higher now than at any point in the past. For one, more philosophical writing exists now than at any earlier time. For another, selection effects have weeded out much of the writing that was bad. For yet another, a far greater percentage of the population is literate than in the past. For yet another, the population is healthier, wealthier, and generally better able to apply themselves to the material than they were in the past. For yet another, being after rather than before the Enlightenment, questioning of assumptions is actually a much more prevalent reflex now than it was in the past. For yet another, our information retrieval systems help us do a better job of organizing and discovering the relevant writing.
I also think philosophical literacy is less relevant than it was in the past. For example, we are talking about how an agent's decision making algorithm is in actuality. Is philosophy the best subject for discussing this? Control theory, learning theory, game theory, and scientific investigation into the brain's processes are all mature enough that using them produces more rewarding outcomes.
So let's say we see what at first glance appears to be degenerate twitch chat zoomer spam about philosophy sus no cap? Are they really philosophically unsophisticated? Or were they raised in a world of such greater philosophical sophistication that self-replicating knowledge structures - like, say, memes, were something they had extreme and constant exposure to? What if they are engaging in some sort of coordinated omegalul gambit?
I often find that explaining observed incompetence with genius works better than explaining observed incompetence with incompetence. So I'm more fond of claiming a problem is hard for good reasons than that people just weren't educated - which is also often true, by the way; I'm just explaining why I ended up putting the emphasis somewhere else.
> For example, we are talking about how an agent's decision making algorithm is in actuality.
What is an “agent”? In a deterministic universe without free will, whatever “agent” means is likely nonsensical, so if you are discussing whether or not the universe is deterministic, then presuming the existence of an agent (with agency) is a mistake. So no, in a discussion on whether the universe is deterministic we would not be talking about that, whatever that is, since it apparently presumes the existence of an agent.
> Are they really philosophically unsophisticated? Or were they raised in a world of such greater philosophical sophistication that self-replicating knowledge structures - like, say, memes, were something they had extreme and constant exposure to? What if they are engaging in some sort of coordinated omegalul gambit?
I didn’t mean just newer generations by “people today”. Discussing this with older people is as difficult and they tend to equally just make assumptions instead of reasoning. I like discussing topics like these the most with a gen Z philosophy professor a few years younger than me.
I'm not arguing for nondeterministic universes. Within a universe, agents see computationally irreducible phenomena. It is just game-theoretic decision problems embedded in cellular automata. The agent which says it could have decided another way is correct - its modeling of the problem did in fact have that property. The agents which think this is insane and improper modeling are suffering from a hindsight fallacy.
Try considering a Turing Machine which is running several Turing Machines within it. Call it A. Within it are B and C. B gets an input state that is the entire Turing Machine. Meanwhile C gets the output of B. There are multiple B and multiple BC. So we get something like this when we write it out:
A = B + B + B + BC + BC + BC
You might be tempted to say that A = A, therefore A is deterministic. This would even be logically true. However, it is actually far more dangerous than it appears. Why? Well, you aren't A. You are within A. Let's say you are B within A. Does it matter that you know the dynamics function? The system is deterministic, but does it follow from its being deterministic that it is determinable from inside? No! Even though B is within the deterministic system A, B cannot claim that A is deterministic, because B cannot claim that C is deterministic, because C is indeterminable by B through self-reference.
Now the standard mistake is to let B run, then let C run, then pretend B could have determined C all along, because now, obviously, we can tell that A was determined because C was determined. Which sounds really compelling, but ask yourself this to see why it isn't as great as you think it is: is the universe still subject to a deterministic dynamics function right now? If so, can you tell me what C is, not in the toy problem, but in the real world?
Like, I realize this is an impossible problem: I'm asking you for the current configuration, over the course of the next second, of all matter in our universe, even the parts you can't model because you haven't observed them. But that is kind of my point. The universe hasn't stopped running yet. You can't determine C from the information context of B.
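The B/C deadlock can be compressed into a few lines of Python. This is my own toy construction, not a full Turing machine model: B is a predictor, and C is defined, diagonalization-style, to consult the predictor about itself and do the opposite.

```python
# Toy sketch (my construction): a predictor B and a machine C that
# inverts whatever B predicts about it. Everything is deterministic,
# yet B can never be right about C from inside the system.

def C(predictor):
    """C asks the predictor what C will output, then does the opposite."""
    guess = predictor(C)   # the predictor's forecast of C's output
    return 1 - guess       # C falsifies that forecast

def B(machine):
    """B's best attempt at prediction. Any fixed rule suffers the same
    fate; here B always guesses 0."""
    return 0

print(B(C))  # B predicts 0...
print(C(B))  # ...but C outputs 1: determinism without determinability
```

Swap B's rule for anything more clever and C still wins, because C gets to run the rule first. That is the self-reference the comment is pointing at.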
Now let's say you try to counter: ah, but A is still A, so therefore all the other stuff is meaningless.
It seems like it works, but let's talk about the parable of the time you were given a deterministic dynamics function and computed it exactly as given, because you wanted to claim it was deterministic by treating it as what it was.
So you start calculating B + B + B + BC + BC + BC. And you see me looking at the same problem, and, idiot that I am, you see me write down A = A' = BC. Then I get ready to solve it. And you're like, pfft, what an idiot. He isn't even talking about, like, actual reality.
Unfortunately, in the calculations that follow, you realize something strange: though you may B, I C you. And because I see you before you be, such that I can determine what C should be, such that you cannot see C while you B be, I choose a C for your B so that your B doesn't see C.
Basically, in choosing the slowest path, you also choose to be determinable, but in choosing to be determinable, you implicitly make yourself vulnerable to an A' that makes your context - during the process - undecidable.
You thought choosing A saved you, but actually it was your curse:
You let me set A = B + B + B + A' because you didn't want to claim that we could divide A into the pieces of which it was in actuality composed. But because it was in actuality composed of those pieces, BC is now calculable by A', such that your conceit is actually what traps you in the non-deterministic perspective. If B was calculated not via A's conception of B but via A', then now it isn't just B that can't predict C. C probably can't predict B either.
So let's escape this! Let's say the process ends! Now B has fully determined itself such that C is determined, and now A is determined as well.
Is A now deterministic? No, A isn't anything anymore.
Nothing is happening.
A isn't now deterministic; A was determined, and now our physics has stopped. Where is the determinism? Not within A, because clearly, while A was running, B wasn't determinable, because of self-reference to C. Not afterward either, because now it isn't doing anything.
It really isn't a problem. The analogical congruence holds.
Typically, free will is defined as taking an action according to one's own desire, without the constraint of necessity.
In the game-theoretic model whose solution Nash showed optimal, the deciding the agent has to do isn't over actions. It is over strategies. That might be a bit confusing, so let's just use an example.
If you play rock paper scissors, the agent isn't modeling the problem as an action choice among rock, paper, or scissors. It is actually modeling it as an action choice over probability vectors. The action choice of rock corresponds to the probability vector [1 0 0], paper to [0 1 0], and scissors to [0 0 1]. There are an infinite number of different policies, but it turns out that the rational one to pick is [1/3 1/3 1/3], playing each option with equal probability.
Notice that here the agent is taking an action, but not one constrained by necessity. Notice that the choice is due to the agent's modeling of its own preferences. Compare that with the definition of free will. The same thing is happening.
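The [1/3 1/3 1/3] claim is easy to check numerically. Here is a small sketch of my own, using the standard rock-paper-scissors payoff matrix, showing that the uniform mix is unexploitable while any pure (deterministic) policy hands the opponent a winning best response:

```python
import numpy as np

# Rock-paper-scissors payoff for the row player.
# Rows = our action, cols = opponent action, order (rock, paper, scissors);
# +1 win, -1 loss, 0 tie.
payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

uniform = np.array([1/3, 1/3, 1/3])  # the Nash mixed strategy
rock    = np.array([1.0, 0.0, 0.0])  # a deterministic policy

# Expected payoff against each of the opponent's pure strategies:
print(uniform @ payoff)  # [0. 0. 0.]  -- nothing exploits the mix
print(rock @ payoff)     # [ 0. -1.  1.] -- opponent best-responds with paper
```

The uniform vector is the unique equilibrium here: any deviation gives the opponent a column with a strictly positive payoff to aim at.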
So the congruence does hold, but what makes it appear not to hold is that most people rejecting free will neglect computationally irreducible phenomena. In actuality, computationally irreducible functions which allow for stochastic signals show up in cellular automata without any requirement of dualism. Selection and variation are then more than enough to show that surviving agents will protect these information sources so as to keep them unobserved, because failing to do that isn't optimal, and so agents which don't aren't selected for.
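A concrete instance of that claim, sketched under my own choice of example: Wolfram's Rule 30 cellular automaton is fully deterministic, yet its center column behaves like a stochastic signal and is believed to be computationally irreducible.

```python
# Rule 30: new cell = left XOR (center OR right). Deterministic update,
# yet the center column from a single seeded cell looks random.

def rule30_center_bits(steps, width=201):
    row = [0] * width
    row[width // 2] = 1                       # single live cell in the middle
    bits = []
    for _ in range(steps):
        bits.append(row[width // 2])          # record the center column
        row = [row[i - 1] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]         # periodic boundary
    return bits

bits = rule30_center_bits(64)
print("".join(map(str, bits)))  # a bit stream with no known shortcut formula
```

An agent whose "noise" comes from a process like this is deterministic from the outside, but an observer who can't run the whole computation faster than the agent itself gains nothing from that fact.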
> Ah, this is where the problem starts.
We could try to reject, not the analogical congruence, which holds, but the system of using analogical reasoning in the first place - but this goes badly. The first big problem is that all of our knowledge comes via theory-laden proxies, and removing analogical validity removes the validity of evidence in general. A stranger result is that compression is justified through analogical congruence. So you can no longer claim to have knowledge of the state and dynamics function, because you only have knowledge of a compressed form of it. To claim knowledge, you would now need to physically be it.
> Ah, this is where the problem starts.
There are many invisible octopuses. Fish are going to encounter decision problems wherein these camouflaged predators are on both their right and their left, yet the decision context shows them the exact same thing in both cases. So which should they pick? Always left? Then they always die. Always right? Then they always die. The only winning solution is to pick both, because then sometimes the agent lives to reproduce. How do we pick both? Well, if we do it in a way that is computationally reducible, then the octopus which invisibly observes the fish can anticipate it. So now the octopus is wherever the fish decides to go. So the fish has to decide to do both, but it has to do so in a way that is unobservable. This decision problem doesn't go away when someone chants the magic word of dualism. The agent still needs to decide in an unpredictable manner or it is going to die. So when optimization processes build things? They end up approximating a solution to this problem.
This type of decision problem has played out billions of times. It has played out over billions of years. I don't know which instance was the first, but wherever it was, that is where the problem really starts. And from that problem onward a filter has been killing the things that answer incorrectly.
You make an interesting point, and I think which side of it is correct depends on what the end goal is. It definitely doesn't help people, particularly children, to tell them they're "lucky" or "gifted" in a particular ability. Early in life it means that they can downplay the effort needed to develop that ability, and later in life, when they hit the actually difficult bits, it can result in them assuming they must just not be as good at it as everyone says, rather than applying themselves to understanding.
There is also value in people understanding that not everything is down to innate ability and application, though. That road leads to people looking down on the impoverished because they "should have just put more effort in": get a better job, learn a skill - regardless of the fact that their situation means they're already working three jobs just to make enough money to ensure their kids can eat tonight.
As with so many things people have polarised. You're either in Camp Skill & Dedication or Camp Luck & Circumstance. In reality it's always a bit of both, with luck giving some people an advantage when it comes to having the time and space to dedicate themselves to developing their skills.