Do you think an AI will ever be able to correctly answer a prompt like: "Prove Fermat's last theorem in a rigorous way. Produce a proof that can be checked by Coq, Isabelle, Mizar, or HOL in a format supported directly by any of them" and have its output really work and prove it? If so when?
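For reference, the *statement* of Fermat's Last Theorem is already easy to express in such a system; here is a sketch in Lean 4 syntax (the `sorry` is a placeholder marking the unproved gap, which is the entire difficulty the question is about):

```lean
-- Statement of Fermat's Last Theorem over the naturals.
-- Writing the statement is trivial; producing a kernel-checked
-- proof to replace `sorry` is the open challenge for AI.
theorem fermat_last_theorem :
    ∀ (a b c n : Nat), n > 2 → a > 0 → b > 0 → c > 0 →
      a ^ n + b ^ n ≠ c ^ n := by
  sorry
```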
Fermat's Last Theorem was proven by Andrew Wiles in 1994 after centuries of attempts. While artificial intelligence (AI) has made significant strides in mathematical research and theorem proving, the proof of Fermat's Last Theorem required a highly specialized understanding of mathematics, particularly in the field of number theory.
It's challenging to predict with certainty when AI might independently prove such a complex theorem. While AI systems can certainly assist mathematicians in exploring mathematical concepts and even discovering new theorems, the level of creativity, insight, and intuition required to solve problems like Fermat's Last Theorem is still largely within the domain of human mathematicians. However, AI may contribute indirectly by aiding in the exploration of vast mathematical spaces and providing tools for verification and validation of proofs.
I don't think this will happen in the near future, but "ever"? Almost certainly, unless humanity stops working on AI systems for some reason. The current approaches to AI have poor reasoning ability. This could be due to a mechanistic flaw, or it could simply be a matter of iteration, i.e., repeatedly trying things and getting feedback about why they are wrong, possibly internally. It's not clear that when we reason, our brains aren't implicitly and explicitly trying all sorts of things and discounting the ones that seem wrong (lots of neurochemical mechanisms are unclear here). Naive LLM inference isn't doing this, but an LLM can iterate within a larger system; it could be that a proof of a complex theorem requires an extreme version of that kind of iteration. Humans don't tend to spit out nontrivial proofs in one shot either, so there's currently a significant asymmetry between the amount of human brain computation that goes into a proof of a complex theorem and the silicon computation that goes into responding to a prompt.
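A toy sketch of that generate-and-verify iteration, in Python (all names here are hypothetical illustrations; in a real system the proposer would be an LLM and the verifier a proof-assistant kernel such as Lean's or Coq's):

```python
def propose(step):
    # Stand-in for an LLM proposing the next candidate proof attempt.
    # Here it just enumerates; a real proposer would use the failure
    # history as feedback.
    return step

def verify(candidate):
    # Stand-in for a formal proof checker accepting or rejecting
    # a candidate. Toy acceptance condition for illustration only.
    return candidate == 42

def search(max_steps=1000):
    """Generate-and-verify loop: propose, check, discard failures."""
    for step in range(max_steps):
        cand = propose(step)
        if verify(cand):
            return cand
    return None

print(search())  # finds the accepted candidate within the step budget
```

The point of the sketch is the loop structure, not the components: the verifier supplies the ground truth that naive one-shot inference lacks.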
But you're not using a^n+b^n=c^n in your argument at the bottom of p. 4. You just say "therefore g_1(n) divides a+b-c". My example shows that the implication doesn't follow in general. And it's not clear why it follows specifically if a, b, c are a solution to FLT.
I won't reply further to this question about g; I do think I've been clear. At this point you can be on your merry way still thinking it's wrong, but you simply misunderstood.
It's a proof by contradiction: g would divide a+b-c if a, b, and c are integers.
For n=2, g(2)=(c-a)(c-b)g_1(2) and g_1(2)=2.
So only when n=2 is it true that g divides a+b-c.
Otherwise we get a contradiction with the claim that it divides: since g_1(n) for n>2 is not a factor of a+b-c, we can safely assume at least one of a, b, c was not an integer.
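As a quick sanity check of the one case both sides seem to agree on, here is a short Python script (my own illustration, not part of the claimed proof) verifying that for n=2 solutions, i.e. Pythagorean triples, the stated value g_1(2)=2 really does divide a+b-c:

```python
# Enumerate all Pythagorean triples a^2 + b^2 = c^2 with c < 100
# and check that 2 (the claimed g_1(2)) divides a + b - c.
triples = [(a, b, c)
           for c in range(1, 100)
           for b in range(1, c)
           for a in range(1, b + 1)
           if a * a + b * b == c * c]

for a, b, c in triples:
    assert (a + b - c) % 2 == 0, (a, b, c)

print(f"2 divides a+b-c for all {len(triples)} triples checked")
```

This only confirms the n=2 divisibility, of course; it says nothing about whether the n>2 implication in the argument actually follows.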
It doesn't follow from anything on pp. 1-3, certainly not directly. If you were being genuine about this, I think you would appreciate an opportunity to improve the proof rather than resorting to insults!
I don't think any capitalist would want to fund that service. In my opinion, the AI movement is going to migrate to easier fields than math, such as killing people with drone swarms. It cannot grow into something more than one guy's farm of video cards.