> revolutionary breakthroughs in essentially all field
This doesn't really make sense outside computers. Since AI would be training itself, it needs to have the right answers, but as of now it doesn't really interact with the physical world. The most it could do is write code, and check things that have no room for interpretation, like speed, latency, percentage of errors, exceptions, etc.
But what other fields would it do this in? How can it make strides in biology when it can't dissect animals, and can't learn more about plants than what humans feed into the training data? Regarding math, math is human-defined. Humans said "addition does this", "this symbol means that", etc.
I just don't understand how AI could ever surpass anything humans have known before, when we live by rules defined by us.
"But when AI got finally access to a bank account and LinkedIn, the machines found the only source of hands it would ever need."
That's my bet at least: especially with remote work, if the machines were really superhuman, they could convince people to partner with them to do anything else.
It is interesting that, even before real AGI/ASI gets here, "the system wants what it wants": capitalism + computing/internet creates the conditions for an infinite amplification loop.
Feedback gain loops have a tendency to continue right up to the point they blow a circuit breaker or otherwise drive their operating substrate beyond linear conditions.
It starts to veer into sci-fi and I don't personally believe this is practically possible on any relevant timescale, but:
The idea is a sufficiently advanced AI could simulate.. everything. You don't need to interact with the physical world if you have a perfect model of it.
> But, what other fields would it do this in? How can it make strides in biology, it can't dissect animals ...
It doesn't need to dissect an animal if it has a perfect model of it that it can simulate. All potential genetic variations, all interactions between biological/chemical processes inside it, etc.
Didn't we prove that it is mathematically impossible to have a perfect simulation of everything though (i.e. chaos theory)? These AIs would actually have to conduct experiments in the real world to find out what is true. If anything this sounds like the modern (or futuristic version) of empiricism versus rationalism debate.
>It doesn't need to dissect an animal if it has a perfect model of it that it can simulate. All potential genetic variations, all interactions between biological/chemical processes inside it, etc.
Emphasis on perfection, easier said than done. Somehow this model was able to simulate millions of years of evolution so it could predict vestigial organs of unidentified species? We inherently cannot model how a triple pendulum will swing, but somehow this AI figured out how to simulate millions of years of evolution for unidentified species in the Amazon, and can tell you all of their organs with 100% certainty before anyone can check?
I feel like these AI doomers/optimists are going to be in a shock when they find out that (unfortunately) John Locke was right about empiricism, and that there is a reason we use experiments and evidence to figure out new information. Simulations are ultimately not enough for every single field.
It’s plausible in a sci-fi sort of way, but where does the model come from? After a hundred years of focused study we’re kinda beginning to understand what’s going on inside a fruit fly, how are we going to provide the machine with “a perfect model of all interactions between biological/chemical processes”?
If you had that perfect model, you’ve basically solved an entire field of science. There wouldn’t be a lot more to learn by plugging it into a computer afterwards.
Well, first, it would be so far beyond anything we can comprehend as intelligence that even asking that question is considered silly. An ant isn't asking us how we measure the acidity of the atmosphere. It would simply do it via some mechanism we can't implement or understand ourselves.
But, again with the caveats above: if we assume an AI that is infinitely more intelligent than us and capable of recursive self-improvement, to where its compute was made more powerful by factorial orders of magnitude, it could simply brute force (with a bit of derivation) everything it would need from the data currently available.
It could iteratively create trillions (or more) of simulations until it finds a model that matches all known observations.
> Well, first, it would be so far beyond anything we can comprehend as intelligence that even asking that question is considered silly.
This does not answer the question. The question is "how does it become this intelligent without being able to interact with the physical world in many varied and complex ways?". The answer cannot be "first, it is superintelligent". How does it reach superintelligence? How does recursive self-improvement yield superintelligence without the ability to richly interact with reality?
> it could simply brute force (with a bit of derivation) everything it would need from the data currently available. It could iteratively create trillions (or more) of simulations until it finds a model that matches all known observations.
This assumes that the digital encoding of all recorded observations is enough information for a system to create a perfect simulation of reality. I am quite certain that claim is not made on solid ground, it is highly speculative. I think it is extremely unlikely, given the very small number of things we've recorded relative to the space of possibilities, and the very many things we don't know because we don't have enough data.
>The idea is a sufficiently advanced AI could simulate.. everything
This is a demonstrably false assumption. Foundational results in chaos theory show that many processes require exponentially more compute to simulate for a linearly longer time period. For such processes, even if every atom in the observable universe was turned into a computer, they could only be simulated for a few seconds or minutes more, due to the nature of exponential growth. This is an incontrovertible mathematical law of the universe, the same way that it's fundamentally impossible to sort an arbitrary array in O(1) time.
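The sensitive dependence that makes long-horizon simulation intractable is easy to demonstrate. Here's a minimal sketch using the logistic map (a standard textbook chaotic system, chosen here as an illustration, not something from the original comment): two trajectories starting a trillionth apart diverge to completely different values within a few dozen steps, which is why each extra digit of initial-condition precision buys only a constant number of additional predictable steps.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r * x * (1 - x), which is chaotic at r = 4.0.
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map, returning the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)            # baseline trajectory
b = logistic_trajectory(0.3 + 1e-12)    # perturbed by one part in 10^12

# The gap grows roughly exponentially (the error about doubles per step),
# so by step ~40 the two trajectories are effectively unrelated.
for step in (0, 20, 40, 60):
    print(step, abs(a[step] - b[step]))
```

The practical upshot matches the comment above: to predict one more step accurately you need exponentially more precision in your model of the initial state, so no amount of compute rescues a long-horizon "perfect simulation" of a chaotic process.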
The counter-argument to this from the AI crowd would be that it's fundamentally impossible for _us_, with our goopy brains, to understand how to do it. Something that is factorial-orders-of-magnitude smarter and faster than us could figure it out.
We aren't that far away from AI that can interact with the physical world and run its own experiments. Robots in humanoid and other forms are getting good, and will be able to do everything humans can do in a few years.