The technological struggles are in some ways beside the point. The financial bet on artificial general intelligence is so big that failure could cause a depression.
Well, think about it this way…
You could hit AGI by fastidiously simulating the biological wetware.
Except that each atom in the wetware is going to require n atoms' worth of silicon to simulate. Simulating ~10^26 atoms calls for a very, very large computer, maybe planet-sized; it's beyond the amount of memory you can even address with 64-bit pointers.
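A quick sanity check on that last claim. The one-byte-per-atom assumption here is mine and absurdly generous, since real per-atom state would need far more than a byte:

```python
# Back-of-envelope check (my numbers, not anything rigorous): how far past
# 64-bit address space is a 10^26-atom simulation, even at 1 byte per atom?
atoms = 10**26        # rough atom count of a human brain, ~1.4 kg of wetware
addressable = 2**64   # bytes reachable with a 64-bit pointer

print(f"64-bit address space: {addressable:.2e} bytes")  # ~1.84e+19
print(f"shortfall: {atoms / addressable:.1e}x")          # ~5.4e+06x too small
```

So even under that cartoonishly optimistic encoding, a flat 64-bit address space falls short by a factor of millions.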
General computing research (e.g. smaller feature sizes) reduces n, but eventually we hit the physical limits of computing. We might be getting uncomfortably close right now, barring fundamental developments in physics or electronics.
The goal of AGI research is to improve n faster than mere hardware improvements can. My personal concern is that LLMs aren't actually getting us much of an improvement on the AGI value of n. Likewise, LLMs still have many orders of magnitude fewer parameters than a human-brain simulation would, so many of the advantages that let us train a single LLM might not hold for an AGI model.
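To make "many orders of magnitude" concrete, here's a rough comparison; all three figures are my own ballpark assumptions, not established numbers:

```python
import math

# Rough orders-of-magnitude comparison (all figures are my ballpark assumptions):
llm_params = 10**12       # frontier LLMs are commonly estimated around 10^11-10^12 weights
brain_synapses = 10**14   # ~100 trillion synapses is a common neuroscience estimate
brain_atoms = 10**26      # atom count for a full wetware simulation, as above

# Gap to a synapse-level model: ~2 orders of magnitude
print(math.log10(brain_synapses / llm_params))  # 2.0
# Gap to a full atom-level simulation: ~14 orders of magnitude
print(math.log10(brain_atoms / llm_params))     # 14.0
```

Whether the relevant gap is 2 orders of magnitude or 14 depends entirely on how coarse an abstraction of the brain turns out to suffice, which is exactly the open question.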
Coming up with an AGI system that uses most of the energy and data-center space of a continent, and manages to be about as smart as a very dumb human or maybe just a smart monkey, is an achievement in AGI research. But it doesn't get you anywhere compared to the competition: accidentally making another human in a drunken one-night stand and feeding them an infinitesimal fraction of that continent's energy and data-center space.
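To put a rough number on "infinitesimal", a sketch with my own estimates (both figures are assumptions on my part):

```python
# Hedged illustration of the energy gap; both numbers are my rough estimates.
brain_watts = 20          # a human brain runs on roughly 20 W
continent_watts = 5e11    # continental-scale electricity use, order of hundreds of GW

# How many brains' worth of power a continent-sized AGI would consume
print(f"ratio: {continent_watts / brain_watts:.1e}")  # ~2.5e+10
```

Tens of billions to one, give or take, which is why matching a dumb human at continental power draw isn't a competitive milestone.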
I see this line of thinking as more useful as a thought experiment than as something we should actually do. Yes, we could theoretically map out a human brain and simulate it in extremely high detail; that's probably both inefficient and unnecessary. What it does do is get us past the idea that it's impossible to make a computer that can think like a human. Unless you invoke some kind of supernatural soul, there must be some theoretical way to do this. We just need to figure out how without simulating individual atoms.
It might be helpful to make one full brain simulation, so that we can start removing parts and seeing what needs to stay. I definitely don't think we should be mass-producing them, though.