The Myth of the Uploaded Mind
Persistence of personal identity and consciousness isn't possible in simulation. The map is not the territory.
Back to my own photos this week. Really can’t abide the AI illustration trend at this point. Fun, but not close enough to use.
“He was almost incapable of general, platonic ideas. It was not only difficult for him to see that the generic symbol ‘dog’ took in all the dissimilar individuals of all shapes and sizes, it irritated him that the ‘dog’ of three fourteen (seen from the side) should have the same name as the dog of three fifteen (seen from the front).”
“To think is to forget a difference, to generalize, to abstract. In the overly replete world of Funes, there were nothing but details, almost contiguous details.”
— Jorge Luis Borges, Funes the Memorious
I've been exploring the nature of personal identity. In the last few weeks, we've been looking at the preservation of identity using some classic science fiction scenarios to understand a bit more deeply how continuity of an identity is maintained. In the cases we looked at, the brain is dematerialized into energy or information and then reconstructed somewhere else, some time later. Questions about duplicated minds arise, but for the most part, it seems that personal identity is preserved by physical continuity: identity is continuous within the brain, and the brain, or at least its contents, is what moves.
But what about this idea of mind uploading? That would involve dematerialization of the brain and reconstitution in a different substrate, namely some sort of digital computer. I'm pretty convinced that this mind upload would not preserve personal identity. Consciousness would end and not resume in the computer. Any awareness of that computer version would be something different. The subject gets pretty technical, so stay with me here.
Thinking about the thinking computer
First, let's be clear about what we're talking about physically. Right now, I'm experiencing my own personal identity with conscious awareness because I have a typical, healthy human brain. I've already outlined the basic apparatus in previous posts, but the basic idea is this: in the evolutionary transition to vertebrates living on land, as amphibians gave rise to reptiles, one of the novel developments was a supervisory structure in the brain. A central nucleus, the thalamus, acts as a controller of activation for an overlying map, which in reptiles (and then birds) is called the pallium but in mammals is greatly elaborated as the cerebral cortex. We've learned that this thalamocortical network is the mechanism for generating not only awareness but also the coordination of what we experience as a single channel of attention: the way we can focus exclusively on a particular motor act, a sound, or a visual feature. We knit the pieces together into what seems to be an integrated environment, with a physical world out there and a self and body in here.
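To make that controller-and-map picture concrete, here is a deliberately cartoonish sketch of thalamic gating as winner-take-all selection over competing input channels. It is only an illustration of the single-channel idea, not a model of real circuitry; the channel names and salience scores are invented.

```python
# A cartoon of thalamic gating as winner-take-all selection.
# Illustrates the single-channel idea only; channel names are invented.

def thalamic_gate(channels: dict[str, float]) -> tuple[str, float]:
    """Pass through only the most salient input channel."""
    winner = max(channels, key=channels.get)
    return winner, channels[winner]

inputs = {"vision": 0.7, "hearing": 0.9, "touch": 0.2}  # salience scores
focus, signal = thalamic_gate(inputs)
print(f"cortex receives: {focus} ({signal})")  # hearing (0.9)
```

The real network does vastly more, but the cartoon captures the one-channel-at-a-time character of conscious focus.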
Our consciousness as we experience it arises from a particular set of brain structures and networks that function to create this single-channel, focused awareness. I have no idea what kind of experience an LLM, a rock, or an octopus might have, since I can only experience what it's like to have a human brain. By analogy, I assume everyone else with a normal human brain experiences more or less what I do, with lots of variation, some of it benign (perfect pitch, synesthesia) and some of it pathological (psychosis, color blindness). I've discussed how I can assume that any vertebrate with a similar set of brain structures has some analogous experience, so I assume my dog has awareness, but I can only imagine how her experience compares to my own.
Now, we know that these brains are complex at every level, from chemistry to genetics to proteins to membrane structure to signaling to connections and structure. This particular set of complex interactions is the only one we experience. The only one we can experience.
Can only brains support minds?
So we wonder, quite naturally, whether this requires the brain as constructed, or whether we could get the same function in a computer. Does the magic trick only work in "wetware"? Could you build a brain out of electronic circuits that recreates all of the functions, reconstituting a brain in a different substrate?
In theory, it seems the process should be independent of substrate. If there’s nothing beyond physical processes, no spirit or soul, there's no a priori requirement for neurons, glia, and synapses, for neurotransmitters, ion channels, or second messengers. But that's the way we're built, so we're sure that this is at least one way to do it.
Once again, this is my simple physicalist concept of awareness as an emergent phenomenon that arises from being a brain. There's another common view that brain functions are calculations, so that to reproduce brain functions, one need only reproduce the input-output patterns. This patternist assumption reduces the complex system of the brain to algorithms.
I don't think the complexity can be reduced this way. We can't see into the brain. Nothing is labeled, and it isn't organized in a way that can be scanned and reproduced as calculations. An LLM is a model of language as used by brains, but it's not the brain's use of language. The fact that the model simulates human responses is both useful and surprising, but it's not reproducing a brain or the way humans use language. The LLM is not an isomorphic system; it is just functionally equivalent. This is the Turing test of emulation, not recreation. Without the structure and function of a thalamocortical network, I'm not willing to entertain that it has awareness that resembles my own. Again, just like an octopus or a rock. It could be aware in a different way, but I'll never know.
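The distinction between functional equivalence and isomorphism is worth making concrete: two systems can agree on every input-output pair while sharing no internal structure at all. A minimal sketch in Python, with a toy arithmetic task standing in for language:

```python
# Two "systems" with identical input-output behavior but entirely
# different internals: one computes, the other just looks answers up.
# Functionally equivalent on this domain; structurally nothing alike.

def squared_by_computation(n: int) -> int:
    return n * n  # an actual multiplication happens here

LOOKUP = {n: n * n for n in range(100)}  # precomputed answers

def squared_by_lookup(n: int) -> int:
    return LOOKUP[n]  # no arithmetic happens here at all

assert all(squared_by_computation(n) == squared_by_lookup(n)
           for n in range(100))
```

A behavioral test, which is all the Turing test is, certifies the first kind of sameness and says nothing about the second.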
When you change the substrate
Okay, so if you wanted to construct a brain like ours with a different substrate, an isomorphic mind, let’s think about how we would go about it.
Last week, I used the example of Sylveste in Alastair Reynolds's Revelation Space universe. Sylveste uploaded minds to computers by scanning brains at some unimaginably high resolution and then simulating the scanned brain to achieve immortality. To scan at this resolution, it was necessary to destroy the brain in the process: the person died to create a brain scan that would live on in a computer. Clearly, continuity of personal identity was the goal. We want an effect like general anesthesia or the Star Trek transporter, where you lose awareness in one place and regain it, after a gap in time, somewhere else.
If you were going to scan like Sylveste, what resolution would you need? Naively, you might think a complete map of connections would be enough. But we know that the shape of a neuron and the location of each synapse on its surface have a profound effect on the strength and duration of an input, so you'd need complete anatomy, not just connections. And at the next level down, each synapse releases one or more transmitters and growth factors. So no, anatomy is not enough; you need to map down to the molecular level.
Of course, as we go down from the anatomical level to the molecular, we see more and more complications. How do you map synaptic strength? Harder still, how do you map the activity-dependent control of synaptic strength that supports vital functions like memory, through mechanisms like long-term potentiation? So map the receptors, all of the second messengers, and the ion channels. There are also the gene-regulatory proteins supporting these responses.
One ends up requiring a snapshot at the atomic level, mapping every atom and its position in every molecule in the entire brain; a rough sense of that scale is sketched below. And don't forget that being human includes the distribution of circuitry for affect, motor control, and sensation into the brainstem, spinal cord, ears, and eyes. Then there's the interface problem. One doesn't want to end up with a brain in a sensory deprivation tank, so we'd need to simulate a responsive environment, unless the upload is simply connected to a robot in the real environment. No wonder so many of the uploads that Sylveste created ended up in madness.
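How big is an atomic-level snapshot? A back-of-envelope estimate in Python; the atom count and bytes-per-atom figures are order-of-magnitude assumptions, nothing more:

```python
# Rough scale of an atomic-level brain snapshot.
# All numbers are order-of-magnitude assumptions, not measurements.

brain_mass_kg = 1.4                   # typical adult human brain
avg_atomic_mass_kg = 6.0 * 1.66e-27   # mostly water: roughly 6 daltons/atom
atoms = brain_mass_kg / avg_atomic_mass_kg      # ~1.4e26 atoms

bytes_per_atom = 32                   # position + element + bonding, say
snapshot_bytes = atoms * bytes_per_atom         # ~4.5e27 bytes

print(f"atoms: {atoms:.1e}")                    # ~1.4e+26
print(f"snapshot: {snapshot_bytes:.1e} bytes")  # millions of zettabytes
```

Millions of zettabytes for a single frozen frame, before simulating any dynamics at all; worldwide data storage is measured in zettabytes, not millions of them.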
The map is not the territory
I think that, following this train of thought, the scan is really no different from transporter dematerialization, which provided for continuity of personal identity much more simply. The scanning and destruction of the brain create a map that is somehow used to reconstruct a snapshot of the physical artifact, with the twist that the process now moves reality into a computer. Since you haven't abstracted away from the atomic level, this is of necessity an atomic-level simulation. No wonder so many of the uploads degraded and ceased to function.
In essence, the uploaded mind is reality itself, physically reproduced in the form of computer logic. Now, I don't think anyone is considering an atomic-level model when they think about uploading a brain. But could you upload a cubic meter of beach? If you scanned every grain of sand? You'd need the shape and weight of every individual grain. Of course, no one would do this in a modeling or simulation setting. They'd model a distribution of sand grains and their interactions and come up with a pretty good predictive model of a pile of sand. How high could a pile be? How far would it spread if dropped from a meter?
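Those last two questions are exactly the kind a statistical model answers without tracking a single grain. For a conical pile, the material's angle of repose fixes the geometry, so height follows directly from volume. A sketch in Python; the 34-degree angle is a typical figure for dry sand:

```python
import math

def pile_height(volume_m3: float, angle_of_repose_deg: float = 34.0) -> float:
    """Height of a conical pile of granular material.

    A cone with base radius r and height h = r * tan(theta) has volume
    V = (pi/3) * r**2 * h, so h = (3 * V * tan(theta)**2 / pi) ** (1/3).
    """
    t = math.tan(math.radians(angle_of_repose_deg))
    return (3.0 * volume_m3 * t * t / math.pi) ** (1.0 / 3.0)

h = pile_height(1.0)                  # one cubic meter of dry sand
r = h / math.tan(math.radians(34.0))  # base radius of the pile
print(f"height ~{h:.2f} m, radius ~{r:.2f} m")  # ~0.76 m high, ~1.12 m across
```

Not a grain of actual sand anywhere in the calculation, and yet the prediction is useful. That is what a model is for.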
When we contemplate mind uploading, we're not thinking about a model at the level of atomic interactions, but rather an abstraction simplified down to some level, probably connections and maybe synaptic strengths. That would be a neuronal network simulation, just one modeled on an actual individual. It is a map, not the brain itself.
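Concretely, "connections plus synaptic strengths" means something like the point-neuron simulations computational neuroscientists already run. A minimal leaky integrate-and-fire sketch in Python with NumPy; all sizes and constants here are arbitrary illustrations:

```python
# Minimal leaky integrate-and-fire network: roughly the level of
# abstraction a "connectome + synaptic strengths" upload would capture.
# All constants are arbitrary; this illustrates the abstraction, not a brain.
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 100, 500                    # neurons, 1 ms time steps
DT, TAU = 1e-3, 20e-3                  # step size, membrane time constant (s)
V_THRESH, V_RESET = 1.0, 0.0

W = rng.normal(0.0, 0.3, (N, N)) / np.sqrt(N)  # the entire "scan": one matrix
v = np.zeros(N)                                # membrane potentials
fired = 0

for _ in range(STEPS):
    spikes = v >= V_THRESH             # threshold crossings fire...
    v[spikes] = V_RESET                # ...and reset
    fired += int(spikes.sum())
    v += -v * DT / TAU                 # passive leak toward rest
    v += W @ spikes.astype(float)      # weighted input from spiking peers
    v += rng.normal(0.0, 0.1, N)       # stand-in for all external drive

print(f"{fired} spikes across {N} neurons in {STEPS} ms")
```

Every neuron in this sketch is a single number; everything the scan was supposed to capture below that level has been discarded.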
And we know that the map is not the territory. Any simulation discards detail in order to be predictive of the actual system. But a hurricane model is not wet, no matter how realistic or predictive: you get a map of the likely hurricane path, not a hurricane. And the map still has to be compared against the territory to mean anything. Quite simply, you always end up with a simulation of brain circuits, not anything approaching a simulation of mind. Our LLMs are a great illustration that this kind of functional re-creation is not going to give you a mind. The output of the LLM needs our minds to understand what it means.