Building the computer-simulated mind
A realistic machine mind needs preferences and memory, plus an appropriate body.
“Do Androids Dream of Electric Sheep?”
— Philip K. Dick
Zoe Graystone died in a terrorist bombing. Her father, a tech mogul, rebuilt her as a humanoid robot controlled by a digital avatar that Zoe had built herself from her online presence. The moment the avatar was activated, it panicked. Zoe's avatar expected to be a young woman, but the body was a massive mechanical thing. Failing to integrate the simulated mind with its body was a mistake that led to catastrophe.
This is the premise of the show “Caprica,” which first aired in 2009 as the prequel to the show Battlestar Galactica. While it didn't work out in the fictional world, I think it illustrates one path from the simulation of human language and art we have now to the simulation of a human mind in a human body.
Making a human-like mind
We've decided it’s not practical to dematerialize a brain and upload it to run on a computer. That would require an atomic-level simulation, one that would truly be a physical continuation of the brain in simulation, but such a simulation would be too large and probably never accurate enough to function over a long period of time.
So we're not going to move a mind; we're going to make a mind modeled on a particular person. We're left with creating a map at some level of detail and using it to simulate the responses of a real person. We already have this, generically, in the realm of language with our LLMs. I believe they actually surpass the conventional Turing test: when I interact with current LLMs, the only way I know there isn't a person typing back to me is that these simulations know more than anyone possibly could and respond faster than anyone could type, let alone organize their thoughts. Not only that, but these LLMs are talking to thousands, maybe millions, of users simultaneously, overcoming the serious bandwidth limits of the human brain.
This is a next level Turing test. Putting robotic appearance aside, how do we simulate a brain so that we can't tell the difference between our creation and a real brain? Just as with the LLM, it’s likely that once we're successful, its abilities will be superior to ours— faster and free of the limitations of biology in ways that may be obvious or may be subtle but extremely powerful.
To fully emulate a human brain, I would assume that the emulation will need to make use of a robotic form for sensory and motor feedback. It’s true that people can lack inputs, as with blindness or deafness, and still be fully functioning people. And awareness of some aspects of the body, like feeling your heartbeat, your level of exertion, or your body position, can come and go depending on context, and maybe isn't necessary for function if it's lacking.
However, since the essence of our conscious awareness is a map of the world "out there" and a sense of self as "me, in here," we want our model to include both those aspects to accurately emulate a human brain rather than just language or object recognition. I think it's widely recognized that our efforts to simulate the various abilities of the brain like speaking, reading, recognizing objects, sounds, and words are already quite advanced, often matching or surpassing human ability in a limited range of tasks. So taken as a whole, let’s assume that a lot of what we need is available in the parts bin, and the challenge is linking them together into a functioning system to simulate the brain as a whole and adding what's missing.
Adding valence and salience
Off the bat, I can think of a few things we've not yet simulated. We've been focusing on the tasks performed in the matrix of the cerebral cortex— object recognition, speech recognition and production— and not at all on the functions of the rest of the forebrain, which I'll summarize as valence and salience.
Valence is how we feel about any stimulus in the environment, whether it’s a real thing like an apple or an internally generated idea like a trip to the dentist. While a psychologist would talk about the emotional or motivational value assigned to it, how it makes me feel inside or whether I approach or avoid, I think of it like a tag. Our brain assesses everything: good or bad, attractive or aversive, beautiful or repulsive. These tags are vector-like, with multiple dimensions (tasty, attractive, useful) and strengths ranging from intense reaction to neutrality.
In the brain, valence is computed by structures like the amygdala, hypothalamus, ventral striatum, and dopaminergic midbrain nuclei, which together inform the cortex about rewards, punishments, homeostatic needs, and emotional relevance. These systems don’t just provide feelings; they determine our actions.
Without valence, there's no way to make decisions, because every object, every goal, every outcome is the same. You have to have relative values to choose this, not that. In our simulated person, we'd need to accurately emulate a person’s context-dependent utility functions for everything the simulation will encounter, so that it reacts as the person would, makes the same choices, and talks about the experience as the person would. This is more than just adding a layer of "emotion"; it is mapping the general human valence functions plus the idiosyncratic responses of an individual that come from life experience and the vagaries of genetic expression and developmental accident.
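To make the valence idea concrete, here's a minimal sketch in Python. It treats a valence tag as a small vector over hypothetical dimensions and collapses it into a single context-dependent utility score used to pick one option over another. The dimension names, weights, and numbers are illustrative assumptions, not a claim about how a real system would compute them.

```python
from dataclasses import dataclass

# Hypothetical valence dimensions; a real system would learn these rather than hard-code them.
DIMENSIONS = ("tasty", "useful", "threatening")

@dataclass
class Valence:
    """A vector-like tag: one signed strength per dimension, 0.0 meaning neutral."""
    tasty: float = 0.0
    useful: float = 0.0
    threatening: float = 0.0

def utility(valence: Valence, context_weights: dict) -> float:
    """Collapse a valence vector into one score, weighted by the current context."""
    return sum(context_weights.get(d, 0.0) * getattr(valence, d) for d in DIMENSIONS)

def choose(options: dict, context_weights: dict) -> str:
    """Pick the option with the highest context-dependent utility: this, not that."""
    return max(options, key=lambda name: utility(options[name], context_weights))

# When hungry, taste matters most and anything threatening is weighted strongly against.
hungry = {"tasty": 1.0, "useful": 0.2, "threatening": -2.0}
options = {
    "apple": Valence(tasty=0.8, useful=0.3),
    "trip_to_dentist": Valence(useful=0.6, threatening=0.4),
}
print(choose(options, hungry))  # -> apple
```

The point of the sketch is only that choice requires relative values; with a different context (not hungry, tooth aching) the same valence vectors would yield a different decision.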
The other critical aspect I think you need to add in to act human is salience, a brain signal of how important a stimulus is and thus how deserving of attention. Our robot needs to attend to novel events and, after evaluating them, either take action or move on; salient stimuli stand out and capture attention independent of whether they're good or bad. Salience is driven by novelty, contrast, unpredictability, and context, and is computed in brain systems like the superior colliculus, pulvinar, anterior insula, and salience network hubs in the anterior cingulate cortex and anterior prefrontal cortex. These regions guide the allocation of cognitive resources and help determine what enters working memory or conscious awareness. In artificial systems, salience could be implemented through attention mechanisms that dynamically shift focus based on changes in input streams, relevance to current tasks, or prior learning—for example, reorienting sensors or processing power toward a rapidly approaching object, even if its valence is unknown.
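In the same spirit, here is a toy sketch of salience as a signal separate from valence: a score driven by novelty and prediction error that decides what captures attention before we know whether a stimulus is good or bad. The novelty and surprise terms are stand-ins for whatever predictive model a real system would use.

```python
class SalienceTracker:
    """Scores attention-worthiness from novelty and prediction error, ignoring valence."""

    def __init__(self):
        self.seen_counts = {}  # how often each stimulus has been encountered

    def salience(self, stimulus: str, prediction_error: float) -> float:
        # Novelty decays with repeated exposure; surprise comes from the prediction error.
        count = self.seen_counts.get(stimulus, 0)
        self.seen_counts[stimulus] = count + 1
        novelty = 1.0 / (1.0 + count)
        return novelty + abs(prediction_error)

    def focus(self, stimuli: dict) -> str:
        """Reorient processing toward the most salient stimulus in the current input."""
        return max(stimuli, key=lambda s: self.salience(s, stimuli[s]))

tracker = SalienceTracker()
# A fast-approaching object carries a large prediction error even if its valence is unknown.
print(tracker.focus({"familiar_hum": 0.1, "fast_approaching_object": 2.5}))
```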
What about personal identity?
I think that would give you most of a generic human if you had generic preferences and generic alerting to stimuli. That genericness is one of the annoyances I have with current LLMs. They are totally mid. These chatbots will generally present the advantages and disadvantages of choices but not come down on one side or the other. Even if I ask whether fecal matter is disgusting, they agree for the most part but have to point out that for medical testing and some other purposes it can be quite valuable. Plus, they always compliment my own opinions, which I'm sure they've found increases loyalty to the service.
But if we want to replicate a particular individual, we need to emulate their preferences, not a generic human's. My favorite example of this is the show “Caprica,” which was produced after Battlestar Galactica but serves as a prequel, telling the story of the creation of the humanoid robot adversaries of BSG, the Cylons.
The Cylons of BSG, especially the humanoid models, are portrayed as fully conscious but enhanced machine minds. They experience emotions, form complex relationships, question their existence, and make moral decisions. Being self-aware tends to get in the way, especially when faith and guilt raise questions for them in the war against mankind.
In Caprica, we learn that the Cylons originated with a humanoid robot created to replace a lost child. Zoe Graystone died in a terrorist bombing. There's no question of continuity, because she's dead. Daniel Graystone, her father, is a wealthy tech mogul who seems to be working on robots with the kind of neuromorphic machine control we're talking about. Daniel discovers that his daughter had already created a personal avatar built from online data and a personality simulation.
So let’s just imagine that what Zoe had was a logical next step from our LLMs: a personalized agentic model for online use that can simulate an individual's likes and dislikes, basic valences paired with a solid set of memories. It’s not so far-fetched to look at purchases, actions, and speech to derive revealed preferences, rather than asking endless questions that may not be answered entirely honestly. By using that data, I think you could come up with a passable simulation of a person.
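As a rough sketch of what revealed preferences might look like in code, the snippet below aggregates a log of observed behavior into per-item preference scores rather than asking the person directly. The event sources and their weights are made-up assumptions for illustration.

```python
from collections import defaultdict

# Illustrative weights: positive sources suggest approach, negative ones suggest avoidance.
SOURCE_WEIGHTS = {
    "purchase": 1.0,
    "repeat_purchase": 1.5,
    "stated_opinion": 0.5,
    "avoided": -1.0,
}

def revealed_preferences(events):
    """Turn a log of (item, source) observations into per-item preference scores."""
    scores = defaultdict(float)
    for item, source in events:
        scores[item] += SOURCE_WEIGHTS.get(source, 0.0)
    return dict(scores)

log = [
    ("coffee", "purchase"),
    ("coffee", "repeat_purchase"),
    ("coffee", "stated_opinion"),
    ("dentist", "avoided"),
]
print(revealed_preferences(log))  # {'coffee': 3.0, 'dentist': -1.0}
```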
Personal identity as avatar
Maybe the avatar is the final component of the neuromorphic brain: the personal identity module, with memories and preferences. It can inform the more generic sensory, motor, and executive function modules about what to approach and what to avoid. What to fear and what to appreciate. Who to like and who to dislike. If new experiences feed into learning, the avatar will drift from the original over time, just like the Star Trek transporter copies we talked about a few weeks ago. It starts with history and a snapshot, then evolves from there. But I'm satisfied that this is a pretty good human emulation in robotic form, with a control system that simulates human brain function without the messy scanning and atomic or network-level recreation of a brain. It operates at an appropriate level of mapping, like our current LLMs.
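To show how an avatar might sit on top of the generic modules, here is a hedged sketch: a personal identity layer that holds a snapshot of preferences and memories, answers approach-or-avoid queries from the executive module, and updates from new experience so that it drifts from the original over time. The names and the update rule are hypothetical.

```python
class Avatar:
    """Personal identity module: a preference snapshot plus a memory log,
    consulted by the generic sensory, motor, and executive modules."""

    def __init__(self, name, preferences, memories):
        self.name = name
        self.preferences = dict(preferences)  # start from the snapshot
        self.memories = list(memories)        # personal history

    def appraise(self, stimulus: str) -> str:
        """Tell the executive module whether to approach, avoid, or investigate."""
        score = self.preferences.get(stimulus, 0.0)
        if score > 0.2:
            return "approach"
        if score < -0.2:
            return "avoid"
        return "investigate"

    def learn(self, stimulus: str, outcome: float, rate: float = 0.1):
        """New experience nudges a preference, so the avatar drifts from the original."""
        current = self.preferences.get(stimulus, 0.0)
        self.preferences[stimulus] = current + rate * (outcome - current)
        self.memories.append(f"{stimulus}: outcome {outcome:+.2f}")

zoe = Avatar("zoe", {"music": 0.9, "crowds": -0.5}, ["built my own avatar"])
print(zoe.appraise("music"))                 # approach
zoe.learn("crowds", outcome=0.3)             # one good experience softens the aversion a little
print(round(zoe.preferences["crowds"], 2))   # -0.42
```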
Unfortunately, things didn't go as expected for Zoe, or more precisely, for the U-87 prototype. The show portrays the robot as immediately self-aware. Zoe had died. Her avatar was not in Zoe's body but in a bulky robotic form. Whether such a construct could ever be self-aware is unknowable. I've said here more than once that I understand awareness as the experience of a human brain, the kind I have and know the people around me have. I don't know what subjective experience might be like for my dog or an octopus or an alien.
But whether or not she was actually self-aware, the control system and its body were not coherent. If my avatar were in a robot, it would be shocked, because my avatar expects to be a short-bearded elderly guy and, when queried, would probably note the difference. Because Zoe's avatar was instantiated in a neuromorphic brain with the expectations, goals, and emotions of a young woman, one can see how severe body dysmorphia might result. Presumably, this psychology carried through Cylon history and was at the root of the conflict with humans. I think BSG is the story of the two creations, biological and cybernetic, coming to grips with their relationship and purpose.
We're there right now with our current ability to emulate human behavior in language and art. In many ways, as we'll discuss, the abilities of our LLMs and image-generation systems surpass humans in certain aspects, but certainly not in others. Extrapolating progress, it seems likely we'll soon have systems with more agency, able to more directly emulate human goals. Probably not with the psychological baggage of the Cylons, if we're smart, but as extremely useful tools where we complement each other in ability and enhance human achievement.