The synthetic self
We like to say that LLMs are very different from humans. LLMs don't understand. LLMs don't feel.
At the same time, words are how we express understanding. So if a language model can produce the right words perfectly, wouldn't it be expressing understanding?
John Searle's Chinese room thought experiment gets at exactly this question.
(from Wikipedia) "Suppose that artificial intelligence research has succeeded in programming a computer to behave as if it understands Chinese. The machine accepts Chinese characters as input, carries out each instruction of the program step by step, and then produces Chinese characters as output. The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.
The questions at issue are these: does the machine actually understand the conversation, or is it just simulating the ability to understand the conversation? Does the machine have a mind in exactly the same sense that people do, or is it just acting as if it had a mind?"
The human and the machine in the thought experiment are just going through a set of steps; neither understands what it is doing.
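To make that concrete, here is a deliberately silly sketch in Python. The "rule book" and its entries are entirely made up (they are not from Searle or any real system), but the point is the same: the program matches input symbols to output symbols by rote, with no representation of what any of the characters mean.

```python
# A toy version of the Chinese-room setup: a made-up rule book that maps
# input symbols straight to output symbols. Nothing here "knows" Chinese.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    """Follow the rule book step by step; fall back to a stock reply."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # convincing output, zero understanding
```

A big enough rule book (or a big enough neural network) makes the output more convincing, but it doesn't obviously change the situation inside the room.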
What is the self? What makes us human?
Tony Prescott recently explored this by looking at what it would take to construct a synthetic self: a robot.
A robot, for example, needs to understand what is physically 'it' and what is not. It needs to have agency. And it needs to have the ability to understand itself 'through time', by having some sort of persistent memory.
I see parts of this in AI agents. ChatGPT, Claude, and Gemini each have a memory feature. Claude Code also has agents that can do things for you (agency). They don't, however, understand things the way we do.
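Here is a minimal sketch of what those two pieces (persistent memory and agency) look like in a typical agent loop. Everything in it is hypothetical: `call_llm` is a stand-in for whatever model API you use, not a real library call, and the `memory.json` file is just an assumed location for the persistent store.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # assumed location for the persistent store

def load_memory() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def save_memory(memory: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(memory))

def call_llm(prompt: str) -> str:
    # Placeholder: a real agent would call a model here and maybe use tools.
    return f"(model response to: {prompt!r})"

def agent_step(user_input: str) -> str:
    memory = load_memory()                     # the "self through time" part
    prompt = "\n".join(memory + [user_input])  # past context shapes behaviour
    reply = call_llm(prompt)                   # the "agency" part: deciding what to do
    memory.append(f"user: {user_input}")
    memory.append(f"agent: {reply}")
    save_memory(memory)                        # persists across sessions
    return reply

print(agent_step("Remind me what we talked about last time."))
```

The memory file survives between runs, so the agent can refer back to earlier conversations. Whether that amounts to a sense of self through time is exactly the question.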
That brings us to the problem: how do we know that another human has this understanding? Prescott uses Blade Runner as an example, with its fictional Voight-Kampff test, where emotionally provocative questions are used to tell a human from a replicant (the baseline test in Blade Runner 2049, with its 'Interlinked!' refrain, plays a similar role).
Isn't that weird? We can't really know whether anyone else experiences things the way we do. It's called the 'problem of other minds'. Something for another time.
As Prescott concludes:
"While LLMs may have no sense of self, their capacity to use self-referential language so fluentlt does provide a further insight – there may be no strong distinction between perceiver and what is perceived, beyond that which is constructed in language."
I think it's crazy how far we've come with generative AI, compared to how little we actually understand about our minds.