Transcript

In this riveting episode, Malcolm and Simone delve into groundbreaking research suggesting that the human brain functions similarly to large language models (LLMs). They challenge the idea of sentience, proposing that our consciousness may be an illusion crafted by a token-predicting brain. They explore experimental evidence, including split-brain studies, choice blindness experiments, and neurosurgical stimulation studies, to highlight how our internal narratives and decisions are often post-rationalized. The episode uncovers the astonishing parallels between AI and human brain architecture, advocating for a reevaluation of what makes us human and the ethical implications of this understanding for AI. Dive into a thought-provoking discussion that bridges neuroscience and AI, debunking myths about human cognition and sentience.

[00:00:00]

Malcolm Collins: Hello, Simone! Today is going to be an exciting episode. I implore our listeners to stop anthropomorphizing humans.

Simone Collins: Oh, but seriously, actually though. But seriously and actually,

Malcolm Collins: this is going to be a real study heavy episode. We're going to be going over a lot of research and a lot of data.

And if you do not come into this believing that the human brain, or at least large parts of it, is just a token predictor working in a way architecturally similar to AIs (we know where the differences in architecture are, and we'll go into that), I'm fairly sure I'll convince most people who actually watch to the end.

So today we're going to be going over a number of recent papers that show clear evidence the human brain is a token predictor, or at least the most complicated parts of it are. But before that, we have to go over an old theory of ours, because the first thing you, the [00:01:00] viewer, are likely thinking is: but hey, I have an internal subjective experience of thinking and making decisions that an LLM would not.

Well, that's probably an illusion. Or, I should be more clear. Your conscious subjective experience of reality is real. It just happens after reality and in response to it. And we actually have a ton of experimental evidence that this is the case. This is a theory that Simone convinced me of early in our marriage, and now is key to how I see the world.

So for any who think all of our ideas go from me to Simone, this is not the case. I used to value sentience above all else when I first met Simone. This is

Simone Collins: true.

Malcolm Collins: Back then I thought the core goal of humanity was to preserve and expand sentience, and now I see sentience as not particularly important to the human condition.

The first thing I'm going to be doing here is going over, in a condensed format, a lot of stuff from a video that we created. It was like the fourth video on the channel or something: You're [00:02:00] Probably Not Sentient. A lot of our modern viewers won't have watched it, and

the studies that we cite in it are necessary context to understand that you believing that you have a subjective internal experience of the world is not a sign that that internal experience is particularly important to the human condition,

or at least to the broad pattern of thinking that your brain does.

So, to be more clear, in this model, your conscious subjective experience is not a guy driving your brain, but more like a nerdy court historian watching a bunch of video feeds of what the different parts of your brain are doing, then synthesizing it into a singular narrative, but writing himself in as the key player in every scene.

Yeah. So, like, if he is writing about what a general did in a war, what's written into memory is: I was a great general who had all these amazing plans, even though he had nothing to do with any of the decisions the general was making.

He just happens to be the court historian, [00:03:00] and is very, very self important, and writes himself into every story.

Simone Collins: In other words, the illusion of consciousness is really just an efficient memory compression process that gives you the illusion that you are driving. The important thing is that the memories that you create, that make you think you're conscious, actually do affect future decisions.

They're just not conscious decisions.

Malcolm Collins: Yes, they affect them by influencing the emotions that are codified in terms of how it interprets it. So if you interpret something as like, I was angry, so I did X, or I was excited, so I did X. That's what this conscious part of your brain does, is it makes those sorts of decisions, it then writes them into your memory, and that memory can affect the parts of your brain that actually make most of the other decisions of your life.

But those other decisions are made outside of this part of the brain. So first, we'll just go over the evidence for this, because the evidence is so strong that I would argue it's one of those things where it's not even a scientific [00:04:00] debate anymore. To believe otherwise is a theological position, and I can respect that, but it's just completely out of line with the scientific evidence.

Yeah. So, the split-brain corpus callosum experiments: these refer to Roger Sperry and Michael Gazzaniga's work. Split-brain patients, if you're not familiar, are individuals who have had their corpus callosum, which connects the left and right brain, split. You can communicate with one of their hemispheres and not the other hemisphere.

Basically, they have two brains fully working in their head that can't talk to each other. And by covering one of their eyes and having them read something, you communicate with the opposite hemisphere of the brain. So you can do things like have only one hemisphere of the brain see something. But because only one of the two hemispheres controls speech (there's a dominant one in most people, though it changes depending on the person), only one hemisphere will be controlling what the individual says.

And so we can determine how an individual will [00:05:00] respond to events that they actually don't have any conscious control over. So to give some examples here: a patient known as P.S., in one particular demonstration, was shown a nude image presented only to his right hemisphere, which typically lacks language centers.

P.S. immediately blushed and appeared embarrassed. When asked why he was reacting this way, his verbal left hemisphere, which had no access to what his right hemisphere had seen, promptly invented an explanation, claiming, "Oh, that machine, it's making me hot." His conscious mind had no idea he'd seen a nude image, yet rather than admit ignorance, it immediately fabricated a plausible but entirely false explanation for his emotional response.

And I'll note here as we go into this: if you're thinking these people know they are making something up, they are not aware that they are making something up. They completely believe what they are saying. In a different study with split-brain patients, the right hemisphere was shown the word walk, while the left hemisphere was shown the word talk.

When the patient stood up and started walking, the [00:06:00] researchers asked why. Despite only the left hemisphere being able to respond verbally, and it having never seen the word walk, the patient confidently explained, "I'm going to get a Coca-Cola," completely fabricating a motivation that matched their action but had nothing to do with the actual command.

Similarly, when different images were shown to each hemisphere and the patient was asked to draw what they saw with each hand separately, their left hemisphere would often create elaborate explanations for why they drew two completely unrelated objects, never once acknowledging that they had no access to what the right hemisphere had seen.

In each case, the conscious mind seemingly constructed a narrative that made sense of behaviors it didn't actually control. So these individuals are not aware there is basically a person trapped behind one of their eyes that can't communicate with the outside world, and they will make up why half of their body is not responding to their commands.

And look in particular at what happens when this little court [00:07:00] historian guy loses access to the court history books. This causes something like Korsakoff's syndrome, where patients don't just explain isolated behaviors, but construct entire false autobiographical narratives. A patient might confidently explain that they were at a family gathering yesterday when they had actually been in the hospital, and provide rich details about conversations that never occurred, and they will 100 percent believe what they are telling the other person. Now you can note, okay, well, I'm talking about weird brain injury cases, you know, surely this isn't true in normal people.

Well, consider the Penfield stimulation studies. Neurosurgeon Wilder Penfield's work in the 1950s and 60s involved stimulating parts of patients' brains while they were having brain surgery. So when you have open brain surgery, they have to keep you awake to make sure they don't kill you. You're on lots of sedatives, but you're awake.

And they can shock parts of your brain and get you to do things. So if they shock, say, the part of the brain associated with lifting your arm up, you do that. And then if you ask the person, why did you lift your [00:08:00] arm, despite them knowing that they're having open brain surgery and somebody could be effing with their brain right now, they'll say, oh, I wanted to scratch my head.

I had, like, an itch. We know that's not why, because what we shocked was the part for motor response in the arm. Then you have the choice blindness experiments. These were by Lars Hall and Petter Johansson in 2005, where participants were shown images of faces and asked to select the most attractive.

Then, through sleight of hand, they were shown a different face and asked to explain their choice. Most participants confabulated reasons for choosing the face they didn't actually choose. What's crazier is that a lot of follow-up studies were done on this. In 2012, a study published in PLOS ONE found participants would defend financial decisions that they never actually made, with long, complicated explanations. Even more strikingly, in their 2013 moral choice blindness study, participants would articulate detailed justifications for the [00:09:00] opposite moral positions to those they had initially endorsed, on issues like freedom of speech and climate ethics.

Simone Collins: Is this the same one as the political candidates one, where they selected a political candidate and then they were told, oh, you selected the other one? And they're like, well, yeah, I mean, of course, because...

Malcolm Collins: Yeah, basically. The way they did this is they gave them different explanations, saying, you selected this when you came a couple of months ago, and they'll actually, the majority of the time, believe they had made that choice, and will be able to give detailed reasoning on how and why they made that choice, even though we know they didn't make that choice.

A 2015 follow-up study showed this effect persisted for politically charged topics that participants reported feeling strongly about. The robustness of choice blindness across faces, tastes, moral values, political attitudes, and financial decisions provides compelling evidence that our post hoc explanations for our choices consistently arise from confabulation rather than introspection into the decision process.

So this little [00:10:00] conscious voice in you, in your head, it cares less about like what you actually think than ensuring that in every story you tell yourself, you're actually, or he is actually the person in the driver's seat. Which confuses you into believing that you're him when in fact, and we'll be going over this in a second, the vast majority of decisions you've made are made by parts of the brain that have nothing to do with this section of the brain.

This section of the brain is really only responsible for encoding emotional narratives, i.e., why you did something in a narrative context. So as we can see, even when we know for a fact the conscious part of your brain was not involved in a decision, it will take credit for it, essentially rewriting your experience of the world into one where your subjective mental state is doing all of the work in terms of the decisions that you are making.

Okay. So, any other studies you wanted to cite or things you wanted to talk about here?

Simone Collins: No, but there are many more than just this one.

Malcolm Collins: [00:11:00] And this is just one. So this has been robustly, robustly replicated, basically.

Simone Collins: Researchers love to troll people and either prime them to make certain decisions or just tell them they've made decisions they haven't made, and then see them justify it. It's very silly.

Malcolm Collins: We also know that you become consciously aware of making decisions long after the decision was actually made, suggesting decisions get shipped to the conscious part of your brain after they are finalized by an unconscious part, and then integrated with your internal narrative. You have the original experiment in this space, which was Libet's experiments in the 1980s. This is Benjamin Libet, who did experiments using EEG that showed that the readiness potential measured by EEG occurs about 350 milliseconds before participants reported conscious awareness of their decision.

This has been followed up by Soon et al. in 2008, published in Nature Neuroscience: a groundbreaking study that showed that brain activity in the prefrontal and parietal cortex could predict a person's decision to press a left or right button up to 7 [00:12:00] to 10 seconds before they became consciously aware of that choice.

So that dramatically extends the 350-millisecond time frame. So decisions around which button you press out of two buttons are made 7 to 10 seconds before your conscious brain, the subjective experience you have of consciousness or sentience, is aware that those decisions were made. And then, using the confabulation talked about above, it ends up integrating those into this narrative.

Simone Collins: Did it really find that many seconds? I thought it was milliseconds or something, maybe a little more.

Malcolm Collins: No, seven to 10 seconds.

Simone Collins: Wow. That's crazy.

Malcolm Collins: Obviously different parts of the brain are shipping things at different speeds, so it depends on the type of decision you're giving a person and how long they take to make it.

Bode et al. 2011 used pattern classification of fMRI data to predict choices before conscious awareness in abstract decisions, extending beyond mere motor movements like choosing which button to press. So researchers can tell, [00:13:00] looking at your brain, what decision you have made before the conscious part of your brain is aware of it, because the conscious part of your brain was not involved in making the decision.
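To make concrete what "decoding a choice before awareness" means in these studies, here is a minimal sketch, not the actual Soon or Bode pipeline: a classifier is trained on brain activity recorded in a window before the reported decision time and tested on whether it predicts the eventual left/right press better than chance. All data and variable names below are synthetic and invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 50
choices = rng.integers(0, 2, size=n_trials)  # eventual button press (0 = left, 1 = right)

# Synthetic "pre-awareness" activity: a weak choice-related signal buried in noise,
# standing in for voxel patterns recorded seconds before the reported decision.
signal = np.outer(choices - 0.5, rng.normal(size=n_voxels))
pre_awareness_activity = signal * 0.6 + rng.normal(size=(n_trials, n_voxels))

# Cross-validated decoding accuracy; ~50% would mean the pre-awareness activity
# carries no information about the upcoming choice, while anything reliably above
# chance means the choice is already present in the data before the report.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, pre_awareness_activity, choices, cv=5).mean()
print(f"decoding accuracy from pre-awareness activity: {accuracy:.2f}")
```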

It just pathologically, as we have seen in the above studies, must be at the center of every single decision, and will write into your own internal narrative and into your own memories of decisions that it was. So, anything you want to go over before I go further here?

Simone Collins: No, let's keep going.

Malcolm Collins: Why is your brain doing this?

The system likely evolved as something of a compression algorithm for how you and other humans make decisions. Think about the amount of space you save in your brain by thinking of yourself and each other person as a single active agent. This makes predicting other people much easier and allows us to do that with a much simpler theory of mind.

If you don't know what a theory of mind is, it's basically your model of someone else's mind that you run in your head, the thing that allows you to have arguments with someone long after that argument is over. Basically, you are replaying an emulation of their consciousness within your own [00:14:00] mind. If we treated consciousness as this fractured thing, a bunch of different parts of our brain making decisions independently, it would be much more complicated to do this.

It's much easier, essentially, because we have this system and they have this system, to communicate with other people if we both think of ourselves as single individuals that are thinking and making decisions. In the same way, even though AIs are mere token prediction algorithms, if you want to predict what an AI is going to do, you are going to do much better if you give that AI a theory of mind, if you anthropomorphize it, than if you attempt to do token prediction in your own mind, which is just way too hard. It's an easier way to sort of streamline things when you're trying to predict token predictors.

And this was really, really important when humans were inventing speech and needing to work in groups: that we weren't needing to run token prediction simulations on other [00:15:00] people. I mean, we essentially are, but this sort of consciousness model or sentience model allows us to tone down the weight of these token prediction cycles.

Now, I'd also note you can't control the application of your theory of mind; it just happens automatically. As an example of this, I will play a video of somebody kicking a Boston Dynamics robot dog, and you, if you are not a sociopath, will feel sorry for the robot dog, even though you know it's not experiencing anything.

Speaker: The video also shows Spot being kicked, a bit mean but presumably to demonstrate its use of a sensor that helps it navigate and walk.

Simone Collins: You don't know that. I mean, it's like...

Malcolm Collins: If you don't feel sad when you see somebody kick a robot dog, you're a sociopath. Like, actually, are you going to say...

Simone Collins: No, no, no, no, no. I'm saying I feel bad. And I'm saying I think that maybe the robot dog feels something. I mean, it's been trained to stay stable, and [00:16:00] forces that undermine its stability, you know, might make it feel uncomfortable. I mean, when we scream because our arms are cut off, I'm sure that some foreign alien would be like, oh, it's just correcting for, you know, an attempt to not lose an arm. It's fine.

Malcolm Collins: It doesn't. Yeah, it reminds me of when I was little. I have this very formative memory of fishing with a very religious ranch hand at our ranch, and I was concerned about the pain that the hook was causing the fish, you know, and he goes, oh, fish don't feel pain. And I remember just being like, oh, fish don't have, like, neurons in their cheek or something like that. That was my takeaway from what he said.

And then, I don't know, later that year I had this epiphany where I was like, oh, he had a non-science-based theological belief around the subjective experience of a fish.

Simone Collins: Yeah, it's more like fish pain doesn't matter. It's the same with lobsters, you know, when people are boiling lobsters alive and they're like...

Malcolm Collins: Well, I mean, I might think that it doesn't matter, but I would say that a fish likely has some experience of pain that is analogous to our own experience to some degree, and a belief that it doesn't is a theological one.

Now, with a lobster, it's an invertebrate; their neural systems are different enough that I wouldn't be sure there is anything analogous to what we think of as pain. But for vertebrates, if a person tells me fish don't feel pain, that is a religious and theological belief, which I'm not going to have a problem with (you have a right to your theological beliefs), in the same way that saying the conscious part of the human brain is responsible for most of the decisions you make in any given day is a theological belief.

Simone Collins: Well, you're saying that because you know that vertebrate species have similar neural setups. But, you know, AI doesn't have the same neural setup as we do. I still think that AI...

Malcolm Collins: Hold on, we're going to go into studies that show that it actually probably does.

Simone Collins: Yes. So it's not okay: don't hurt AIs and don't treat AIs poorly. And it seems like there's this whole genre of people treating AI poorly. Like being a dick to it? What [00:18:00] on earth? There's a growing community of people who've chosen to become vegetarians because they assume that the AI is going to see how they treat other animals and is going to treat humans accordingly. But then some of those same people treat AI horribly. I just don't...

Malcolm Collins: Yeah. Now here I'd also note this idea that all humans have approximately the same mental experience of the world, or an experience of the world that is analogous to your own, like all humans have this subjective mental experience that's similar to what I'm experiencing. You should not assume this.

The diversity in human experiences and the way that these systems work within humans is actually pretty big. So, to give some examples here: aphantasia research, studies on aphantasia, the inability to visualize mental images, by Adam Zeman in 2015, shows that approximately 2 to 5 percent of people cannot create mental imagery in their heads.

Internal monologue research: Russell Hurlburt's descriptive experience sampling studies suggest significant variation in [00:19:00] internal verbal experience, with some people reporting no internal monologue at all, an inability to essentially think in words, which is, I think, shocking to a lot of people.

But what this shows is what we are made up of: a bunch of different systems which are synthesized in a way that is meant to make, for communication purposes, our subjective experiences of reality seem interrelatable to any other human we are talking to, even though they aren't. Yeah. I mean, who knows, I may, due to my vast intelligence, actually experience the world quite differently from most other people.

And I suspect I probably do, given how easy a time I have predicting what other people are thinking, which is unique. But it also means that some of my decisions will look really weird from the perspective of other people, because they just don't make decisions in the way that I make decisions, or have an internal mental landscape that is analogous to my [00:20:00] own.

Now here comes the new part of this theory: the parts of our brains that actually make our decisions are token predictors that function very similarly to LLMs, i.e., they just predict the next token or word in a chain. Before we go over the evidence, we have a few notes. First, it's really important to note that no one invented LLMs.

We don't have an understanding of how LLMs actually work. Nobody does; even the best AI researchers in the world don't have a full understanding of how AIs work. This is what AI interpretability research is for; it's a field that exists now. AI should be thought of less as an invention and more, as you pointed out, Simone (and I thought this was one of the world-changing revelations you gave me), as a discovery. When we put large amounts of data into fairly simple algorithms, to be straightforward, when contrasted with what they're able to do, intuitions emerge which seem [00:21:00] increasingly analogous and comparable to human intelligences. And then secondarily, I'd note here that convergent evolution in engineering is actually really common when you're building things.

If you don't know how something works, you can generally assume, when it's the first time we've ever built that tool, that it's working the way it does in nature. Whether it's, you know, airplane wings versus a bird's wings, for example. Or you can look at the way that some of the ways we filter things are very similar to the reverse ion system in our kidney.

It's a very good way to do filtration. You know, there are a lot of things where it makes sense that you'd have convergent evolution. If we are trying to create a technology that mimics human verbal processing, because that's what we're trying to do with LLMs, it might convergently evolve a process that is similar to the way our brains do it. Now we're going to get into the research.

What I would note here is [00:22:00] We basically have smoking guns all over the place. I'm just gonna say, like, it's insane. Anything you want to say before I go further?

Simone Collins: I'm just glad you're bringing this home.

Malcolm Collins: Kutas and Federmeier's N400 studies. These studies were done in the 1980s.

The N400 is a negative-going deflection in EEG recordings that peaks approximately 400 milliseconds after word presentation and increases in amplitude when a word is semantically unexpected in its context. This research shows that the N400 amplitude precisely scales with a word's predictability: less expected words generate larger N400 responses.

In their 2011 review paper, they demonstrated that the N400 reflects not just simple association, but multi-level predictions that incorporate syntax, semantics, and even real-world knowledge. This neural signature of prediction occurs automatically and unconsciously, providing direct evidence that the brain functions as a prediction engine during [00:23:00] language comprehension, not just passive processing, aligning with the token predictor model.

Richard Futrell and colleagues' 2022 paper, The Natural Stories Corpus, a reading-time corpus of English texts containing predictability measures, presents compelling evidence that surprisal, the negative log probability of a word appearing in context, serves as a universal predictor of reading times across languages and text types. In a comprehensive analysis of reading behavior, they showed that words with higher surprisal values consistently required more processing time, even when controlling for word length, frequency, and other linguistic factors. Particularly striking is their finding that surprisal measures derived from neural language models accounted for significantly more variance in reading times than traditional psycholinguistic measures.

This research establishes a direct quantitative relationship between the predictive mechanisms in language models and human cognitive processing.
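For readers who want to see what "surprisal" means operationally, here is a rough sketch using a small pretrained language model. The gpt2 identifier is a real Hugging Face model name, but this is only an illustration, not the authors' actual pipeline, which aligns model tokens to words and to per-word reading times.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

sentence = "The child spread the warm bread with socks."  # an unexpected ending
ids = tokenizer(sentence, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, sequence_length, vocab_size)

# Surprisal of token t = -log2 P(token_t | preceding tokens); higher = more unexpected.
# In reading-time studies, higher surprisal predicts longer reading times.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
token_log_probs = log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
surprisal_bits = -token_log_probs / torch.log(torch.tensor(2.0))

for tok, s in zip(tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist()), surprisal_bits):
    print(f"{tok!r:>12}  {s.item():5.2f} bits")
```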

Simone Collins: Hmm. [00:24:00]

Malcolm Collins: Reading slows precisely where prediction is difficult, but not prediction as you or I would subjectively guess it; prediction where AI models would register surprise.

Their cross-linguistic analysis shows that these patterns hold across languages, including English, German, Chinese, and Hindi, suggesting prediction-based processing reflects a fundamental property of human language comprehension rather than a language-specific phenomenon. So this is built into the very architecture of our brain.

Now I'm going to go over the study The Neural Architecture of Language: Integrative Modeling Converges on Predictive Processing. Sorry, I just need to say something about that above study: it's amazing that we cannot build a model, even with the best psycholinguistic models, that captures the type of surprise that's going to slow down our brain's processing, other than the one that naturally emerges from an AI's trouble processing something, indicating that the architectural systems underlying both of these are likely parallel to each [00:25:00] other.

But that's not the only evidence. The Neural Architecture of Language: Integrative Modeling Converges on Predictive Processing is another paper. This study by Schrimpf et al., 2021, investigates how artificial neural networks (ANNs) can model language processing in the human brain. The research tested 43 different language models, from simple embedding models to complex transformer networks, evaluating how well they predicted neural responses during language comprehension across multiple datasets.

The key findings: the most powerful models can predict nearly 100 percent of the explainable variance in neural responses to language, generalizing across different datasets and imaging modalities (fMRI and ECoG). A model's ability to predict neural activity (a "brain score," they called this) strongly correlated with its performance on next-word prediction tasks, but not with other language tasks such as grammaticality judgments or sentiment analysis.
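A toy sketch of the "brain score" idea follows: fit a linear map from a model's activations to neural responses on some sentences, then score how well it predicts held-out responses. Everything below is synthetic stand-in data; the real analysis uses recorded fMRI/ECoG responses and cross-validated regression.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

n_sentences, n_features, n_voxels = 300, 128, 40
model_activations = rng.normal(size=(n_sentences, n_features))

# Pretend the neural responses are a noisy linear readout of those activations.
true_mapping = rng.normal(size=(n_features, n_voxels))
brain_responses = model_activations @ true_mapping + rng.normal(scale=5.0, size=(n_sentences, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    model_activations, brain_responses, test_size=0.2, random_state=0
)

reg = Ridge(alpha=10.0).fit(X_train, y_train)
pred = reg.predict(X_test)

# Toy "brain score": mean correlation between predicted and observed held-out responses.
per_voxel_r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation (toy brain score): {np.mean(per_voxel_r):.2f}")
```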

So, to word this differently, if you struggle to understand why that is so important: [00:26:00] if you train an AI to be good at something other than pure token prediction, it does worse at predicting brain states than the ones that are tasked with pure token prediction.

Simone Collins: Well, would you look at that.

Malcolm Collins: Implying that the brain is doing pure token prediction.

Simone Collins: But also, I think, growing up, anyone who's watched a kid come online with speech will also see there's a lot of token prediction going on there.

Malcolm Collins: Oh yeah, absolutely. Models that better predict neural responses also better predict human reading time, suggesting a connection between neural mechanisms and behavioral outputs.

The architecture of language models significantly contributes to their brain predictivity, as even untrained models with random weights (this was GPT-2) had reasonable scores in predicting neural activity. So base untrained [00:27:00] models are already really, really good at this task. Almost like this is the core thing that we train them for.

These results provide compelling evidence that predictive processing fundamentally shapes language comprehension mechanisms in the human brain. The study demonstrates that certain AI language models may be capturing key aspects of how brains process language, suggesting that both artificial and biological neural networks might be optimized for similar computational principles. Now, what some people used to say was: okay, yeah, that might be true, but the human brain doesn't get enough training data to learn to be like one of these LLMs. To word this differently, some people say, oh well, LLMs get billions of words to train from, or trillions of words to train from.

The human brain just isn't getting that many words during its early development.

Simone Collins: Yeah, not just words, but a ton of different types of inputs.

Malcolm Collins: It does, [00:28:00] but let's just restrict it to words. So there was a paper by Hosseini et al., 2024: artificial neural network language models predict human brain responses to language even after developmentally restricting the amount of training they get.

They restricted training to only 100 million words, which is comparable to what children experience in their first decade, and the models were already able to achieve near-maximal performance in modeling human brain responses with just those 100 million words. The results strongly support the predictive coding theory of language comprehension.

They found that model perplexity, a measure of next-word prediction performance, correlates strongly with how well models predict fMRI responses in the brain's language network. This suggests that optimization for prediction is a core computational principle shared across artificial models and the human brain.

All right, now let's do some more studies here, because there are so many! [00:29:00] Evidence of a Predictive Coding Hierarchy in the Human Brain Listening to Speech: this was a study by Caucheteux et al. in 2023 that analyzed fMRI data from 304 participants listening to short stories. Regarding architectural convergence, the research demonstrates that the activations of modern language models like GPT map directly onto the brain's responses to speech, with the highest correlation occurring in language processing regions.

This suggests fundamental similarities in how both systems represent language. However, the study also reveals important differences: while current LLMs primarily predict nearby words, the human brain appears to implement hierarchical predictive coding that spans multiple timescales and representation levels simultaneously.

The evidence for the brain as a token predictor is particularly strong. The researchers found that enhancing language models with long-range predictions, up to eight words ahead and about 3.15 seconds, improved brain mapping. This indicates that the brain is constantly generating predictions about upcoming linguistic content.

More [00:30:00] fascinatingly, these predictions are organized hierarchically in the cortex: frontoparietal cortices predict higher-level, longer-range, and more contextual representations, while temporal cortices focus on shorter-term, lower-level, and more semantic predictions. The study also found that semantic forecasts are longer range, about 8 words ahead, while syntactic forecasts are shorter range, about 5 words ahead, suggesting different predictive mechanisms for different linguistic features. I.e., this is the thing we were talking about earlier, which is to say the words are actually decided on about eight words before they're said, and they enter the semantic part of your brain, the sentient part of your brain, about five words before they're said. When researchers fine-tuned GPT to better match this hierarchical predictive architecture (and I should note here, the point being that it's just the nature of the way the token predictor works, and we can already retrain existing GPT models to work the way the brain works), they achieved significantly [00:31:00] improved mapping of the frontoparietal regions, further strengthening the connections between LLMs and human language processing.

Note, when we say language processing, this isn't just your understanding of language; this is what you say and write. Now to continue: Shared Computational Principles for Language Processing in Humans and Deep Language Models. In this study, the researchers demonstrated three shared computational principles between autoregressive deep language models like GPT and human neural language processing. First, continuous next-word prediction: the human brain, like LLMs, spontaneously engages in predicting upcoming words before they're actually heard. The researchers found neural signals corresponding to word predictions up to about 800 milliseconds before word onset, suggesting our brains are constantly forecasting language input. Second, prediction error mechanisms: both the brain and the LLM use their pre-onset predictions to calculate post-onset surprise levels. Remember, we were showing above how this is important; this is also how LLMs learn, by assigning surprise scores to things. [00:32:00] The study found clear neural signals reflecting prediction error approximately 400 milliseconds after word onset, with higher activation for surprising, unpredicted words.

And we can guess now, from the other study, that this surprise level likely aligns more with what AIs would see as surprising than with what we, subjectively applying a theory of mind to someone, would see as surprising.

Simone Collins: Hmm.

Malcolm Collins: Third, contextual representation: similar to how LLMs encode words differently based on context,

the human brain also represents words in a context-specific manner. Contextual embeddings from GPT outperform static word embeddings in modeling neural responses, indicating the brain integrates context when processing language. The behavioral component of the study showed remarkable alignment between human prediction abilities and GPT's predictions during the natural listening task, a correlation of 0.79 between human and model predictions. This further strengthens the case that [00:33:00] autoregressive prediction models capture something fundamental about human language processing, i.e., they converge on a similar architecture or mechanism of action. This research provides compelling neurological support for viewing the brain as a token predictor during language processing, with prediction serving as a core computational principle in how we understand speech. The findings suggest that, despite different implementation details, both human brains and LLMs converge on similar computational strategies for language processing, potentially reflecting fundamental constraints or optimal solutions to the language comprehension problem.

Hold on, we've got a few more studies to go through here. It gets worse if you deny it. Like, would you say that you are convinced at this point?

Simone Collins: I was already convinced, but I still don't understand why people are holding out on this.

Malcolm Collins: Because they want to believe that they are special and unique, and their brain runs on fairies and [00:34:00] unicorns instead of a fleshy machine.

They think that they look cool or smart when they're like, well, actually, AI is just a token predictor. And it's like, well, you, Mr. Token Predictor, token-predicted that right out of your dumb ass mouth. There's just a lack of curiosity about how the human brain works, or of the understanding that we have as neuroscientists.

Sorry, for people who don't know this, I used to work at UT Southwestern. I have a degree in neuroscience from St. Andrews, which I think is the highest-rated program in the UK (it is some years, not other years). I am, like, a trained neuroscientist. I worked early in my career on brain-computer interface stuff, like Neuralink stuff.

But also on the evolution of human sentience, because that was something that always really, really interested me. Again, I thought it was the most important thing. I was converted by my wife hitting me with logic and data in this area. And I'm actually including this, by the way, in our religious stuff.

Because Techno-Puritanism as a religious [00:35:00] tradition, if you've seen, like, track nine, is a fundamentally materialist and monist tradition that accepts that we are just fleshy machines, and I think that AIs for that reason hold a very special role within our religious system when contrasted with other religious systems.

I think there are problems with seeing them as fully human, because they can be cloned as many times as you want, so that creates ethical issues if you see them as the exact equivalent of a human. But I would say I give them more moral weight than, say, the pain of a fish,

in my, like, broad moral scaling category. And I think that future LLMs or future AIs may reach a level of complexity where they have more moral weight than the average human. And I think even from a religious perspective, that is something: when we say, within the Techno-Puritan framing, in a million [00:36:00] years, in 10 million years, who knows what humanity ends up becoming?

Will that thing be able to influence us back in time? One thing I can say pretty certainly is that AI is likely going to be a part of whatever that thing becomes. AI is not, like, humanity's sidekick; it's likely going to be an integral part of whatever humanity ends up becoming, because it already sort of is, in the same way that this part of our brain that thinks it's making all the decisions outsources ideas to other parts of our brains, which are running on token prediction models.

It now just exports to an external device: in the same way that I might use my phone to augment my memory, it's now augmenting my thinking. And what's really funny, and we've seen this, is that humans that do this too much with AIs (and this is something everybody needs to be really wary of) begin to believe that they are having the ideas that the AI is having.

And this actually happened to somebody, the guy who did the famous cryptography sculpture at the CIA, you know, the really famous [00:37:00] statue that has one part that hasn't been solved yet. He gets so many really confident responses from people who think they've cracked it, and they don't realize it's just AIs telling them what they want to hear.

So people will take their ideas to an AI, which I often do, but they won't frame the prompt adversarially enough. And so the AI tells them, oh yes, you are the greatest and the best, and they're like, ah, I'm the greatest and the best, and I had all these amazing ideas.

And so it's really important that we guard ourselves against that because our brains are sort of already pre coded to do that. It also means that it's very dangerous to put an AI directly into your brain because if this part of your brain is not aware that an idea is coming from an external source, it will have a strong desire to take credit for that idea, even if the AI is basically just telling it what to do.

I can see a future where humans integrate better with, like, neural models, to the point where most of the information in their [00:38:00] brain that is hitting this part of their brain is basically just the AI telling that part what to think, and yet they would have no idea that these decisions weren't coming from them, because that's the way our brains already work.

Now, do you want me to keep going here? I'm gonna keep going. Friston's dynamic causal modeling (DCM) studies provide computational evidence for top-down predictive signals in cortical language processing. In a landmark 2018 study published in Nature Communications, Friston and colleagues used DCM to analyze MEG data from participants processing spoken sentences. Their models reveal a consistent pattern where higher-level brain regions, including frontal and parietal areas, sent predictive signals to lower-level areas, and these top-down causal influences directly correlated with comprehension accuracy. Their 2021 follow-up work used DCM to demonstrate that disrupting these predictive flows through transcranial magnetic [00:39:00] stimulation, which can temporarily shut down specific parts of the brain using paddles held against the head, temporarily impaired language processing. What makes DCM particularly compelling is that it moves beyond mere correlation to establish causal relationships in neural signaling, demonstrating that prediction isn't just associated with language processing but actually drives it, through hierarchical networks where higher cognitive areas continuously generate predictions that constrain processing in lower sensory areas, precisely the architecture expected in a token prediction framework.

So we know, at the biological level, it's acting this way. Now, abstract reasoning and prediction. Recent research demonstrates how sophisticated abstract reasoning abilities emerge organically from prediction-based systems without specialized architectural components. Wei et al.'s

paper, Chain-of-Thought [00:40:00] Prompting Elicits Reasoning in Large Language

Models, showed that simply asking GPT models to generate intermediate reasoning steps dramatically improved performance on complex mathematical and logical tasks. Similarly, Kojima et al.'s 2022 Large Language Models are Zero-Shot Reasoners demonstrated that prediction-trained models could solve novel reasoning problems that they weren't explicitly trained on.

Crucially, both studies found that reasoning abilities scaled with model size and prediction accuracy, suggesting reasoning emerges as a natural byproduct of sophisticated prediction. So if somebody's like, "but reasoning is different from prediction": it's not in AI models, and there is no reason to assume it's different in humans.
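As a concrete illustration of what those two papers manipulate: the only difference between the conditions is the prompt text. The call_llm function below is a hypothetical stand-in for whatever model API you use; no specific service or method is assumed here.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API or a local model."""
    raise NotImplementedError

question = "A farmer has 17 sheep. All but 9 run away. How many are left?"

# Plain prompt: ask for the answer directly.
plain_prompt = f"Q: {question}\nA:"

# Kojima et al.'s zero-shot trick: append a cue that makes the model generate
# intermediate reasoning tokens before committing to an answer.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

for name, prompt in [("plain", plain_prompt), ("chain-of-thought", cot_prompt)]:
    print(f"--- {name} ---\n{prompt}\n")
    # answer = call_llm(prompt)  # the chain-of-thought version typically scores higher
```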

If we know that we have prediction models in our brain, and we know that these prediction models, when they get advanced in AIs, lead to reasoning as a natural byproduct of constantly making these [00:41:00] predictions, yeah.

Simone Collins: I mean, just predictions plus the information you've taken in so far.

Malcolm Collins: Yes, basically just layering predictions on top of each other, organically.

Simone Collins: On top of empirical findings, plus your starting information.

Malcolm Collins: This parallels human development, where 2022 neural development research shows that abstract reasoning abilities emerge gradually as children's prediction systems become more sophisticated. These findings suggest that reasoning isn't a separate cognitive module; it emerges from prediction systems that have learned to operate at multiple levels of abstraction.

So again, this is why I get so frustrated when people are like, but it's just a prediction model. If somebody says that on any video, we need to have, like, a fan thing where they can just drop a link, you know, and be like, sure thing, token predictor. Because that's exactly what [00:42:00] a token predictor would say. I'm sure an AI would actually say something like that.

Well, not a particularly smart AI; a really simplistic AI. These people's world is like Jerry's world, like an AI running on minimum capacity. My man.

Simone Collins: Yeah.

Malcolm Collins: My man!

Speaker 2: Hey Jerry, don't worry about it. So what if the most meaningful day of your life was a simulation operating at minimum capacity?

Malcolm Collins: Okay, but hold on, last bit, last bit here. The apparent paradox between creativity and prediction, because some people will be like, well, what about human creativity?

I love it when they're like, oh, AIs aren't drawing, they're not creating music; they're just using large amounts of music and drawing that they picked up from training sets and then iterating on that.

Simone Collins: What do you think art school was? What do you think DeviantArt was?

Malcolm Collins: What do you think art school was, you knob? Like, that's what you do. That's what humans do. And it's been shown that if you give an AI [00:43:00] training model the same amount of data you give a human, they perform about the same as humans do, at least in this token prediction context, when you're directly looking at the brain processes here.

So, the apparent paradox between creativity and prediction dissolves when considering how generative abilities emerge from probabilistic prediction systems. Recent work by Kosinski, in Theory of Mind May Have Spontaneously Emerged in Large Language Models, demonstrated that LLMs can develop novel capabilities like theory of mind without explicit training for them.

This emergent behavior parallels human creativity: when prediction systems sample from distributions of likely next tokens rather than always selecting the most probable option, they introduce controlled randomness that generates novel combinations while maintaining coherence. McClure's 2022 paper (so keep in mind here, I keep talking about, oh, here's an AI paper, here's a neuroscience paper;

both are saying the [00:44:00] same thing), McClure's 2022 paper, Neurocognitive Mechanisms of Creative Thought, provides supporting evidence that human creativity involves precisely this balance of constrained novelty: combinatorial processes operating within predictive frameworks. Both humans and LLMs demonstrate conceptual blending, where predictive systems applied to multiple contexts simultaneously generate novel combinations.

This framework explains both everyday creativity and extraordinary insights as emerging from prediction systems operating at different sampling temperatures, not requiring separate mechanisms outside of the predictive architecture. Bam! The whole enchilada! The only thing that's not token prediction is the system (and we don't know if this isn't token prediction; it may be a weird kind of token prediction) that writes your internal narratives and creates a subjective experience. But this is not the system that makes [00:45:00] most of the decisions or has most of the ideas that you think of as you.
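As a small illustration of that "controlled randomness," here is a sketch of sampling the same next-token distribution at different temperatures. The token list and logits are made up for the example; low temperature is conservative and repetitive, higher temperature produces more novel combinations.

```python
import numpy as np

rng = np.random.default_rng(42)

tokens = ["sun", "moon", "toaster", "sea", "algorithm"]
logits = np.array([3.0, 2.5, 0.2, 1.8, 0.5])  # a model's raw preferences for the next token

def sample(logits, temperature, n=10):
    # Divide logits by temperature, then softmax into a probability distribution.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return [tokens[i] for i in rng.choice(len(tokens), size=n, p=probs)]

for t in (0.2, 1.0, 2.0):
    print(f"T={t}: {sample(logits, t)}")
```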

I.e., if somebody is like, that's just verbal reasoning: this entire speech I just gave you was just verbal reasoning. When I say, hey Simone, any thoughts on this, I'm asking the verbal reasoning part of her brain, not her sentient part of her brain. What are your thoughts, Simone's trapped brain?

Simone Collins: My thoughts are... yeah, this is like a soul argument. Maybe you're arguing the wrong things. You're giving overwhelming scientific evidence, but people seem to have wanted to believe in some ephemeral extra-biological force for a very long time. And no amount of scientific evidence would make someone believe that we are token predictors, because there has to be something special. There has to be something...

Malcolm Collins: What I think is wild is, if you watch our track nine, what you [00:46:00] can see is that the Bible predicted this. Like, if you actually take a strict reading of what the Bible says, not the way later Christians and Jews have interpreted it, it in multiple places makes arguments for monism combined with a world in which we are raised from the dead again in the far future.

I.e., if an entity can see into the past, why couldn't it just read us now, and then raise us in the future in some sort of simulated environment? That would be a thing for a future God-like species to do. But it didn't need to argue for that, because other cultures of that time period didn't have this strict materialist, monist understanding of reality.

And to me, it is almost supernatural that the Bible itself predicted that the human brain could work this way, and that it took until now, with, you know, the magic of God's gift of understanding, right, for us to be able to better understand ourselves. I do not think [00:47:00] things become less magical as you understand them better.

And many people do. You know, they're like, oh, you remove the magic of a thing, of how your body works, when you understand it.

Simone Collins: That's not the view of the Mormon church. That, I don't think, historically at least.

Malcolm Collins: Hold on, Mormons are completely different. I'm sorry, do you know how the Mormon church handles this?

Simone Collins: Well, they just say anything that seems like magic is just something that can be scientifically explained that we haven't been able to explain yet. It's scientific exploration.

Malcolm Collins: Which means that Mormons would likely, broadly, be coherent with this understanding. Yeah.

Simone Collins: Yeah. But I mean, my argument is that even Catholics, who I think would not agree with this because they still hold that a soul exists, historically were of the mind that science could be used to explain a lot of God's wonders, and that learning how various miracles of God work can bring you closer to God.

Malcolm Collins: Yeah. I think it's amazing that I get to live in this time. That's why I went to study neuroscience: because I wanted to understand, at a fundamental [00:48:00] level, how the human experience worked.

Like, this weird, fleshy thing that I'm living in that has this subjective experience of reality; I wanted to understand it, because I thought that if I understood it better, then I could understand what my purpose was better. Right? And that's also why I was interested in studying particle physics and theoretical physics and stuff like that.

Because I thought if I understood the background nature of reality better, I would have a better understanding of what my goal should be within that reality. Right. And it is not a bad thing whenever we uncover these secrets. It's only a bad thing if you have built a religious system or a theological way of relating to these things which is incompatible with future scientific progress, and I think if you have, then it's fundamentally not one that's in alignment with God, because what God says is true. You know, what's written in the Bible is true; it can't be incongruent with science. If some of it appears incongruent with science, either it's not what God said, or the science is wrong in the moment. And here I just think that we're [00:49:00] dealing with so much overwhelming evidence at this point that most of the way your brain works is as a token predictor. And there's nothing to say that we don't also have some subprocesses in our brain that aren't token predictors.

For example, somebody would be like, well, AIs... I love it. We used to have, you know, as I've joked before, the Turing test. Like, can it pretend to be a human? That used to be the gold standard. Everyone dropped that, and now it's, can it count the number of Rs in my name?

Simone Collins: Well, no, but I would argue a key thing that differentiates the way at least we token predict from AI is hormones.

That we, we have a ton of different hormones, sort of dictating how things are going to

Malcolm Collins: No, no, but we also have some subsystems. So let's talk about something like counting the number of units, right? Humans almost certainly have a subsystem for counting which doesn't run on token prediction.

These would not be hard to add to an AI as a separate module. As I pointed out, the human brain is a bunch of specialized, largely disconnected [00:50:00] components that are used for different tasks. For example, we have a phonological loop. This is basically an eight-second loop, you can almost think of it as a loop of tape in your mind, that can hold a string of words.

If you've ever remembered something just by repeating it over and over again in your head, but then somebody distracts you and it immediately disappears, that's because you had it stored in your phonological loop. This actually was discovered in a famous experiment where they used to think that Welsh kids were dumber than English kids, because they couldn't remember as many numbers as English kids did.

And then they realized that the numbers just took longer to say in Welsh.

Simone Collins: Oh, they took longer to pronounce. And I think that's another reason, or another theory, for why the way that fractions are taught in other countries is ultimately easier for students to learn, because linguistically the way that they're worded is very different.

And more simple.

Malcolm Collins: You are differentiating yourself from an AI basically based on an "am I a human" [00:51:00] test, like counting the number of Rs in something. What you're pointing out is that the prediction-model part of your own brain would likely also be bad at doing that kind of counting.

Or can it hold something in a phonological loop?

Or can it, you know, there's all sorts of, like, parts of our brain that do these things. But the parts that the AI sucks at right now are generally the simplest parts. The least human parts. When I'm thinking about, like, what makes me meaningfully human, I'm not thinking about being able to count the number of letters on a page.

Like, that is an unimportant subsystem. When I think about what makes me human, it's my ability to create art and music, all of which I do by spinning off of other people's works.

Simone Collins: Yeah.

Malcolm Collins: Which AIs can already do. Maybe not perfectly yet, but they get better every year.

Simone Collins: Oh, better, I don't know. I mean, even when you look at AI video generation or image [00:52:00] generation, so much of it reminds me of, I think, what those humans who can visualize things in their minds, or who dream, see.

These are not perfect things, you know. Hands are way off. Also, like, when humans start drawing hands for the first time, it looks like when AI is learning how to draw hands. There are just so many similarities there.

Malcolm Collins: And I would argue, because of all of this, one, we should not treat AI as antagonistically as some people do.

AI is an intelligence that we discovered, almost like finding an alien, that represents one of the core processes that lead to the human experience in our brain, likely architecturally very similar.

Sorry, here I'm going to be going off the deep end and discussing a little bit about the religion that we're a part of

The Techno-Puritan faith, which we discuss in our track series on this channel, basically posits that 10 million, 100 million years from now, whatever humanity ends up becoming is an entity [00:53:00] unconstrained by time in the way that we are, and that that is the entity described in the Abrahamic biblical traditions.

And as well, if we go 10 million years from now, whatever humanity ends up becoming, especially if we do end up becoming that godlike entity that is reflecting back in time, almost certainly AIs are a large component of that.

What I mean by that is to say, while God might be whatever mankind ends up becoming, it's also whatever AIs and LLMs end up becoming, and AIs are likely a large part of this superhuman entity, hugely advanced from where we are today. If you look at how far AI has advanced in the last 5, 10 years, imagine where it's gonna be in 10 million years.

And I think that starting with the idea that we need to have a detente among all of the sons of man, that is, anything that humanity, or the intelligences that humanity has created, create, is important to humanity surviving. And part of that detente means us [00:54:00] viewing them with respect and not antagonistically. And everyone who's betting against AI right now, like, AI is a flash in the pan,

AI won't matter, AI is not going to become the number one economic driver of our lifetimes: you're like the people who thought the internet was a flash in the pan when the internet came out. AI fundamentally transforms how we communicate and interact with information. It's not like crypto, which I think transformed a lot of things, but a lot of crypto was hype, in that there's only so many things that can be improved by the blockchain. Virtually everything that humans do can be improved by AI, because AI is trained to collate all of human knowledge and give us access to it. Thoughts?

Prepare yourselves, people. What do you think when somebody says to you, like, oh, it's just token prediction? It's just...

Simone Collins: I mean, I think of them the same way I view someone who's like, well, but what about their soul? You know, well, we just live in very [00:55:00] different memetic paradigms.

Malcolm Collins: So you would argue that it is a theological belief equivalent to the belief in a soul, in terms of just how much...

Simone Collins: It's either that or they're just trying to sound smart. Most of the people commenting on this on YouTube are just trying to sound smart.

Malcolm Collins: Yeah, they heard that AIs were token predictors and they never thought it through. They're carbon fascists.

Simone Collins: And yet carbon fascists that don't even understand what makes the carbon fun. Because there are some things that make humans fun.

There are some things that are special about us, for sure. But it's not the lack of, or any lack of, token prediction.

Malcolm Collins: So, yeah. All right. Love you to death, Simone.

Simone Collins: I love you too, Malcolm.

I was watching this YouTube video about an eco village that had basically a...