The Predictive Brain: Autistic Edition (or Maybe the Model’s the Problem)

There’s a theory in neuroscience called predictive processing.

It says your brain is basically a prediction engine that’s constantly trying to guess what’s about to happen (so it can prepare for it). In other words, you don’t just react to the world…you predict it, moment by moment. The closer your model (of predictions) matches reality, the fewer surprises you get. Fewer surprises, less stress.
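To make "fewer surprises" concrete, here's a toy sketch (my own illustration, not anything from the predictive processing literature itself): treat "surprise" as the gap between what a model predicts and what actually happens, summed over a stream of events. The event values and both models are made up for the example.

```python
# Toy illustration: "surprise" as the total gap between a model's
# predictions and what reality actually delivers. A model that tracks
# reality well accumulates less surprise (and, per the theory, less stress).

reality = [1, 2, 3, 4, 5]       # hypothetical stream of events

good_model = [1, 2, 3, 4, 5]    # predictions that match reality
rough_model = [5, 5, 5, 5, 5]   # predictions that mostly miss

# Surprise = sum of absolute prediction errors across the stream.
surprise_good = sum(abs(p, ) if False else abs(p - r) for p, r in zip(good_model, reality))
surprise_rough = sum(abs(p - r) for p, r in zip(rough_model, reality))

print(surprise_good, surprise_rough)   # 0 10
```

The numbers are arbitrary; the point is only the shape of the idea — a well-calibrated model pays almost nothing in prediction error, a miscalibrated one pays constantly.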

The model applies to everything…light, sound, hunger, heat. But also to something far messier: people. From infancy, we start modeling the minds of those around us. “If I cry, will she come?” “If I smile, will he stay?” It doesn’t need to be conscious…it’s just the brain doing what it does (building a layered, generative model of how others behave, feel, and respond). Social expectations become part of the predictive model we surf through life on. (nod to Andy Clark’s Surfing Uncertainty)

From the predictive processing perspective, autistic people aren’t blind to social cues. (That’s outdated bullshit.) But we weight them differently. Our brains don’t assign the same precision (the same level of trust) to social expectations as most people do. So we don’t build the same nice, tight models, make the same assumptions, or predict the same patterns.

For example, I can read derision just fine. But I don’t use it to auto-correct my behavior unless it directly impacts something I care about. For better or for worse, my actions aren’t shaped by unspoken norms or group vibes…they’re shaped by what feels real and necessary in the moment.

If you sat me down in front of Andy Clark or Karl Friston (smarty-pantses in the predictive processing world) they’d probably agree. I think. They’d tell me I’m treating social priors as low precision. That my brain doesn’t update its models based on subtle social feedback because it doesn’t trust those models enough to invest the effort. And that my supposed “motivation” is actually baked into the model itself (because prediction isn’t just about thinking, it’s about acting in accordance with what the brain expects will pay off).
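What "low precision on social priors" means mechanically can be sketched in a few lines. This is my own toy model, not Clark's or Friston's math: a belief gets nudged toward each new observation in proportion to the precision (trust) assigned to that channel of evidence. The precision values here are invented for illustration.

```python
# Toy sketch of precision-weighted belief updating (my illustration, not
# an equation from the literature). The belief moves toward each new
# observation by a fraction equal to the precision assigned to that channel.

def update_belief(belief, observation, precision):
    """Nudge `belief` toward `observation`, scaled by `precision` in [0, 1]."""
    prediction_error = observation - belief
    return belief + precision * prediction_error

# High precision on social feedback: a few subtle cues and the model converges.
belief = 0.0
for cue in [1.0, 1.0, 1.0]:             # repeated subtle social signals
    belief = update_belief(belief, cue, precision=0.8)
print(round(belief, 3))                  # ≈ 0.992 — model snapped to the cues

# Low precision on the same channel: identical cues barely move the model.
belief_low = 0.0
for cue in [1.0, 1.0, 1.0]:
    belief_low = update_belief(belief_low, cue, precision=0.1)
print(round(belief_low, 3))              # ≈ 0.271 — cues registered, mostly ignored
```

Same cues, same update rule — the only difference is how much the channel is trusted. That's the whole argument in miniature: nothing about the signal was missed; it just wasn't weighted enough to reshape behavior.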

Ok. But something’s missing…something big. Context.

Implicit in the predictive model is the idea that social priors are worth updating for. That most social environments are coherent, that modeling them is adaptive, and that aligning with them will yield good results.

But what if they’re not? What if you turned on the news and saw that the world was…kind of going to absolute shit? And that, incomprehensibly, people seemed fine enough to let clearly preventable disasters simply unfold and run their course?

What if the social signals you’re supposed to model are contradictory? What if they reward falsehood and punish honesty? What if they demand performance instead of coherence?

In that case, is it still a failure to model social cues? Couldn’t it be a refusal to anchor your behavior to a bullshit system? A protest of the organism rather than a failure?

Because from where I sit, if social information is incoherent, corrupt, or misaligned with ecological/biological reality, then assigning it low precision isn’t a bug…it’s a protective adaptation. Why would I burn metabolic energy predicting a system that specializes in gaslighting? Why would I track social expectation over reality? “Why do THEY?” is the question I ask myself every day. (Just when I start to accept that people simply love the look of grass instead of nature, they go out and cut it…then just when I start to accept that people love the look of grass that is a uniform height (rather than actual grass)…they go out and cut it under clear skies when it’s 35 degrees, killing it…just when I start to accept that people are born with some sort of pathological compulsion to mow landscapes, they replace a portion of their yard with a pollinator garden…because enough of their neighbors did.)

In predictive processing terms, maybe we (autistic people) are saying, “This part of the world isn’t trustworthy. I’m not investing in modeling it.” or just “I don’t trust the model you’re asking me to fit into.”

Of course, saying that comes at a real cost to me. Exclusion, misunderstanding, misalignment. I can sit here all day telling you how principled my stand is…but that “stand” is clearly exhausting and has resulted in long-term adaptive disadvantages (in this place). Systems (“good” or “bad”) almost always punish non-modelers. But that doesn’t make me wrong. Reality is reality.
