ramble (predictive coding, autism, simulation)

I have predictive coding (à la Clark, Friston, Vermeulen), autism, Schmachtenberger, Baudrillard, Hoffman, and some recent experiences tumbling about in my brain, desperately looking for synthesis. I feel threads that are impossible to ignore.

Quick recap of predictive coding and autism.

In predictive coding models of the brain, perception is made up of prediction and sensory input. “Normal” brains lean heavily on priors (models of what the world usually is) and only update when error signals are strong. Most accounts of autism describe either weak priors (less predictive or top-down bias…meaning each sensory signal hits with more raw force) or overly precise priors (my predictive model is too narrow or rigid…meaning any deviation is a kind of error for me). Either way, in practice, the world feels less stabilized by consensus for me. I don’t get to lean on the stories most people use to blur and smooth reality.
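To make the mechanism concrete, here’s a toy sketch in Python. This is entirely my own simplification (the function name `perceive` and the single-level update are hypothetical…real predictive-coding models are hierarchical and far richer), but it captures the core idea: how much a surprising signal moves your percept depends on the relative precision of prior vs. input.

```python
# Toy precision-weighted perception: one belief, one observation.
# "Precision" = inverse variance = how much each source is trusted.

def perceive(belief: float, obs: float,
             prior_precision: float, sensory_precision: float) -> float:
    """Return the updated percept after one precision-weighted update."""
    # The gain on prediction error grows as sensory precision dominates.
    gain = sensory_precision / (sensory_precision + prior_precision)
    prediction_error = obs - belief
    return belief + gain * prediction_error

# A "neurotypical" configuration: the strong prior dominates, so a
# surprising signal barely moves the percept.
nt = perceive(belief=0.0, obs=10.0, prior_precision=9.0, sensory_precision=1.0)

# A "weak prior" configuration: the same signal hits with nearly raw force.
weak = perceive(belief=0.0, obs=10.0, prior_precision=0.1, sensory_precision=1.0)

print(nt)    # 1.0   (heavily smoothed toward the prior)
print(weak)  # ~9.09 (percept tracks the raw input)
```

Same input, wildly different percepts…the whole difference is where the precision sits.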

While listening to a recent interview with Daniel Schmachtenberger, I was reminded that all models of reality are simplifications…they leave things out. Neurotypical perception is itself a model, with a heavy filtering function…a consensus map. From this perspective, if my priors are weaker (or overly precise)…I’m closer to a raw reality where models break down. For me, the “gaps” are almost always visible.

From there, it’s an easy jump to Baudrillard’s warning that modern societies live inside simulations (self-referential systems of signs, detached from reality). If I feel derealization…less of a “solid self” (I do)…that’s probably simply what it’s like to live in a symbolic order but not buy into it fully. The double empathy problem is essentially me feeling the seams of a simulation that others inhabit…seamlessly.

This “self” itself is a model. It’s a predictive story your brain tells to stabilize your experience. And because my priors about selfhood are weaker (or less “sticky”), my sense of “I” feels fragile, intermittent, unreal, etc. In this fucked up place that the majority of people call “reality” (where everyone’s popping anti-depressants and obliterating the planet), my experience looks like “derealization” or “depersonalization,” but to me, it’s a kind of clarity…a deep unignorable recognition that the self is a construct. What becomes a deficit in this place (“I can’t hold reality/self together the way others do”) is a form of direct contact with the limits of models of reality (vs reality itself).

Which leads me to a burning question I’ve had for a while now: what are the chances that predictive coding’s distinction between “normal” and “autistic” actually reveals the neurotypical configuration to be the maladaptive one…a set of priors/assumptions about the world that, far from being a healthy adaptive baseline, are overfitted to an inaccurate model of reality?

Neurotypical perception leans more on shared, top-down priors (context, expectations, social norms, etc.). That makes perception stable and efficient but extremely bias-prone. (Studies show that neurotypicals are more susceptible to visual illusions than autistic groups.)

Like I mentioned before, autistic perception has been described as involving weaker/less precise priors (Pellicano & Burr), or over-precise prediction errors…simply a different precision allocation (Van de Cruys’s HIPPEA; Friston/Lawson’s “aberrant precision”). Functionally, both mean less smoothing by priors and more bottom-up detail, with (so they say) costs for generalization and for volatile environments. Their conclusion is that autistic people “overestimate” environmental volatility (we update too readily), while NTs are able to charge through with their predictive models intact.
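The “overestimating volatility” claim boils down to a learning rate: how readily an agent lets new data move its model is an implicit bet on how changeable the world is. In a stable-but-noisy world, their argument does hold…here’s a toy Python simulation of it (my own hypothetical setup; `tracking_error`, the noise level, and the learning rates are arbitrary choices, not from any cited paper):

```python
import random

random.seed(0)  # deterministic toy run

def tracking_error(learning_rate: float, true_value: float = 5.0,
                   noise: float = 2.0, steps: int = 2000) -> float:
    """Mean absolute error of a simple tracking agent in a STABLE world."""
    belief, total = 0.0, 0.0
    for _ in range(steps):
        obs = true_value + random.gauss(0, noise)   # noisy but unchanging reality
        belief += learning_rate * (obs - belief)    # update toward the data
        total += abs(belief - true_value)
    return total / steps

slow = tracking_error(learning_rate=0.05)  # heavy smoothing ("NT-like" priors)
fast = tracking_error(learning_rate=0.9)   # updates readily ("autistic-like" per the literature)

print(slow < fast)  # True: in a stable world, the fast updater just chases noise
```

So far, so good for their story. The catch is the hidden assumption about what kind of environment we’re in, which is exactly where I’m headed.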

And I have a real problem with this interpretation that I’ll get to shortly. But first, let’s explore the trajectory of the sort of consensus reality that I consider most neurotypical people to be living in…that set of strong priors/assumptions about the world that civilization shares. Because I have a hunch that its divergence from reality is an inevitable feature, not some sort of “bug” to be patched.

If we treat civilization itself as a kind of giant predictive-coding system, its “life story” looks eerily like the brain’s, where the priors are consensus itself.

I see consensus reality as a stack of expectations or assumptions about the world shared by enough people to make coordination possible. Religion, law, money, the idea of a “nation”…these are all hyperpriors (assumptions so deep they’re almost never questioned). They make the world legible and predictable (people can trust a coin or a contract or a census).

And just like in individual perception, civilization’s priors aren’t about truth…they’re about usefulness for coordination. A shared model works best when it ignores inconvenient detail and compresses messy reality. Divergence from reality is a feature…the system actually becomes stronger by denying nuance. For example, “grain is food” (simple, measurable, taxable). But reality is actually biodiversity, shifting ecologies, seed autonomy, etc. See how that works?

This divergence from reality deepens in a few ways, the most obvious being self-reinforcement. Once a model is institutionalized, it defends itself with laws, armies, and propaganda. It also suppresses signals…inputs that contradict priors are treated as “prediction errors” to be minimized, not explored. And, back to Baudrillard, the model (that is, civilization) refers increasingly to itself rather than to external reality (markets predicting markets, laws referencing laws, etc.). The longer it runs, the more this consensus model fine-tunes and solidifies its own reality.

From a civilizational perspective, divergence from reality is coherence. If everyone buys into the strong priors (money is real, my country is legitimate, my god demands I go to church), coordination scales up and up. The obvious cost is that the model loses contact with ecological and biological feedback…the “ground truth.” Collapse shows up when prediction error (ecological crises, famines, revolts) overwhelm the significant smoothing power of the priors.

The bottom line is that civilization’s consensus model requires detachment to function. Life-as-it-is needs to be turned into life-as-the-system-says-it-is. In predictive coding terms, civ runs on priors so heavy they no longer update. In Baudrillard’s terms, simulation replaces reality. And in my own lived experience (as a “neurodivergent” person), derealization isn’t some kind of personal glitch…it’s what the whole system is doing, scaled up.

This whole thing gets even more interesting when I think more deeply about the term “consensus.” It implies something everyone’s contributed to, doesn’t it? But that clearly isn’t the case. What’s actually happening is closer to consent under conditions…most people adopt civilization’s model because rejecting it carries penalties (exile, poverty, prison, ridicule). It seems to me that the “consensus” is really an agreement to suspend disbelief and act as if the shared model is real, regardless of who authored it.

Whose model is it, then? It depends on when and where you’re living. It could be state elites…the kings, priests, and bureaucrats who historically defined categories like grain tallies, borders, and calendars. It could be economic elites…the merchants, corporations, and financiers who shape models like money, markets, and “growth.” It could be cultural elites…the professors, media, and educators who maintain the symbolic order (morality, legitimacy, values). I don’t think it’s contentious to say that whatever the model, it reflects the interests of those with the leverage to universalize their interpretation. Everyone else gets folded into it as “participants,” but not authors.

The commonly accepted narrative is that Homo sapiens won out over other human species due to our ability to coordinate, and that nowhere is this coordination more evident than in the wondrous achievement we call Civilization. But why isn’t anyone asking the obvious question…coordination toward whose ends? Because coordination certainly isn’t “for humanity” in some neutral sense…it’s for the ends of those who set the priors. Grain-based states are coordinated bodies for taxation, armies, and monuments. Modern market democracies are coordinated bodies for consumption, productivity, and growth. The “consensus” isn’t valuable because it’s true…it’s valuable because it directs billions of bodies toward a goal profitable or stabilizing for a ruling class.

Now we come up against the double bind of participation (as an autistic person, I’m intimately familiar with double binds). You may not have authored civilization’s model, but you can’t opt out without huge costs. Not participating is madness or heresy. I’m a dissenter and so I’m “out of touch with reality.” The pathologization of neurodivergent mismatch translates to me as: “You’re wrong. The consensus is reality.” To which I say, not only is consensus reality not reality…it isn’t fucking consensus, either. It’s a cheap trick…the imposition of someone else’s priors as if they were everyone’s. Calling it consensus simply disguises the extraction of coordination.

I want to talk now about Vermeulen’s (and others’) conclusion that the weaker (or overly precise) priors that characterize autism come at the cost of not being able to navigate volatile environments.

To me, this is just another example of the decontextualization rampant in psychology and related fields (I see it all grounded in a sort of captivity science). And, in this case, the context that’s not being accounted for is huge. I think Vermeulen and others falsely equate volatile SOCIAL environments with volatile environments in general.

It’s been my experience (and that of others) that autistic people perform quite well in real crisis situations, where social smoothing has no real value (or can even be a detriment). But Vermeulen seems to think my ability to function is impaired in the face of volatility (he makes some stupid joke about how overthinking is the last thing you want to do if you cross paths with a bear…ridiculous). I find the argument spurious and context-blind (ironic, considering he defines autism itself as context blindness).

The argument is as follows:

Because autistic perception is characterized by weaker or overly precise priors, each signal is taken “too seriously” (less smoothing from context). In a volatile environment (fast-changing, noisy, unpredictable), this supposedly leads to overwhelm, slower decisions, or less stability. Therefore, autistic priors are maladaptive in volatility. B-U-L-L-S-H-I-T.

Let’s pull the curtain back on Vermeulen’s hidden assumption.

When researchers say “volatile environments,” they clearly mean volatile social environments. All you have to do is look at the nature of the studies, where success depends on rapid uptake of others’ intentions, ambiguous cues, unspoken norms, etc. In that kind of volatility, having weaker social priors (not automatically filling in the “shared model”) is costly. But it’s a category error to generalize that to volatility in all domains.

In environments characterized by social volatility, strong priors (the ones neurotypicals rely on) smooth out the noise and let them act fluidly. I’ll grant you that. But what the fuck about ecological volatility? Physical volatility? Hello?!? Sudden threats, immediate danger, technical breakdowns, real-world crises…where over-reliance on priors blinds you to what’s happening (“This can’t be happening!!”, denial, social posturing). Here, weaker or differently precise priors mean fidelity to incoming data and clearly confer an advantage.
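The same toy agents make my point for me: flip the environment from stable-and-noisy to suddenly-shifting and the ranking reverses. (Again, a hypothetical sketch of my own…`post_shock_error`, the jump size, the noise, and the learning rates are all arbitrary.)

```python
import random

random.seed(1)  # deterministic toy run

def post_shock_error(learning_rate: float, steps: int = 200) -> float:
    """Mean absolute error AFTER an abrupt shift in the true value."""
    belief = 5.0       # calibrated to the old, stable world
    true_value = 50.0  # the bear shows up: reality has already jumped
    total = 0.0
    for _ in range(steps):
        obs = true_value + random.gauss(0, 2.0)   # the new reality, noisily
        belief += learning_rate * (obs - belief)  # update toward the data
        total += abs(belief - true_value)
    return total / steps

smooth = post_shock_error(learning_rate=0.05)  # strong-prior agent, heavy smoothing
alert = post_shock_error(learning_rate=0.9)    # weak-prior agent, data-faithful

print(alert < smooth)  # True: in real (non-social) volatility, fidelity to input wins
```

Heavy smoothing means the strong-prior agent is still insisting on the old world long after it’s gone…which is exactly the “This can’t be happening!!” failure mode. Whether a learning rate is “maladaptive” depends entirely on the environment you drop it into.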