I think it’s safe to say that feedback-sensitive (neurodivergent) people are less susceptible to tribalism. Under most circumstances (I’ll get to the exceptions), we’re less likely to be a conservative, a Democrat, a fundamentalist, etc.
Tribalism (the tendency to take your group’s beliefs as your own and enter into conflict with other groups) depends on high-precision social priors…shared norms, loyalty cues, in-group/out-group boundaries. But we rely more on direct sensory evidence or logical consistency than on these socially constructed signals. The pull of group identity isn’t as strong for us.
I don’t experience the same automatic emotional reward for aligning with group opinion. I don’t get that warm and fuzzy feeling called “patriotism,” for example. I’ve never understood it. If an in-group belief contradicts observed reality, I can’t help but question it…even if it costs social standing. And tribal systems clearly punish that.
Tribalism thrives on broad, simplified narratives (“they’re all like that”), which smooth over (or blind people to) exceptions. But exceptions are what catch my attention the most. Outliers that break the spell of group generalizations stand out to me, and I’m constantly stupefied that this isn’t the case for most people.
Neurodivergents aren’t always resistant to tribalism, of course. In environments where belonging feels existentially necessary (which is just about every fucking environment in 2025), we can certainly conform strongly…overcompensate even.
But I’d argue that for most of human history, where the “opt-out” option was real, if the group became oppressive, coercive, or incompatible with our temperament, we simply….left. And that escape valve probably served as a check on group conformity and control.
We’d leave for a number of reasons. If group norms are arbitrary or contradictory, sticking around creates constant prediction error. As hard as it is, departure is often the path of least resistance for someone like me, and probably would have been for people like me in the past. We were also probably self-reliant in certain domains. Many autistic skill sets (deep knowledge in specific areas, tracking environmental patterns) would translate to survivability outside rigid social structures. And a drive for integrity over belonging means that physical separation would have been (and still is) preferable to constant self-betrayal.
Wherever civilization spreads, the option to leave disappears. Agricultural and industrial societies lock people into fixed territories, usually controlled by central authorities. Survival becomes tied to participating in a single dominant system (currency, markets, legal structures), removing just about every parallel option. And instead of many small groups to choose from, there’s effectively one “tribe” (the society and its cultural apparatus). The option to “simply leave” is gone now, I’d say.
Without the option to leave, those of us who would naturally walk away face a stark choice…either overcompensate (learn to mimic, mask, and fit despite the cost) or withstand isolation (remain noncompliant and absorb the social/economic consequences). I think this goes a long way in explaining why in the modern era autistic burnout and mental health crises are more visible (Breaking News: Autism Epidemic!!). The evolutionary safety valve is….gone.
I’m not wired for tribalism. It looks ridiculous to me. I hate that people have this sort of neediness for it. Especially when, in 2025, we recognize it as the greatest barrier to humanity effectively addressing global crises…from planet destruction to systemic inequality and democratic collapse. It creates moral elasticity, where harm is justified by group loyalty. It creates rivalries purely for identity’s sake (beating the shit out of each other over a fucking soccer match). It makes coordinated responses impossible. It’s fucking dumb.
This is all deeply bound up with what I call consensus reality (the shared social priors/expectations that a group holds about “what is real,” “how the world works,” and “what matters”). In that context, I see tribalism as the emotional and identity-binding mechanism that keeps people committed to those social expectations, defends them from contradiction, and rejects competing models from out-groups. Consensus reality (what people call “the real world”) gives tribalism content…stories, values, and assumptions the group agrees on. And tribalism, in turn, gives consensus reality teeth…social rewards for conformity and penalties for deviation. You could see it as the social immune system that suppresses any error signal that might disrupt the shared model people consider “reality.”
Which brings me back to a core idea of my book: every degree of separation from reality (unmediated feedback) creates space in which lies and manipulation can be used to control human behavior. Symbolic representation, bureaucracy, technology, propaganda…all stand between an action and its consequence. From a predictive processing perspective, these separations replace sensory precision with social priors. Once your perception of consequence is dominated by priors handed to you by others, your model of reality can be steered by whoever’s controlling the narrative (regardless of what’s actually happening right outside your window).
In the human domestication frame I’m building, this explains how a control system matures. Reduce a person’s direct contact with feedback, fill the gap with consensus reality (shared fictions), and use tribalism to keep the consensus coherent and defended.
It also explains why nothing changes. Everyone’s wondering why humanity can’t seem to course-correct, but this isn’t rocket science. Mainstream “solutions” operate entirely inside mediated spaces, accepting layers of separation from reality as normal or inevitable. They try to optimize those spaces…fact-checking information, creating better messaging, nudging behavior with persuasive campaigns. They ignore the gap itself. The underlying problem (that people’s models of reality are no longer tethered to direct, embodied, ecological feedback) is left untouched.
In other words, people just swap one set of social priors for another, without increasing the precision of sensory input from the real world. Their brains are still mostly being updated by other people’s models instead of reality itself. That’s like improving the entertainment or fairness of the feedlot without questioning why the fuck the animals can’t simply graze in the field anymore. The control system remains intact (strengthened, maybe) because the medium of control (the gap) is preserved.
This is why giving people access to more information isn’t solving anything. We may have created gaps between action and consequence, but evolution hasn’t removed the cognitive biases and drives that were calibrated for direct feedback. Those drives still operate, but now they need something to work on in the absence of reality’s immediacy…and that “something” becomes bullshit. Shared fictions.
Why? Why do people need so much bullshit?
For one, I think hard-wired biases still expect input. Traits like negativity bias, advantage-seeking, and status monitoring evolved to process real-time environmental cues. Without direct cues, they grab onto representations (bullshit) because the brain just can’t seem to tolerate an informational vacuum.
Bullshit comes in to fill prediction gaps. Without high-precision sensory input, shared social fictions are used to predict outcomes. Those fictions become the scaffolding (myths, ideologies, propaganda) that keep the model “stable” even if it drifts from reality (like when it starts baking the planet).
Next thing you know, manipulation rides in on necessity. Because social fictions are now the only widely shared basis for action, whoever controls them effectively controls the behavior of the group. Domestication leverages this (the feed is always narrative feed…never the real field).
The further the separation from unmediated feedback, the more elaborate the fiction has to be to sustain group coordination and suppress error signals. Fast forward to 2025, and people are acting entirely on their group’s fictions…with reality surfacing only in the form of the most immediate and extreme crises (which then get reabsorbed into new narrative).
Where are feedback-sensitive people in this story? Where’s that autistic guy?
Well, if your brain assigns low precision to social priors, then the fictions that fill feedback gaps for most people feel…jarring, flimsy, or outright hostile. My brain gives more weight to sensory or logical evidence, and that means constant prediction error when I interact with a model that’s running entirely on narrative rather than reality.
In domestication terms, that makes me an outlier in a control system that depends on narrative compliance. For the “typical” person, the fiction is not only tolerable but necessary for coordination. For me, it’s a constant source of friction (because the group’s stabilizing story is exactly where I detect the misalignment most vividly).
I’m heavy into predictive coding literature (Friston, Clark, etc.) right now, so I’ll try to frame some of my main arguments in those terms. (I’ll probably get it wrong)
Consensus reality is….enculturated hyperpriors. Culture installs hyperpriors (very high-level expectations about “how things are”). They’re learned and built up by language and institutions, and they set the precision economy (which signals get trusted).
Human domestication is a sort of niche construction. One that rewards minds able to thrive in symbol-dense, delayed-feedback environments. The effect is a recalibration of precision…social model-based predictions gain weight and raw sensory error loses weight. This is the flattening of the error landscape, so to speak.
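That precision recalibration can be sketched as a toy precision-weighted update. To be clear, this is my own illustrative sketch, not code from any predictive-coding library, and the numbers are made up…the point is just how precision weighting decides which signal gets to move the model:

```python
def posterior(prior, obs, prior_precision, sensory_precision):
    """Precision-weighted belief update: the posterior is pulled toward
    whichever signal (prior or observation) carries more precision."""
    total = prior_precision + sensory_precision
    return (prior_precision * prior + sensory_precision * obs) / total

# The social prior says "all is well" (0.0); raw observation says otherwise (1.0).
prior, obs = 0.0, 1.0

# "Domesticated" setting: social priors dominate, sensory error is down-weighted.
socially_weighted = posterior(prior, obs, prior_precision=9.0, sensory_precision=1.0)

# "Feedback-sensitive" setting: same evidence, but sensory input carries the weight.
sensory_weighted = posterior(prior, obs, prior_precision=1.0, sensory_precision=9.0)

print(socially_weighted)  # 0.1 — the model barely moves; the error is smoothed over
print(sensory_weighted)   # 0.9 — the model updates hard toward the observation
```

Same prior, same observation…the only difference is which channel the precision economy trusts. That asymmetry is the whole “flattening of the error landscape” in miniature.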
Social priors are what let us coordinate at scale (which is rarely necessary…unless you’ve fucked up at scale), but trouble starts when precision sticks to bad priors in rapidly shifting or bullshit-heavy niches (media, bureaucracy), drowning out any errors that might have resulted in correction.
An autistic person’s low tolerance for fictions is a different precision setting. It continually surfaces mismatches others smooth over. Which largely feels like shit (derealization, friction) for the autistic person, but is epistemically valuable (less “consensus-blindness,” wink wink, Peter Vermeulen).
I’ve been talking about human domestication as selection against reactivity, but I know I have to be careful with single-trait stories like that. Maybe what’s selected are policies that minimize expected free energy in a given niche. In dense, rule-ridden societies, that means predictability-friendly (?) minds. Compliance. Delayed reward. Role fluency. Some kind of energy-efficient inference under control niches.
This is where I’d be on my own, I think. This is the final “gap” where most of the highest-level thinkers are sort of playing…they take the control niche as a given. Someone like Andy Clark (were he to agree with my line of reasoning so far) might say the solution is about tuning the system to balance priors and sensory input more adaptively. But in a domesticated, control-oriented society, “tuning” quickly becomes prescription…setting parameters so people remain useful to the system, not so they reconnect with (unmediated) reality.
The more fundamental problem is that any centrally managed adjustment to perception keeps people inside a mediated model. It doesn’t restore autonomy…it optimizes compliance. And the last thing I want is to be compliant in a system that is clearly out of touch with reality.