-
Human Self-Domestication…selection against autonomy, not hot heads.
Richard Wrangham frames selection against reactive aggression (he uses the term “hot heads”) as the driver of human self-domestication and argues that our level of proactive aggression largely remained the same. He describes these as distinct evolutionary strategies, each with different adaptive costs and benefits.
To be clear, reactive aggression is impulsive, emotionally-driven violence in response to provocation or frustration (e.g. bar fights, chimpanzee dominance squabbles, etc.). Proactive aggression is calculated, planned violence deployed strategically for advantage (e.g. ambushes, executions, coordinated raids).
Wrangham’s central point is that self-domestication arises when reactive aggression is consistently punished (and culled), while proactive aggression not only persists but is sometimes institutionalized (authorities get a monopoly on violence).
His reasoning is as follows.
In small-scale societies, reactive aggressors were costly to group stability. They disrupted cooperation, created unpredictability, and risked alienating allies. With language and coalitionary power, groups gained the ability to collectively punish or kill these “hot heads.” Over many generations, this reduced the frequency of impulsively aggressive temperaments in the gene pool. The result is a calmer, more tolerant baseline disposition in humans compared to chimpanzees…one of the classic “domestication syndrome” traits.
What rubs me the wrong way is how quickly Wrangham assumes, out of all the traits that make up domestication syndrome, that reactive aggression is the one selection was acting on. Why wouldn’t the selection pressure be for proactive aggression, for example? Wrangham admits that proactive aggression was reinforced in human evolution. We became better at planned violence (executions, warfare, conquest) than any other primate. Crucially, proactive aggression is socially sanctioned…it’s framed as justice, punishment, or defense of the group. That makes it evolutionarily advantageous, not disadvantageous. In Wrangham’s model, the ability to conspire and kill reactively aggressive individuals is itself an expression of proactive aggression, and therefore part of what made us more cooperative at scale.
This hypothesis feels reductive to me. Domestication in other species involves selection for predictability, docility, and compliance, not just low reactivity. By centering only on reactive aggression, Wrangham treats self-domestication as a paradoxical success story…calmer humans enabled cooperation, and cooperation enabled civilization. It leaves out what civilization actually does…the flattening of error landscapes, where any form of reactivity (not just aggression) becomes maladaptive in large, controlled groups.
I’ve been thinking seriously about whether an argument could be made, just as strong or stronger than Wrangham’s, that selection for proactive aggression was the real driver in the human domestication story.
Large-scale violence is a consistent theme in the emergence of complex societies…from the mass graves of the Neolithic to the conquest states of Mesopotamia, Mesoamerica, and beyond. Warfare, conquest, and raiding were not incidental to civilization. They were the engines of state formation, with proactive aggression (planned and coordinated violence) clearly rewarded at both the genetic and cultural level.
Take the Y-chromosome bottleneck (5,000-7,000 years ago). It shows that ~90-95% of male lineages were extinguished, leaving only a few dominant bloodlines. This is genetic evidence of the real pattern of civilizational “coordination”: violent conquest and reproductive monopoly by elite men. Where in civilization’s history is Wrangham’s peaceful coalitionary suppression of “bad apples”? I just don’t see it. “Super-ancestor” events (e.g. Genghis Khan’s lineage) show the same thing in miniature. Proactive, organized aggression yields massive reproductive skew.
In fact, let’s turn to reproductive skew and polygyny. Even conventional historical narratives tell a story of high-status males (kings, chiefs, emperors, warlords) with harems, concubines, and multiple wives. These are outcomes made possible by proactive aggression…conquest, enslavement, and the monopolization of resources. Lower-status men were excluded from reproduction, not because they were “too reactive” (though those certainly would have been excluded as well), but because they lost wars, were enslaved, or were killed.
Proactive aggression isn’t just violence. It’s long-term planning, coalition-building, deception, and symbolic justification (myths, laws, and religions sanctifying violence make up most of the human history book). These are precisely the traits that expand during the civilizing process…organizational capacity, abstract rule-following, and symbolic reasoning, all in service of controlling large groups.
I have a few thoughts on why Wrangham favors the other story (selection against reactive aggression). It links directly to his bonobo analogy (their lower reactivity compared to chimps). And it fits with domestication syndrome traits (softer faces and reduced baseline violence), of course. But these seem weak to me. What it comes down to, I believe, is Wrangham gravitating toward an age-old optimistic narrative…humans becoming more cooperative (from the “less hot-headed” angle), writing poems, and painting the Sistine Chapel. To me this is yet another just-so story tilted toward optimism. Real, documented human history (and the present, to a large extent) reads like selection for manipulative, proactive violence. Those who excel at strategic violence and symbolic control reproduce disproportionately. Full stop. This fits much better with what we see in the pages of history…runaway systems of control, hierarchies, and narrative manipulations that still structure our domesticated condition. These are better explained as the costs of selecting for proactive aggression than as some sort of “goodness paradox”.
In fact, it might be a silly thought experiment, but who’s to say that if it were possible to actively select for proactive aggression in other species, domestication traits wouldn’t appear?
To me, domestication syndrome (floppy ears, smaller brains, prolonged neoteny, pigmentation changes, altered reproductive cycles) arises because selection pressure narrows the error landscape of a species. The mechanism most often discussed is neural crest cell changes…but the reason for those changes could be any number of selection pressures. In foxes, it was tameness toward humans. In humans, Wrangham says it was lower reactive aggression. But it could also plausibly be selection for predictability, planning, and controlled aggression if that’s what the system demanded (and did, and does!).
The core idea is that if you reduce the payoff for being “unpredictably reactive” and increase the payoff for being “strategically compliant,” the biology shifts. The neural, hormonal, and developmental systems adapt to reward that niche. The syndrome may look similar (the smaller brains, juvenilization, etc.) because what’s really being selected for is attenuation of wild-type reactivity in general.
Let’s move away from what I see as Wrangham’s too-narrow focus and broaden this narrative a bit.
Let’s look at the human story from a predictive coding lens, and consider scarcity as a selector. In times of ecological stress, groups face more prediction errors (crops fail, animals migrate, rivers dry up). Some individuals resolve error by updating their model (adjusting expectations, moving). Others resolve error by updating the world…forcing it into alignment with their model. The latter is the logic of domestication…bend plants, animals, landscapes, and people into predictability.
From here, we can see proactive aggression as control in action. On the ground, this isn’t abstract. Pull up wild plants and keep only the docile grains. Cull the fence-jumping sheep and reactive roosters…breed the calm ones. Raid nearby villages, enslave, execute dissenters, and reward compliance. This is proactive aggression. Planned, systemic, future-oriented control. It’s violence as policy.
This makes me think of how Robert Kelly frames humanity’s cultural revolution. He proposes that symbolic thought makes it possible to imagine not just “what is,” but “what should be.” And “what should be” becomes a shared prior (model of the world) that groups can coordinate around…even if it doesn’t match reality. Once you can coordinate around a model, you can impose it, and enforce conformity within the group. To me, that’s proactive aggression (if we’re still calling it that) elevated…control not only of bodies now, but of perception and imagination.
What disappears under a system like that? Well, for one, reactive aggression clearly becomes intolerable. It represents autonomous feedback (an individual saying “no” in the moment). In control systems, that kind of unpredictable resistance is punished most severely. You know that. Slaves who rebel are killed. Chickens that cause problems are culled. Men who resist capture are killed first. The system slowly culls “reactors” and favors the predictable (those who update their selves rather than the system).
This is what I see as the flattening of the civilizing process…the rock tumbler effect. Proactive aggression is the abrasive force that flattens everything…landscapes, genetic diversity, behavioral variation, etc., etc. Reactive aggression is just one of the first “edges” to be ground away. A byproduct of selecting for proactive control. A footprint of the real selection pressure. And what remains is a domesticated phenotype…more compliant, less volatile, more predictable.
Not convinced? Try this experiment.
Write these two hypotheses out on a sheet of paper:
- “Coalitions punish hot-heads -> reactive aggression selected against -> cooperative, domesticated humans emerge.”
- “Coalitions punish (in- and out-group) resisters to group control -> resistance (often expressed as reactive aggression…rebellion, resistance to domination) selected against -> compliant, predictable humans emerge.”
Now read as many history books as you can, testing these as you go. Take notes.
Only one of these hypotheses explains why proactive aggression thrives where reactive aggression doesn’t. There is no paradox. Proactive aggression isn’t punished, because it aligns with group objectives, and what disappears isn’t “bad tempers” but unmanaged defiance. Resisters are killed. Compliant captives are taken. Rebels are executed. Compliant laborers survive.
This is selection against the unpredictable expression of autonomy that disrupts control.
-
Different Maps of Reality
Predictive processing has a theory for autism.
First, a general review of predictive processing (PP) itself…
According to PP, my brain is always predicting what’s about to happen and what input means. These predictions are called priors (I think of all the priors in my brain as my model of reality). The data coming in via my senses is compared against priors. If it fits (my prediction is correct), my perception feels smooth (I see this as that “autopilot” mode that autistic people envy neurotypical people for in social situations). If it doesn’t fit, I experience a prediction error…and my brain will attempt to update my priors so that I don’t experience that error in the future (or update the world so that it fits my model of reality…but we’ll leave that for now). A prediction error is experienced as discomfort, or pain, or frustration, anger, etc…
Essentially, the brain has a map of reality that it navigates the world with. Or rather, it navigates the map (the brain is trapped in that dark cavity I call my skull), and updates the map only when incoming data causes an error, forcing it to (and when updating the world to fit the map isn’t an option).
Social priors are the part of the map labeled “other people” and “me, in the world of people”….predictions about other people and me-in-relation-to-other-people.
More than any other part of life/reality, social life is full of ambiguous signals…tone of voice, facial expressions, body language, context-dependent words and actions, irony, “unspoken rules,” etc. The neurotypical person leans on strong social priors (predictions) like, “A smile means friendliness,” “If someone asks ‘how are you,’ they don’t want a literal health report,” and “This situation calls for deference/compliance.” I think of these as learned shortcuts. They smooth ambiguity so quickly that the (neurotypical) person doesn’t notice how much interpretation they’re really doing (think of a game engine rendering details in real time and high resolution).
PP proposes that in “autistic” perception, these high-level social predictions are either less weighted (weaker…signals don’t get automatically collapsed into expected meanings…the brain says “hmm, not sure…keep checking the data”) or over-precise (too narrow…the brain locks a prediction too tightly on detail, so it flags even small deviations as error). Regardless of which it is, functionally the result is similar…more error signals, less smoothing.
According to this explanation, as an autistic person, my experience of interacting with other people isn’t the “typical” experience. I need more explicit clarification (because cues don’t self-resolve). Social situations feel volatile or unpredictable to me (because I’m tracking details others are smoothing over). And I need extra cognitive effort to “keep up,” because I’m building my model of reality more deliberately…less automatically.
Imagine me around a campfire with a few (neurotypical) people. A rustling sound comes from a nearby bush and attracts everyone’s attention. One person says something like, “It’s just the wind.” In short order, everyone returns to whatever they were doing. The person who spoke has a strong prior (prediction) that the world is stable and there’s no danger to worry over. And everyone there (other than me) has a strong prior that someone saying “It’s just the wind” means it really is just the wind. A sort of bias where, out of all the data coming in, the data coming from other people is prized most highly. My response is different. I say something like, “It could be the wind, but it could be something else.” I have a weaker reliance on predictions…I weigh sensory input more highly than my fellow campers.
You can see how weaker social priors (predictions/map of social reality) would make it hard for me to collapse ambiguous social input into the “expected” consensus meaning. I see more of the true uncertainty of the world…I don’t have it smoothed away by some automatic internal mechanism.
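If it helps to see that weighting as arithmetic, here’s a minimal sketch (mine, not anything from the literature): one Gaussian-style prior combined with one ambiguous cue, where the only thing that changes between the two runs is how much weight the prior gets relative to the incoming data. The numbers are invented for illustration.

```python
# A minimal sketch of the precision-weighting idea (illustrative numbers only).
def posterior(prior_mean, prior_precision, obs, obs_precision):
    """Precision-weighted blend of a prior and an observation.

    Higher prior_precision -> the prior dominates (ambiguity gets smoothed away).
    Higher obs_precision   -> the data dominates ('keep checking the evidence').
    """
    total = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * obs) / total

# The campfire rustle: 0 = "nothing there", 1 = "something is there".
obs = 0.6  # an ambiguous cue, weakly suggesting "something is there"

# Strong prior ("it's just the wind") weighted far above the data: the estimate barely moves.
print(posterior(prior_mean=0.0, prior_precision=9.0, obs=obs, obs_precision=1.0))  # ~0.06

# Weaker prior (or more precisely weighted sensory error): the estimate tracks the data.
print(posterior(prior_mean=0.0, prior_precision=1.0, obs=obs, obs_precision=3.0))  # ~0.45
```

Same cue, two very different percepts…the entire difference is in the weighting.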
That’s the gist of it. And I generally agree with this description of how I experience other people. And as an explanation for the frustrations I experience socially, it certainly feels spot on.
But is it only social priors that are weaker (or overly precise) for me? Or is it priors in general? Depending on who you ask in predictive coding circles, the answer is different.
Early accounts (e.g. Pellicano & Burr, 2012) suggested that autistic people have weaker priors across the board, not just social ones. That would mean that I rely less on prediction in all areas of life. Their evidence for this included the fact that autistic people are less fooled by certain visual illusions (like the hollow mask illusion or context-driven size illusions) and the whole savant piece (enhanced perception of detail and irregularities in non-social patterns…sounds, textures, math, mechanics). The story told was that my “weak priors” are my global style of perception…my world is simply less smoothed…more data-driven.
Later accounts (e.g. Van de Cruys, Lawson, Friston) argue it’s not globally weaker, but misallocated precision. In other words, overly precise priors at one level (rigid routines, intense interests), too little precision at another (ambiguous social inference), and sometimes overly precise weighting on prediction errors (making every mismatch I feel seem urgent). This would explain why some autistic people are incredibly good at pattern-based forecasting (weather models, coding, chess) but struggle in fluid, implicit social contexts.
Both camps agree that social priors are a “special case,” I think. Social environments rely heavily on very fuzzy priors like shared norms and implicit meanings. In the predictive coding model, those are exactly the kinds of priors autistic perception would likely either underweight (“I need to check the data”) or overweight in detail (“I expect exactly X, so any deviation throws me”). In other words, social priors are where my difference shows up most glaringly…but the difference itself might apply everywhere.
Ok, the neurotypical neuroscientists have had their turn. Pass me the mike now, please.
My first reaction to all this…well, anger. And perplexity. I simply don’t understand how, given all the forms of data you can base your map of reality on, you would choose….other people? With their confusion, and duplicity, and moving moralities, and drives, and, and, and….Why? Why that?
It’s not that I don’t “get” what social priors are. They’re about shared assumptions that keep groups coordinated. What counts as polite, what role you’re supposed to play, what’s “normal” in a given social setting, which explanations everyone in the group nods along to, even if they’re flimsy….that all makes sense to me so long as there is a group goal. As a sort of necessary evil in service of achieving an objective that requires group coordination…I get it. But as a way to live your life? Intentionally mapping your reality on the fuzziest, most contingent, and most contradictory signposts you can find? That confuses the fuck out of me.
Let’s take a closer look at social priors (the part of my reality map that has to do with other people). They’re arbitrary…different from group to group, era to era. They constantly shift…flipping overnight (today’s taboo is tomorrow’s norm). They’re completely detached from ecology…attached instead to abstract concepts like appearance, hierarchy, interpersonal signals. And they leave the door open 24/7 to gaslighting…if everyone else insists the emperor has clothes, the “consensus” is real enough to punish me even if it’s bullshit.
Why not map your reality on data that makes sense? Ecological priors like gravity, cycles of day/night, seasons, and biology are stable, and are directly tethered to reality (what happens predictably and significantly for survival). Embodied priors, like the way your body predicts balance, hunger, and threat, are constantly and deeply tested through feedback loops that are largely transparent. Social priors? The least tethered? The most prone to drift and self-reference? Really? Really?
On a theoretical level, I try to understand why neurotypicals lean exclusively on this messiest area of the map. Social priors smooth uncertainty, which must feel good. They also enable fast coordination in groups…and those neurotypicals sure like being in groups. There, they’re rewarded…”getting along” matters more (socially, professionally) than being right. But treating them as reality itself? To the point where questioning them is seen as dysfunction rather than discernment. Jesus Christ.
I know this is the double empathy problem at work. I’ll never be able to truly empathize with the neurotypical condition. And if I had to state my position in relation to theirs, as dispassionately as possible, I’d simply say that I’d rather anchor my sense of the world in ecological and embodied feedback than in fragile, shifting group models. And that this position of mine (it’s not a choice) is not dysfunction unless group coordination is forced upon me as my only means of survival. That my position is arguably closer to “baseline life” than the civilizational overlay.
-
ramble (predictive coding, autism, simulation)
I have predictive coding (à la Clark, Friston, Vermeulen), autism, Schmachtenberger, Baudrillard, Hoffman, and some recent experiences tumbling about in my brain, desperately looking for synthesis. I feel threads that are impossible to ignore.
Quick recap of predictive coding and autism.
In predictive coding models of the brain, perception is made up of prediction and sensory input. “Normal” brains lean heavily on priors (models of what the world usually is) and only update when error signals are strong. Most accounts of autism describe either weak priors (less predictive or top-down bias…meaning each sensory signal hits with more raw force), or overly precise priors (my predictive model is too narrow or rigid…meaning any deviation is a kind of error for me). Either way, in practice, the world feels less stabilized by consensus for me. I don’t get to lean on the stories most people use to blur and smooth reality.
While listening to a recent interview with Daniel Schmachtenberger, I was reminded that all models of reality are simplifications…they leave things out. Neurotypical perception is itself a model, with a heavy filtering function…a consensus map. From this perspective, if my priors are weaker (or overly precise)…I’m closer to a raw reality where models break down. For me, the “gaps” are almost always visible.
From there, it’s an easy jump to Baudrillard’s warning, that modern societies live inside simulations (self-referential systems of signs, detached from reality). If I feel derealization…less of a “solid self” (I do)…that’s probably simply what it’s like to live in a symbolic order but not buy into it fully. The double empathy problem is essentially me feeling the seams of a simulation that others inhabit…seamlessly.
This “self” itself is a model. It’s a predictive story your brain tells to stabilize your experience. And because my priors about selfhood are weaker (or less “sticky”), my sense of “I” feels fragile, intermittent, unreal, etc. In this fucked up place that the majority of people call “reality” (where everyone’s popping anti-depressants and obliterating the planet), my experience looks like “derealization” or “depersonalization,” but to me, it’s a kind of clarity…a deep unignorable recognition that the self is a construct. What becomes a deficit in this place (“I can’t hold reality/self together the way others do”) is a form of direct contact with the limits of models of reality (vs reality itself).
Which leads me to a burning question I’ve had for a while now: What are the chances that predictive coding’s distinction between “normal” and “autistic” actually points to the neurotypical configuration being one of priors/assumptions about the world that (in contrast to a healthy adaptive baseline) are simply imprecise (overfitted to some inaccurate model of reality)?
Neurotypical perception leans more on shared, top-down priors (context, expectations, social norms, etc.). That makes perception stable and efficient but extremely bias-prone. (Studies show that neurotypicals are more susceptible to visual illusions than autistic groups.)
Like I mentioned before, autistic perception has been described as weaker/less precise priors (Pellicano & Burr), or over-precise prediction errors and simply different precision allocation (Van de Cruys’s HIPPEA; Friston/Lawson’s “aberrant precision”). Functionally, both mean less smoothing by priors and more “bottom-up” detail, with what they say are costs for generalization and volatile environments. Their conclusion is that autistic people “overestimate” environmental volatility (we update too readily), while NTs are able to charge through with their predictive models intact.
And I have a real problem with this interpretation that I’ll get to shortly. But first, let’s explore the trajectory of the sort of consensus reality that I consider most neurotypical people to be living in….that set of strong priors/assumptions about the world that civilization shares. Because I have a hunch that its divergence from reality is an inevitable feature, not some sort of “bug” to be tweaked away.
If we treat civilization itself as a kind of giant predictive-coding system, its “life story” looks eerily like the brain’s, where the priors are consensus itself.
I see consensus reality as a stack of expectations or assumptions about the world shared by enough people to make coordination possible. Religion, law, money, the idea of a “nation”…these are all hyperpriors (assumptions so deep they’re almost never questioned). They make the world legible and predictable (people can trust a coin or a contract or a census).
And just like in individual perception, civilization’s priors aren’t about truth…they’re about usefulness for coordination. A shared model works best when it ignores inconvenient detail and compresses messy reality. Divergence from reality is a feature…the system actually becomes stronger by denying nuance. For example, “grain is food” (simple, measurable, taxable). But reality is actually biodiversity, shifting ecologies, seed autonomy, etc. See how that works?
This divergence from reality deepens in a few ways, the most obvious being self-reinforcement. Once a model is institutionalized, it defends itself with laws, armies, and propaganda. It also suppresses signals…inputs that contradict priors are treated as “prediction errors” to be minimized, not explored. And, back to Baudrillard, the model (that is civilization) refers increasingly to itself rather than to external reality (markets predicting markets, laws referencing laws, etc.). The longer it runs, the more this consensus model fine-tunes and solidifies its own reality.
From a civilizational perspective, divergence from reality is coherence. If everyone buys into the strong priors (money is real, my country is legitimate, my god demands I go to church), coordination scales up and up. The obvious cost is that the model loses contact with ecological and biological feedback…the “ground truth.” Collapse shows up when prediction errors (ecological crises, famines, revolts) overwhelm the significant smoothing power of the priors.
The bottom line is that civilization’s consensus model requires detachment to function. Life-as-it-is needs to be turned into life-as-the-system-says-it-is. In predictive coding terms, civ runs on priors so heavy they no longer update. In Baudrillard’s terms, simulation replaces reality. And in my own lived experience (as a “neurodivergent” person), derealization isn’t some kind of personal glitch…it’s what the whole system is doing, scaled up.
This whole thing gets even more interesting when I think more deeply about the term “consensus.” It implies something everyone’s contributed to, doesn’t it? But that clearly isn’t the case. What’s actually happening is closer to consent under conditions…most people adopt civilization’s model because rejecting it carries penalties (exile, poverty, prison, ridicule). It seems to me that the “consensus” is really an agreement to suspend disbelief and act as if the shared model is real, regardless of who authored it.
Whose model is it, then? It depends when and where you’re living. It could be state elites…kings, priests, and bureaucrats who historically defined categories like grain tallies, borders, and calendars. It could be economic elites…merchants, corporations, and financiers who shape models like money, markets, and “growth.” It could be cultural elites…professors, media, and educators who maintain symbolic order (morality, legitimacy, and values). I don’t think it’s contentious to say that whatever the model, it reflects the interests of those with the leverage to universalize their interpretation. Everyone else gets folded into it as “participants,” but not authors.
The commonly accepted narrative is that Homo sapiens won out over other human species due to our ability to coordinate, and that nowhere is this coordination more evident than in the wondrous achievement we call Civilization. But why isn’t anyone asking the obvious question…coordination toward whose ends? Because coordination certainly isn’t “for humanity” in some neutral sense…it’s for the ends of those who set the priors. Grain-based states are coordinated bodies for taxation, armies, and monuments. Modern market democracies are coordinated bodies for consumption, productivity, and growth. The “consensus” isn’t valuable because it’s true…it’s valuable because it directs billions of bodies toward a goal profitable or stabilizing for a ruling class.
Now we come up against the double bind of participation (as an autistic person, I’m intimately familiar with double binds). You may not have authored civilization’s model, but you can’t opt out without huge costs. Not participating is madness or heresy. I’m a dissenter and so I’m “out of touch with reality.” The pathologization of neurodivergent mismatch translates to me as: “You’re wrong. The consensus is reality.” To which I say, not only is consensus reality not reality…it isn’t fucking consensus, either. It’s a cheap trick….the imposition of someone else’s priors as if they were everyone’s. Calling it consensus simply disguises the extraction of coordination.
I want to talk now about Vermeulen’s (and others’) conclusion that the weaker (or overly precise) priors that characterize autism come at the cost of not being able to navigate volatile environments.
To me, this is just another example of the decontextualization rampant in psychology and related fields (I see it all grounded in a sort of captivity science). And, in this case, the context that’s not being accounted for is huge. I think Vermeulen and others falsely equate volatile SOCIAL environments with volatile environments in general.
It’s been my experience (and that of others) that autistic people perform quite well in real crisis situations. When social smoothing has no real value (or can be a detriment, even). But Vermeulen seems to think that my ability to function is impaired in the face of volatility (he makes some stupid joke about how overthinking is the last thing you want to do if you cross paths with a bear…ridiculous). I find the argument spurious and context-blind (ironic, considering he defines autism itself as context blindness).
The argument is as follows:
Because autistic perception is characterized by weaker or overly precise priors, each signal is taken “too seriously” (less smoothing from context). In a volatile environment (fast-changing, noisy, unpredictable), this supposedly leads to overwhelm, slower decisions, or less stability. Therefore, autistic priors are maladaptive in volatility. B-U-L-L-S-H-I-T.
Let’s pull the curtain back on Vermeulen’s hidden assumption.
When researchers say “volatile environments,” they clearly mean volatile social environments. All you have to do is look at the nature of the studies, where success depends on rapid uptake of others’ intentions, ambiguous cues, unspoken norms, etc. In that kind of volatility, having weaker social priors (not automatically filling in the “shared model”) is costly. But it’s a category error to generalize that to volatility in all domains.
In environments characterized by social volatility, strong priors (the ones neurotypicals rely on) smooth out the noise and let them act fluidly. I’ll grant you that. But what the fuck about ecological volatility? Physical volatility? Hello?!? Sudden threats, immediate danger, technical breakdowns, real-world crises…where over-reliance on priors blinds you to what’s happening (“This can’t be happening!!”, denial, social posturing). Here, weaker priors (or more precisely weighted prediction errors) mean fidelity to incoming data and clearly convey an advantage.
-
The Cost of Food = A Seed
“Cost of Food Expected to Continue to Rise in 2025”
Turn off the news for a minute and turn on your brain.
From the perspective of life itself (plants, seeds, regenerative cycles…YOU), the basic act of food production is always possible. Civilization doesn’t make this fundamental fact of biology easier or harder…it just distorts how we access it (turning it into grain, surplus, taxation, property, etc.). If you step outside this very odd mindset, you’ll see that food is everywhere and always was…seeds, tubers, nuts, fruit, hunting, foraging.
Any human, in any era, could plant a seed or gather wild foods. Seeds save themselves (regeneration is literally built in). Really think about this deeply for a second: the basic abundance of food hasn’t changed from prehistory to now. What does change with civilization is the control of food.
James Scott points out that civilizations narrowed food to grain because grain is legible (easy to count, store, tax, ration, hoard), enables surplus (standing armies, bureaucracies, empires), and makes populations dependent (you can seize a granary…you can’t seize scattered nut groves and rabbit warrens). “Food scarcity” in the civilizational story means something very specific: scarcity of state-managed, taxable grain…NOT actual absence of food in the landscape.
It isn’t hard to see that growing, saving, and replanting seeds (or gathering perennials, or hunting, or fishing) wouldn’t have been significantly easier or harder across eras. What changes is whether you’re permitted to do so. Fences? Enclosures? Tax? Slavery? Monsanto seed patents? These are civilizational interruptions of a regenerative baseline (that will feed you).
Think of it as two logics. Life logic? Regeneration means food is always there. Civilization’s logic? Appropriation means food is turned into scarcity (funneling it through grain, storage, control, and rationing)…dependency.
-
“The Dark Ages” (a civilizational propaganda campaign)
What evidence do we really have of life between “great civilizations”?
Most of what we know about (recent) past human activity comes from civilizations…they wrote the texts, built the monuments, and taxed the scribes. Between empires, and especially after collapses, the trail goes quiet. Still, there are important windows…
Archaeological evidence tells us that after collapses (like the Late Bronze Age, ~1200 BCE), urban centers empty out and people scatter into villages, hill forts, and rural hamlets. It also tells us that nutrition improves after collapses…less dental decay, taller average height, fewer stress markers, etc. Peasants eat more varied local food when they’re free from (elite) grain monocultures. And they live in simpler dwellings with more egalitarian layouts (vs. palaces and temples), and engage in more local craftsmanship (pottery, textiles) when centralized trade breaks down.
Some examples…
Elite historians call the period after the collapse of the Roman Empire a “Dark Age,” but isotope and skeletal data show rural populations ate better when imperial taxation and grain export systems collapsed. Commoners gained land access (while the people at the top cried, “Barbarism!”).
When (classic) Mayan civilization collapses around 900 CE, monumental building stops, but villages persist…there’s plenty of evidence of crop diversification and local resilience. People didn’t vanish, in other words…they just stopped paying for the fucking pyramids.
In the Andes, after Spanish conquest destroyed centralized (Inca) systems, Indigenous ayllu (kinship networks) reasserted themselves as the real basis of survival.
Anthropology also helps fill in these “dark” gaps by studying groups who lived outside or on the margins of states. Foragers like the Hadza, San, and Inuit show what lifeways look like without taxation, markets, and state coercion…and, again, what we see are rich social bonds, leisure, and diverse diets. In The Art of Not Being Governed, James Scott argues that much of Southeast Asia’s highlands were deliberately outside of state control, and people chose to exit civilization (they didn’t “fail to develop”).
Skeletal trauma indicates that gaps between civilizations are marked by less mass warfare, and stress markers decrease in periods between states (life is much less of a chronic grind).
Basically, the evidence we have suggests that during “dark ages,” ordinary people lived better. They were healthier, freer, less taxed, and more autonomous. They engaged in local culture and kinship that is probably invisible to historians.
I think that what we call collapse now only looked catastrophic at the time to the few…scribes and kings. For most of the people we would relate to, it meant relief.
-
The sooner civilization collapses, the better.
The “saving civilization” narrative smuggles in a bunch of assumptions.
That civilization = humanity.
This ignores the fact that for most of human history we lived outside of states, agriculture, empire…with better nutrition, more leisure, stronger community ties, and little to no hierarchy.
That collapse = tragedy.
In reality, archaeological and anthropological evidence shows that “collapse” of states meant ordinary people’s lives improved: fewer taxes, fewer wars, and more autonomy. The “dark ages” framing is a civilizational bias…the people writing history were the elites who lost power, not the peasants who gained freedom.
That continuity of institutions is the goal.
But life itself (biological, ecological, communal) can clearly persist (flourish, even) without those institutions.
I enjoy reading and listening to thinkers like Schmachtenberger, Bostrom, Harari, etc., but they also frustrate the hell out of me. In their paradigm, civilization is taken as the frame of reference. The metrics are survival of states, stability of markets, and the continuity of technology. The assumption is always that if civilization collapses, humans (and meaning, and progress) go with it.
It’s a selective view and I think it’s bullshit. It privileges what’s easiest to archive (stone, steel, writing, empire) over what’s hardest (oral culture, kinship bonds, lived quality of life). But the archive is far from the reality. Ruins, coins, monuments, and GDP have fuck all to do with lived experience.
How do you look back and measure quality of life? There are some things we can roughly quantify…
We know that declines in biodiversity track pretty damn closely with agriculture and state expansion. We know that height, bone density, and dental health were better among hunter-gatherers than early agriculturalists. We know that foragers worked ~20 hrs/week on subsistence, vs. 60+ in most agrarian/civilized contexts. We know that rates of disease, parasites, and epidemics increase with population density and domestication of animals. We know that foragers lived in fluid, egalitarian bands with profound interdependence between members. We know that inequality really only appears with agriculture and the state. Likewise mass/organized violence (wars, enslavement, genocide). We know that hunter-gatherers were happier because their needs were modest and easily met. We know that physiological stress markers (enamel hypoplasias, bone lesions) spike with agriculture, a trajectory that culminates in today’s mental health crisis.
Civilization leaves us records of itself and erases precisely what made life rich and bearable (simply being alive in small communities and the sensory ecology of a biodiverse landscape). We use those records to determine what’s important in life rather than seeing them for what they are–roads to failure. Repeated failures.
Imagine a series of layered graphs (I’m shit with tech, so you’ll actually have to imagine them), not just with the usual axes (population, GDP, technological complexity), but overlaid with several “shadow metrics”…stress hormones (rising with states), storytelling hours per night (dropping with industrial time-discipline), biodiversity curves (next to depression rates), average hours of unstructured play for children (falling over time), etc. These graphs would show you a visceral contrast…material monuments climbing skyward as lived human experience goes to shit.
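For anyone less shit with tech who wants to actually render those layers, here’s a bare-bones sketch of the kind of figure I mean. The curve shapes are placeholders (pure assumption, not data)…the real series would have to be assembled from the archaeological and ethnographic literature.

```python
# A schematic, two-panel version of the "shadow metrics" figure.
# Every curve below is a placeholder shape, not data.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1, 500)  # 0 = deep prehistory, 1 = now (schematic time)

# The usual story: monuments climbing skyward.
population = np.exp(4 * t)             # placeholder exponential rise
complexity = t ** 3                    # placeholder "technological complexity"

# Shadow metrics: lived experience heading the other way.
biodiversity = 1.0 - 0.8 * t ** 2      # placeholder decline
unstructured_play = 1.0 - 0.9 * t      # placeholder decline
stress_markers = 0.2 + 0.7 * t ** 1.5  # placeholder rise

fig, (top, bottom) = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
top.plot(t, population / population.max(), label="population (normalized)")
top.plot(t, complexity, label="technological complexity")
top.set_ylabel("the usual axes")
top.legend()

bottom.plot(t, biodiversity, label="biodiversity")
bottom.plot(t, unstructured_play, label="unstructured child play")
bottom.plot(t, stress_markers, label="physiological stress markers")
bottom.set_ylabel("shadow metrics")
bottom.set_xlabel("time (schematic)")
bottom.legend()

plt.tight_layout()
plt.show()
```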
I’m talking about an alternative history of experience (it’s increasingly the only sort of history that interests me)…”what did it feel like to live then?” instead of “what shit did we build?”
-
Tribalism, Consensus Reality, and Domestication
I think it’s safe to say that feedback-sensitive (neurodivergent) people are less susceptible to tribalism. Under most circumstances (I’ll get to the exceptions), we’re less likely to be a conservative, a democrat, a fundamentalist, etc.
Tribalism (the tendency to take your group’s beliefs as your own and enter into conflict with other groups) depends on high-precision social priors…shared norms, loyalty cues, in-group/out-group boundaries. But we rely more on direct sensory evidence or logical consistency than on these socially constructed signals. The pull of group identity isn’t as strong for us.
I don’t experience the same automatic emotional reward for aligning with group opinion. I don’t get that warm and fuzzy feeling called “patriotism,” for example. I’ve never understood it. If an in-group belief contradicts observed reality, I can’t help but question it…even if it costs social standing. And tribal systems clearly punish that.
Tribalism thrives on broad, simplified narratives (“they’re all like that”), which smooth over (or blind people to) exceptions. But exceptions are what catch my attention the most. Outliers that break the spell of group generalizations stand out to me, and I’m constantly stupefied that this isn’t the case for most people.
Neurodivergents aren’t always resistant to tribalism, of course. In environments where belonging feels existentially necessary (which is just about every fucking environment in 2025), we can certainly conform strongly…overcompensate even.
But I’d argue that for most of human history, where the “opt-out” option was real, if the group became oppressive, coercive, or incompatible with our temperament, we simply….left. And that escape valve probably served as a check on group conformity and control.
We’d leave for a number of reasons. If group norms are arbitrary or contradictory, sticking around creates constant prediction error. As hard as it is, departure is often the path of least resistance for someone like me, and probably would have been for people like me in the past. We were also probably self-reliant in certain domains. Many autistic skill sets (deep knowledge in specific areas, tracking environmental patterns) would translate to survivability outside rigid social structures. And a drive for integrity over belonging means that physical separation would have been (and still is) preferable to constant self-betrayal.
Wherever civilization spreads, the option to leave disappears. Agricultural and industrial societies lock people into fixed territories, usually controlled by central authorities. Survival becomes tied to participating in a single dominant system (currency, markets, legal structures), removing just about every parallel option. And instead of many small groups to choose from, there’s effectively one “tribe” (the society and its cultural apparatus). The option to “simply leave” is gone now, I’d say.
Without the option to leave, those of us who would naturally walk away face a stark choice…either overcompensate (learn to mimic, mask, and fit despite the cost) or withstand isolation (remain noncompliant and absorb the social/economic consequences). I think this goes a long way in explaining why in the modern era autistic burnout and mental health crises are more visible (Breaking News: Autism Epidemic!!). The evolutionary safety valve is….gone.
I’m not wired for tribalism. It looks ridiculous to me. I hate that people have this sort of neediness for it. Especially when, in 2025, we recognize it as the greatest barrier to humanity effectively addressing global crises…from planet destruction to systematic inequality and democratic collapse. It creates moral elasticity, where harm is justified by group loyalty. It creates rivalries purely for identity’s sake (beating the shit out of each other over a fucking soccer match). It makes coordinated responses impossible. It’s fucking dumb.
This is all deeply bound up with what I call consensus reality (the shared social priors/expectations that a group holds about “what is real,” “how the world works,” and “what matters”). In that context, I see tribalism as the emotional and identity-binding mechanism that keeps people committed to those social expectations, defends them from contradiction, and rejects competing models from out-groups. Consensus reality (what people call “the real world”) gives tribalism content…stories, values, and assumptions the group agrees on. And tribalism, in turn, gives consensus reality teeth…social rewards for conformity and penalties for deviation. You could see it as the social immune system that suppresses any error signal that might disrupt the shared model people consider “reality.”
Which brings me back to a core idea of my book: every degree of separation from reality (unmediated feedback) creates space in which lies and manipulation can be used to control human behavior. Symbolic representation, bureaucracy, technology, propaganda…all stand between an action and its consequence. From a predictive processing perspective, these separations replace sensory precision with social priors. Once your perception of consequence is dominated by priors handed to you by others, your model of reality can be steered by whoever’s controlling the narrative (regardless of what’s actually happening right outside your window).
In the human domestication frame I’m building, this explains how a control system matures. Reduce a person’s direct contact with feedback, fill the gap with consensus reality (shared fictions), and use tribalism to keep the consensus coherent and defended.
It also explains why nothing changes. Everyone’s wondering why humanity can’t seem to course-correct, but this isn’t rocket science. Mainstream “solutions” operate entirely inside mediated spaces, accepting layers of separation from reality as normal or inevitable. They try to optimize those spaces…fact-checking information, creating better messaging, nudging behavior with persuasive campaigns. They ignore the gap itself. The underlying problem (that people’s models of reality are no longer tethered to direct, embodied, ecological feedback) is left untouched.
In other words, people just swap one set of social priors for another, without increasing the precision of sensory input from the real world. Their brains are still mostly being updated by other people’s models instead of reality itself. That’s like improving the entertainment or fairness of the feedlot without questioning why the fuck the animals can’t simply graze in the field anymore. The control system remains intact (strengthened, maybe) because the medium of control (the gap) is preserved.
This is why giving people access to more information isn’t solving anything. We may have created gaps between action and consequence, but evolution hasn’t removed the cognitive biases and drives that were calibrated for direct feedback. Those drives still operate, but now they need something to work on in the absence of reality’s immediacy…and that “something” becomes bullshit. Shared fictions.
Why? Why do people need so much bullshit?
For one, I think hard-wired biases still expect input. Traits like negativity bias, advantage-seeking, and status monitoring evolved to process real-time environmental cues. Without direct cues, they grab onto representations (bullshit) because the brain just can’t seem to tolerate informational vacuum.
Bullshit comes in to fill prediction gaps. Without high-precision sensory input, shared social fictions are used to predict outcomes. Those fictions become the scaffolding (myths, ideologies, propaganda) that keep the model “stable” even if it drifts from reality (like when it starts baking the planet).
Next thing you know, manipulation rides in on necessity. Because social fictions are now the only widely shared basis for action, whoever controls them effectively controls the behavior of the group. Domestication leverages this (the feed is always narrative feed…never the real field).
The further the separation from unmediated feedback, the more elaborate the fiction has to be to sustain group coordination and suppress error signals. Fast forward to 2025, and people are acting entirely on their group’s fictions…with reality surfacing only in the form of the most immediate and extreme crises (which then get reabsorbed into new narrative).
Where are feedback-sensitive people in this story? Where’s that autistic guy?
Well, if your brain assigns low precision to social priors, then the fictions that fill feedback gaps for most people feel…jarring, flimsy, or outright hostile. My brain gives more weight to sensory or logical evidence, and that means constant prediction error when I interact with a model that’s running entirely on narrative rather than reality.
In domestication terms, that makes me an outlier in a control system that depends on narrative compliance. For the “typical” person, the fiction is not only tolerable but necessary for coordination. For me, it’s a constant source of friction (because the group’s stabilizing story is exactly where I detect the misalignment most vividly).
I’m heavy into predictive coding literature (Friston, Clark, etc.) right now, so I’ll try to frame some of my main arguments in those terms. (I’ll probably get it wrong)
Consensus reality is….encultured hyperpriors. Culture installs hyperpriors (very high-level expectations about “how things are”). They’re learned and built up by language and institutions, and they set the precision economy (which signals get trusted).
Human domestication is a sort of niche construction. One that rewards minds able to thrive in symbol-dense, delayed-feedback environments. The effect is a recalibration of precision…social model-based predictions gain weight and raw sensory error loses weight. This is the flattening of the error landscape, so to speak.
Social priors are what let us coordinate at scale (which is rarely necessary…unless you’ve fucked up at scale), but trouble starts when precision sticks to bad priors in rapidly shifting or bullshit-heavy niches (media, bureaucracy), drowning out any errors that might have resulted in correction.
An autistic person’s low tolerance for fictions is a different precision setting. It continually surfaces mismatches others smooth over. Which largely feels like shit (derealization, friction) for the autistic person, but is epistemically valuable (less “consensus-blindness,” wink wink, Peter Vermeulen).
I’ve been talking about human domestication as selection against reactivity, but I know I have to be careful with single-trait stories like that. Maybe what’s selected are policies that minimize expected free energy in a given niche. In dense, rule-ridden societies, that means predictability-friendly (?) minds. Compliance. Delayed reward. Role fluency. Some kind of energy-efficient inference under control niches.
This is where I’d be on my own, I think. This is the final “gap” where most of the highest-level thinkers are sort of playing…the control niche as a given. Someone like Andy Clark (were he to agree with my line of reasoning so far) might say the solution is about tuning the system to balance priors and sensory input more adaptively. But in a domesticated, control-oriented society, “tuning” quickly becomes prescription…setting parameters so people remain useful to the system, not so they reconnect with (unmediated) reality.
The more fundamental problem is that any centrally managed adjustment to perception keeps people inside a mediated model. It doesn’t restore autonomy…it optimizes compliance. And the last thing I want is to be compliant in a system that is clearly out of touch with reality.
-
What Wrangham Gets Wrong About Human Domestication
(Hint: 900,000 cows are slaughtered daily. They shit where they eat and wouldn’t have a hope in hell of surviving without human care. But they’re nice.)
In The Goodness Paradox, Richard Wrangham argues that the main selection pressure in human (self-)domestication was the weeding out of reactive aggression. It’s a nice story that makes the net gain of human domestication harder to argue against. But, to me, it’s clear that selection against reactivity in general (or unpredictability) is the bigger, truer story, of which the reduction of “reactive aggression” is simply the most visible (and PR-friendly) chapter. Taken as a whole, and across species, the domestication package is clearly a general downshift in arousal/reactivity with a re-tuning of social expectations…not just the loss of hair-trigger violence.
Let’s look at domestication again while entertaining this broader and inconveniently less moralistic selection pressure (duller, rather than nicer, humans).
For one thing, physiology moves first…and it’s general. In classic domestication lines (e.g. Belyaev’s foxes), selection for tameness blunts the HPA axis and stress hormones overall…fewer and fewer cortisol spikes, calmer baselines. That’s not “anti-aggression” specifically; it’s lower stress reactivity across contexts. Brain monoamines shift too (e.g. higher serotonin). That’s a whole-system calm that would make any behavior less jumpy (including but not limited to aggression).
The developmental mechanism also points to a broader retune. The “domestication syndrome” is plausibly tied to mild neural-crest hypofunction, a developmental lever that touches pigmentation, craniofacial shape, the adrenal medulla, and stress circuitry. In humans, BAZ1B (a neural-crest regulator) is linked to the “modern” face and is part of the self-domestication story. None of that is news…but if you tweak this lever, you clearly soften the whole reactivity profile…not just aggression. And my guess is that whoever’s fucking with the lever has his eye on the “compliance” dial more than any other.
Comparative signals align, too. Genomic work finds overlaps between human selective sweeps and domestication-candidate genes across species…showing a syndrome-level process rather than some sort of single behavioral knob. Craniofacial “feminization” over time in H. sapiens fits reduced androgenic/reactive profiles, too.
Domesticated behavior tracks a “global calm.” Domesticated animals are less fearful, less erratic, and more socially tolerant than their wild counterparts. Your dog’s tendency to “look back” to you in unsolvable tasks is a manifestation of that…when arousal is lower and social cues are trusted, help-seeking beats reactive persistence. That’s a broad predictability play (that has nothing to do with aggression).
Obviously, Wrangham’s focus still matters. His key point, the decoupling of reactive vs proactive aggression in humans (we got tamer in the heat-of-the-moment sense, but remained capable of planned, coalitionary violence), is real and important to explain. It’s part of the story, but not the whole story. As general reactivity is reduced, strategic (planned) aggression is preserved…because strategic aggression isn’t a startle reflex; it rides on executive control and group coordination. But selection against reactive aggression isn’t the driver in this story. It’s just one behavioral readout of a deeper arousal/volatility downshift. A nice part (maybe) of an otherwise quite shitty story (from life’s vantage point). The beef industry might point out how nice the cows are, but I don’t think even they would try to argue that “nice” is what it’s aiming for. Dull. Compliant. And so it goes with all domestication. There is an objective in the domestication process, and any and all traits that impede progress toward that objective are pruned. (adding “self-” to domestication when it comes to humans, while accurate in the sense that the domesticating agent was of the same species, gives it a voluntary flavor that has no evidence in history…the domestication of humans was driven by systemic enslavement and reproductive control just as it was for all domesticates)
Why is it so important to me to find the driver of human domestication at all? Why not just start from the broadly-accepted premise that we are a domesticated species and go from there? Because I need to know what’s truly going on in the brain during this domestication process. How do we get to the brain we call “typical” now? What was it selected for? Was it selected for something broadly adaptive? Or is it more like runaway selection? An overfitting?
To me, cognitively, domestication looks like a down-weighting of volatility and a reallocation of precision (in predictive-coding terms): brains with lower expected volatility (that have “the world is less jumpy” as a hyperprior…fewer LC-NE-style alarm bursts…a calmer autonomic tone), higher precision on social priors (human signals are treated as the most trustworthy ones…ecological “noise” gets less weight), and policy canalization (high confidence in proximity/compliance/help-seeking policies).
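If I had to reduce that re-weighting to a toy calculation (mine, with invented weights), it would look something like this: the same two signals…a calm social report and an alarming ecological cue…run through a “wild-type” precision profile and a “domesticated” one.

```python
# A toy precision-reallocation sketch. Weights and values are assumptions for illustration.
def estimate(social_cue, eco_cue, prior, w_social, w_eco, w_prior):
    """Precision-weighted estimate of 'how volatile is my situation right now?'."""
    total = w_social + w_eco + w_prior
    return (w_social * social_cue + w_eco * eco_cue + w_prior * prior) / total

# The group says everything is fine (0.0); the landscape says otherwise (0.9).
social_cue, eco_cue = 0.0, 0.9
prior = 0.1  # hyperprior: "the world is not jumpy"

# Wild-type profile: the ecological signal carries the most weight.
print(estimate(social_cue, eco_cue, prior, w_social=1.0, w_eco=4.0, w_prior=1.0))  # ~0.62

# Domesticated profile: the social channel and the calm hyperprior dominate.
print(estimate(social_cue, eco_cue, prior, w_social=4.0, w_eco=1.0, w_prior=3.0))  # ~0.15
```

Same inputs; the domesticated profile barely registers the alarm.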
I think that human self-domestication primarily targeted behavioral and physiological volatility (a population-level reduction in phasic arousal and unpredictability) of which lower reactive aggression is a salient subset. And that the result is down-tuned HPA/LC reactivity, strengthened social priors, and canalized, low-variance action policies. Think of what happened as some sort of reactivity pruning (where reactive aggression was one prominent branch that got lopped off).
What is the domesticated brain? Zoomed out, it’s clearly an instrument that’s been made dull. One that exhibits blunted responses to non-social unpredictability (startle, sensory oddballs, metabolic stressors), not just to dominance threats. And selection against aggression alone doesn’t suppress those.
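If you want the hand-wave above in something closer to math, here’s a toy sketch (entirely my own illustration…not Wrangham’s model, not anyone’s published model). It’s one precision-weighted update, and the only knob that differs between the “wild” and “domesticated” runs is the expected-volatility hyperprior. Turn that knob down and the response to the same surprise gets blunted…whether the surprise is a dominance threat or a random sensory oddball.

```python
# Toy sketch (my own illustration, not any published model): one precision-weighted
# update. The only difference between the "wild" and "domesticated" settings is the
# expected volatility of the world (the hyperprior). Everything downstream gets
# blunted, whether or not the surprise is social.

def react(prior_mean, prior_var, observation, obs_var, expected_volatility):
    """One Kalman-style update: how far the belief moves, plus a crude arousal proxy."""
    # Higher expected volatility inflates the prior variance before the update,
    # which raises the gain (learning rate) applied to any prediction error.
    effective_prior_var = prior_var + expected_volatility
    gain = effective_prior_var / (effective_prior_var + obs_var)
    prediction_error = observation - prior_mean
    belief_shift = gain * prediction_error   # how much the model moves
    arousal = gain * abs(prediction_error)   # stand-in for a phasic (LC-NE-like) burst
    return belief_shift, arousal

# Same non-social "oddball" (a surprise of size 5), two volatility hyperpriors.
for label, volatility in [("wild", 4.0), ("domesticated", 0.5)]:
    shift, arousal = react(prior_mean=0.0, prior_var=1.0,
                           observation=5.0, obs_var=1.0,
                           expected_volatility=volatility)
    print(f"{label:13s} belief shift = {shift:.2f}, arousal proxy = {arousal:.2f}")
```

The numbers are arbitrary. The point is that there’s no “aggression” variable anywhere in there, and you still get the dulling.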
If I’m reading the studies properly, there are signatures of what I’m talking about in stress-regulatory and neuromodulatory pathways (HPA, serotonin, vasopressin) and neural-crest development…not just androgenic or specifically aggression-linked loci. Recent multispecies work pointing at vasopressin receptors and neural-crest regulators certainly seems consistent with this.
Wrangham’s story doesn’t account for lower intra-individual variance in exploratory/avoidant switches, or for faster convergence on socially scaffolded policies (like help-seeking) across types of tasks (selection against aggression predicts the biggest effects only in conflict contexts). It doesn’t explain the psychotic consensus reality that holds everyone inside it, even as it rolls off a cliff.
(In fact, I question how much of the reactive aggression branch got lopped off…surely, not nearly as much as we think. What self-domestication mostly did was gate when, where, and how the majority of people show reactivity. When accountability and real-world consequences are high, most people keep a lid on it. When consequences drop (anonymity, distance, no eye contact, no immediate cost), the lid starts to rattle…online, in cars, in fan mobs, in comment sections. I don’t think reactive aggression was bred out so much as trained into context…and how well you do in that context will largely determine the story you tell. Harvard professors are clearly doing quite well in the civilizational context and consequently have pretty stories to tell.)
-
I’m sorry.
In this backward place, every part of me that others like is an act.
In this backward place, every part of me that is repulsive to others is me.
And, so, each morning before I leave my bedroom, I face a choice: be my repulsive self or kill it.
Lately, for the first time in decades, I’ve allowed myself to be me. Repulsive.
In a backward place, the only motion for me is oppositional. Every moment a disagreement. A conflict. A solitary war that quickly begins to feel like insanity. Alone.
I’m weak. The abyss is too cold and too dark for me. So I’ll go back to killing my self, if I can. Rejoin the dreamworld…the normalized insanity. If I can.
If you found this because you’re alone in this fucking nightmare…trying to be sane…I’m sorry. I am so sorry. You missed me…maybe by days, months, years…I don’t know when you’re here. I’m sorry that you are the way you are in a world like this. I wouldn’t wish sanity on anyone in this place. I don’t see a happy ending for sanity in this place.
-
The Predictive Brain: Autistic Edition (or Maybe the Model’s the Problem)
There’s a theory in neuroscience called predictive processing.
It says your brain is basically a prediction engine that’s constantly trying to guess what’s about to happen (so it can prepare for it). In other words, you don’t just react to the world…you predict it, moment by moment. The closer your model (of predictions) matches reality, the fewer surprises you get. Fewer surprises, less stress.
The model applies to everything…light, sound, hunger, heat. But also to something far messier: people. From infancy, we start modeling the minds of those around us. “If I cry, will she come?” “If I smile, will he stay?” It doesn’t need to be conscious…it’s just the brain doing what it does (building a layered, generative model of how others behave, feel, and respond). Social expectations become part of the predictive model we surf through life on. (nod to Andy Clark’s Surfing Uncertainty)
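Here’s the cartoon version of that loop in code (a toy of my own, nothing lifted from Clark’s or Friston’s actual formalism). The “model” is literally one number, and “surprise” is just how far reality lands from it:

```python
# Bare-bones caricature of the predict -> observe -> compare -> update loop
# (my own toy, not Clark's or Friston's math). The better the running prediction
# matches what actually shows up, the smaller the surprise signal gets.

def prediction_loop(observations, learning_rate=0.3):
    prediction = 0.0
    for obs in observations:
        surprise = obs - prediction                 # prediction error: the brain's "uh oh"
        prediction += learning_rate * surprise      # nudge the model toward reality
        yield prediction, abs(surprise)

# A world that keeps serving up roughly the same thing:
world = [10, 10, 11, 10, 9, 10, 10]
for step, (pred, surprise) in enumerate(prediction_loop(world)):
    print(f"step {step}: prediction = {pred:.1f}, surprise = {surprise:.1f}")
```

Fewer surprises, less stress…that’s the whole pitch.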
From the predictive processing perspective, autistic people aren’t blind to social cues. (That’s outdated bullshit.) But we weight them differently. Our brains don’t assign the same precision (the same level of trust) to social expectations as most people do. So we don’t build the same nice, tight models, make the same assumptions, or predict the same patterns.
For example, I can read derision just fine. But I don’t use it to auto-correct my behavior unless it directly impacts something I care about. For better or for worse, my actions aren’t shaped by unspoken norms or group vibes…they’re shaped by what feels real and necessary in the moment.
If you sat me down in front of Andy Clark or Karl Friston (smarty-pantses in the predictive processing world), they’d probably agree. I think. They’d tell me I’m treating social priors as low precision. That my brain doesn’t update its models based on subtle social feedback because it doesn’t trust those models enough to invest the effort. And that my supposed “motivation” is actually baked into the model itself (because prediction isn’t just about thinking, it’s about acting in accordance with what the brain expects will pay off).
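To make the “low precision on social priors” bit concrete, here’s another toy of my own (not a quote of anyone’s model). Precision is just a weight on how much a given feedback channel is allowed to move your behavior…same derisive look, different precision assigned to it, very different update:

```python
# Another toy of my own (not anyone's published model): "precision" acts as a weight
# on how much a given feedback channel is allowed to move your behavior.

def policy_shift(feedback_error, precision):
    """Precision-weighted update: low precision means the signal barely moves you."""
    return precision * feedback_error

social_disapproval = -1.0   # "people are giving you the look"
physical_feedback = -1.0    # "your hand is on the hot stove"

profiles = {
    "typical": {"social": 0.9, "physical": 0.9},               # trusts both channels
    "low social precision": {"social": 0.1, "physical": 0.9},  # roughly my setting
}

for label, weights in profiles.items():
    from_derision = policy_shift(social_disapproval, weights["social"])
    from_burn = policy_shift(physical_feedback, weights["physical"])
    print(f"{label:22s} shift from derision = {from_derision:+.2f}, "
          f"shift from burned hand = {from_burn:+.2f}")
```

Low precision on the social channel doesn’t mean the signal isn’t received…it means it doesn’t get to steer.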
Ok. But something’s missing…something big. Context.
Implicit in the predictive model is the idea that social priors are worth updating for. That most social environments are coherent, that modeling them is adaptive, and that aligning with them will yield good results.
But what if they’re not? What if you turned on the news and saw that the world was…kind of going to absolute shit? And that, incomprehensibly, people seemed fine enough to let clearly preventable disasters simply unfold and run their course?
What if the social signals you’re supposed to model are contradictory? What if they reward falsehood and punish honesty? What if they demand performance instead of coherence?
In that case, is it still a failure to model social cues? Couldn’t it be a refusal to anchor your behavior to a bullshit system? A protest of the organism rather than a failure?
Because from where I sit, if social information is incoherent, corrupt, or misaligned with ecological / biological reality, then assigning it low precision isn’t a bug…it’s a protective adaptation. Why would I burn metabolic energy predicting a system that specializes in gaslighting? Why would I track social expectation over reality? “Why do THEY?” is the question I ask myself every day. (Just when I start to accept that people simply love the look of grass instead of nature, they go out and cut it…then just when I start to accept that people love the look of grass that is a uniform height (rather than actual grass)…they go out and cut it under clear skies when it’s 35 degrees, killing it…just when I start to accept that people are born with some sort of pathological compulsion to mow landscapes, they replace a portion of their yard with a pollinator garden…because enough of their neighbors did.)
In predictive processing terms, maybe we (autistic people) are saying, “This part of the world isn’t trustworthy. I’m not investing in modeling it.” or just “I don’t trust the model you’re asking me to fit into.”
Of course, saying that comes at a real cost to me. Exclusion, misunderstanding, misalignment. I can sit here all day telling you how principled my stand is…but that “stand” is clearly exhausting and has resulted in long-term adaptive disadvantages (in this place). Systems (“good” or “bad”) almost always punish non-modelers. But that doesn’t make me wrong. Reality is reality.