Thoughts

  • Survival Stories

    Neurotypes are neither pathological nor adaptive in themselves; they’re living fossils, embodied memories of the environments that shaped us and the demands they imposed. Their survival, their persistence, tells a story whose details have been lost but for the person in front of you. Old worlds abide in my flesh. And in yours.

  • Fidelity to the Group Vs Fidelity to Reality (blue pill vs red pill)

    Can you select against reactivity (and for pro-sociality) without selecting against truthfulness? I don’t think you can.

    In Wrangham’s phrasing, pro-sociality seems to be about “friendliness,” tolerance, cooperation. But on the ground, what pro-sociality really looks like is going along with the group, even when the group is wrong (especially when the group is wrong, maybe). Let’s be honest…coalitions don’t just punish aggression, they punish non-conformity. It’s autonomy vs consensus.

    Loyalty to the truth and conformity are simply incompatible.

    What is reactivity? The kind of reactivity that a group takes issue with? It’s registering error signals and acting on them. And what is loyalty to the truth if it isn’t that? Refusing to override error signals even when everyone else does. If selection is for compliance, then loyalty to truth (which in a group looks like stubbornness, pedantry, dissent) gets you punished by the coalition. Over evolutionary time, that makes loyalty to truth maladaptive inside control systems.

    Like most things in evolution, it’s a trade-off. Sure, you can have pro-sociality without truth if the goal is harmony. But you can’t have both pro-sociality and true fidelity to reality. Truth is often disruptive. And so, to preserve group stability, coalitions consistently select against those who react too strongly to contradictions…even if they’re right. Think whistle-blowers…Socrates…Galileo.

    In predictive processing terms, the mutual exclusivity of these two drivers becomes even clearer. Pro-sociality is the inflation of social priors and the down-weighting of individual prediction errors. Truth-loyalty is the up-weighting of prediction errors, social priors be damned. A system can’t simultaneously select for both…they’re opposing precision-weighting strategies. The emperor’s either wearing clothes or he isn’t.
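
    If you want the toy math behind that claim, here it is…just the standard precision-weighted update from the predictive processing literature, written in my own notation (a sketch, not something Clark or Wrangham spell out this way):

        \hat{\mu} = \mu_{prior} + \frac{\pi_{error}}{\pi_{prior} + \pi_{error}} (x - \mu_{prior})

    Here \mu_{prior} is the shared model, x is what you actually observe, and the \pi terms are precision weights. Pro-sociality is the regime where \pi_{prior} dwarfs \pi_{error}…the prediction error (x - \mu_{prior}) gets multiplied by something close to zero and the group’s model barely moves. Truth-loyalty is the opposite regime. One estimator, two incompatible settings of the same dial.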

  • Scarcity -> Conflict

    I believe it was Geoff Lawton who said the next war would be fought over water. A liter of clean water is already more valuable on the market than a liter of crude oil…and in the systems we’ve created, scarcity always leads to conflict.

    In ecology, scarcity is feedback. Hunger pushes foraging…drought pushes migration. And in small-scale human societies, conflict over scarcity was usually managed by mobility, sharing, or groups splitting. But once we settled, scarcity became inescapable. Fixed fields…stored food…property…nobody wants to leave those things behind. We became “invested” and conflict became largely unavoidable.

    To prevent collapse from the inside, human coalitions developed ways to suppress reactivity. Strong reactivity (aggression, dissent, or any kind of stubborn autonomy) is dangerous in sedentary groups subsisting on scarce resources. So selection shifted toward compliance and conformity…enforced first by gossip and ostracism (see Wrangham), then law, ideology, and force.

    It’s dangerously tempting to read civilization as a suite of conflict management “technologies.” But they’re not technologies…they’re stories. They’re descriptions of what is.

    Religion frames inequality and misfortune as God’s will. The doctrine of free will reframes poverty or failure as your own fault. Markets channel conflict into competition, but “solve” scarcity by creating…(artificial) scarcity. States monopolize violence to keep conflict from fracturing the states themselves. And AI is already talked about as a promise…a promise of an environment managed so perfectly that conflict never arises…where error signals are resolved (i.e. smoothed) even before they appear.

    These are stories. Post-hoc rationalizations and buffers. Each of them suppressing the conflict signals they themselves generate.

    What do I mean by that? Think of scarcity as resulting in prediction errors…unmet needs…violated expectations. Conflicts are behavioral responses to those errors. And civilization is the inflation of social priors (shared fictions, ideologies, gods) so that individuals suppress their error-driven responses in favor of compliance. This produces short-term stability…but it also severs feedback. And where feedback is severed, ecological and social errors accumulate.

    In other words, you should never see civilization as a solution to scarcity. It’s never been that. At best, at the smallest scale, it’s a short-term solution to conflict. By suppressing reactivity, it buys stability at the cost of accumulating, unregistered error. Like everything else civilization touches, it makes conflict less surprising…by smoothing, scripting, or relocating it.

  • WILL

    I’m considering whether concepts like free will and God’s will function like a civilizational equation for neutralizing the “error signal” of inequality.

    Every society, at every time and place, faces inequality. And inequality doesn’t feel right…does it? And not feeling right, in neuroscientific terms, means error.

    Some people are born poor, sick, or generally shit out of luck. Others aren’t. They’re born fine and seem to do fine. And without any sort of explanation as to why that is…it looks arbitrary and unjust. It’s destabilizing because human nervous systems are sensitive to fairness. (I believe that because it seems to hold true for most people once I get to know them.)

    Civilization flattens errors for its members. That’s what it does. And here we have an error, don’t we? Unfairness. Inequality. But civilization has a two-part story to keep inequality from being experienced as an error by its members.

    The first part of that story is free will. That part tells us inequality is the individual’s fault. You could’ve chosen differently. Your poverty is the logical result of laziness. Your suffering comes from bad decisions.

    But that doesn’t account for all inequality, does it? Some inequality falls through the cracks of that argument.

    That’s where God’s will comes in. Inequality is the cosmic order. God chose your station…or suffering purifies you…or justice comes later for you (in heaven, or via karma).

    Add these two stories together, and you get a closed loop. If you’re disadvantaged, either you chose badly (your free will) or God chose it for you (divine will). In both cases, whatever system you live in is absolved. It’s on you or God. Case closed.

    What’s really happening here? In the brain?

    Inequality is a prediction error. You expect fairness, but you see unfairness. Taken together, free will and God’s will reframe that error as expected, meaningful, or “deserved.” It’s a story that neutralizes surprise for you. It lets you accept conditions that would otherwise feel intolerably incoherent.

    In medieval Europe, you’re a serf because of God’s plan…but also because you don’t have the virtue to “rise.” In Calvinism, your wealth shows God’s favor…but your hard work (free will) proves it. In American capitalism, anyone can succeed (free will), but if you don’t, maybe God didn’t bless you. In communism, even, a secularized version appears. History’s laws are inevitable (God’s will), but you have to freely devote yourself to the cause (free will).

    This is yet another example of civilization flattening the error landscape. Otherwise, inequality would feel like raw incoherence. We need to make it explainable…”just,” even. We wrap it in narratives of freedom and divine order.

    These stories are post-hoc rationalizations, of course. They do nothing to solve inequality. They’re explanatory patches applied after the fact.

    Historically, the pattern is clear. The inequality comes first (land hoarding, hierarchy, wealth gaps). People register it as an error signal…it’s unfair. And civilization comes to the rescue…not by reducing the inequality, but by retrofitting a story. Free will (you could have done otherwise!) or God’s will (it’s meant to be this way!). These stories reframe perception so you can tolerate inequality.

  • No…autistic people are not rigid thinkers.

    Autistic people have challenges with cognitive flexibility. Autistic people are black-and-white thinkers. There’s no in-between with us…something is either good or bad. We’re rigid thinkers…stubborn. We oversimplify the world. We’re immune to nuance. We catastrophize. We’re all-or-nothing.

    Different words for the same thing…you’ve heard the bullshit. In diagnostic manuals. On YouTube. In therapy.

    You’ve also heard the other bullshit. Autistic people are inductive thinkers. We focus too much on the insignificant details and miss the big picture. We can’t see the forest for the trees.

    Different words for the same bullshit…but the opposite bullshit…?

    So, am I a paradox? Am I both a reductive and inductive thinker?

    If there’s anything I’m allergic to, it’s paradox…contradiction. A feeling rises in me. My bullshit detector starts to ring out louder than usual. And, like every paradox I encounter, I know this one is wrong…the reason it’s wrong simply needs to be discovered. (Incidentally, this is the stage in my thinking at which I look most reductionist.)

    And so I think. A lot. I have no choice. My brain won’t leave contradictions unresolved. And here’s what I’ve been thinking these past few days.

    The contradiction (autistic people focus on the smallest details instead of the big picture and autistic people have oversimplified models of the world and refuse to account for contradictory details) comes from where you zoom in on my cognitive process.

    Let’s look at what I’m accused of first: black-and-white thinking (aka dichotomous thinking, rigid thinking, etc.). It means I come to snap categorical judgments like right vs wrong, fair vs unfair, safe vs unsafe. I see it written in descriptions of autistic behavior everywhere…rigidity, over-simplification, an inability to tolerate ambiguity. In predictive processing terms, it’s framed as a strategy to minimize error fast. The autistic person overweights a clear prior (“this is wrong”) instead of juggling noisy or contradictory signals.

    Confusingly, I’m complimented for being the polar opposite…for being an inductive thinker. In the same conversation, even. Apparently, I’m good at building generalizations from specific instances. I’m good at noticing patterns in particulars. In predictive processing terms, my attention to detail and my ability to detect anomalies (in patterns, like contradictions) are attributed to the fact that I don’t start with strong priors (think of them as expectations or assumptions). I let patterns “emerge” from the ground up instead of forcing a theory I have. In other words, compared to your “average” person, I give more weight to incoming data and less weight to stories (whether my own, or a group’s). Bottom-up signals drive my learning. I have to learn the “hard way.”

    The feeling of this paradox is…it’s like you’re catching me at different stages of my thinking process, and naming the stages as opposite things.

    I’ll do my best to describe what my process feels like, from the inside, as me.

    When I first try to make sense of something, I weight raw input heavily. I don’t iron out inconsistencies with conventional stories. That probably explains why I notice details that you seem to miss. But when contradiction or incoherence (i.e. bullshit) piles up, and disengaging from the situation isn’t an option, I need to force coherence. I need to escape the stress of unresolved error. For example, if I’m among a group of people who are trying to solve a problem irrationally, who won’t listen to reason, and I don’t have the option of leaving? I might snap to a categorical judgment. I have a very hard time persisting in insane environments.

    You can think of me as having two modes, maybe. In my inductive mode, I’m open to raw feedback. I’m pattern-sensitive. In my dichotomous mode, I close myself off to pointless ambiguity. This second mode is protective (especially in incoherent environments, where I can get fucked up real fast). But that mode isn’t final. What you call my “black-and-white thinking” is a first-pass strategy to isolate a pattern. I grab hold of a signal (true/false) before I layer nuance back in. And in coherent environments, nuance does get layered back.
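
    Here’s a crude sketch of those two modes in code. It’s purely illustrative…the function names, the numbers, and the “error budget” threshold are all made up for the example, and none of this is anyone’s published model:

        def precision_weighted_update(estimate, prior_precision, observation, error_precision):
            # Standard precision-weighted blend of the current model and incoming data.
            total = prior_precision + error_precision
            return (prior_precision * estimate + error_precision * observation) / total

        def make_sense_of(observations, prior_precision=0.2, error_precision=1.0, error_budget=5.0):
            # Weak prior, heavy weighting of raw input: the "inductive" mode.
            # If unresolved error piles up past the budget, snap to a categorical
            # verdict instead of continuing to integrate: the "dichotomous" mode.
            estimate = 0.0
            unresolved_error = 0.0
            for x in observations:
                unresolved_error += abs(x - estimate)
                if unresolved_error > error_budget:
                    return "dichotomous mode", ("wrong" if estimate < 0 else "right")
                estimate = precision_weighted_update(estimate, prior_precision, x, error_precision)
            return "inductive mode", round(estimate, 3)

        # Coherent environment: small, consistent signals...the estimate just tracks the data.
        print(make_sense_of([0.1, 0.2, 0.15, 0.25]))
        # Incoherent environment: contradictory signals pile up...emergency closure.
        print(make_sense_of([3.0, -4.0, 5.0, -6.0]))

    The point isn’t the numbers. It’s that the same weighting rule produces both “modes”…what changes is how much unresolved error the environment dumps on it.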

    Let’s keep circling this.

    In coherent environments (where feedback is timely, proportionate, local, and meaningful), my particular weighting of error signals looks inductive and nuanced to you. I follow details, I update my view of things flexibly, and I build very fine-grained patterns. Here, you admire me. You say things like, “See? You’re so smart! I couldn’t do that. Why can’t you apply this brain of yours to _______?”

    What you don’t realize, maybe, is that the ______ you just mentioned? It’s an incoherent environment. In that environment, rules are contradictory, consequences are delayed arbitrarily, and abstractions are layered on abstractions for reasons of control, etc. And in that environment, the flood of irreconcilable errors becomes intolerable for me. My nervous system reaches for coherence at any cost. On my first pass in every situation, if I’m forced to “share my thoughts,” you’ll see me collapse complexity into a binary frame (“this is right” “that is wrong”). That’s what you see. What you don’t see (if you never let me get to it), is that I follow that by testing it, and testing it, and testing it…adding details at each step and adjusting my frame as I go. But that first pass? That first pass looks like reductionism or “black-and-white thinking” to you.

    I think if Andy Clark were here, he would say that I have weak priors and that I weigh errors strongly. (He’d probably use words like “underweight” and “overweight,” which I hate…but forgive him for.) That in stable conditions, mine is an inductive, adaptive, and precise process. But that in unstable conditions (unstable as in a room full of flat-earthers, not unstable as in a tornado is coming or a bear is running toward me), my process leads to overwhelm.

    My brain clamps onto the simplest model…and when it isn’t allowed to move on from there (the flat-earthers are trying to solve intercontinental flight, and the lawn-zombies are trying to figure out how to keep their biological deserts green), that looks like reductionism. Those are problems so decontextualized they have no resolution. And when resolution doesn’t seem possible, my system is forced into a kind of emergency closure because feedback has become utterly fucking useless.

    Let’s look at the lawn example.

    I’m fond of saying that North Americans mowing 40,000,000 acres of lawn is sheer stupidity. Lunacy, hands-down, from any perspective. When I voice that opinion, intelligent people hear it as an oversimplification. And if they know I’m autistic, they put it down to a difficulty with (or outright inability to comprehend) multi-variable causality. This confuses me greatly. (And I suppose I confuse them.)

    This is probably a case of double empathy. Let’s look at it from both sides.

    I see a signal…mowing tens of millions of acres is, at best, a colossal waste of time, fuel, water, and soil. And I voice it simply: “Stupidity.” To me, that isn’t reductionist. I’m cutting through incoherence to register a glaring error. Something that simply doesn’t make sense. An argument not worth having in this lifetime.

    But when my intelligent (and I’m not using that term facetiously) neurotypical listener hears my opinion, they hear an inability to process nuance. Lawns have cultural history. They have aesthetic value. There are economic incentives involved, municipal codes, homeowner psychology, and so forth. They interpret my clarity as an inability to juggle multi-causality, instead of as a refusal to rationalize nonsense.

    Let’s anchor this gap in neuroscience.

    Neurotypical people put a lot of weight into their priors, which is a fancy way of saying they assume the world is coherent. When they hear “Mowing 40,000,000 acres of lawn is complete fucking idiocy on every level,” they automatically begin to search for legitimizing narratives. In other words, they start to search for an explanation for why millions of lawns do make sense (i.e. are comprehensible).

    I, on the other hand, put a lot of weight into error signals, which is a fancy way of saying I can’t ignore contradictions. And from my perspective (and from every epistemic perspective that doesn’t involve human fictions), it doesn’t matter how many variables you pile into the lawn argument…the outcome is wasteful and absolutely fucking absurd.

    There’s a mismatch here. You see consensus as coherence, and complexity as explanatory. In cases like these, I see complexity as a distraction from the basic error. I collapse your absolute mess of contradictory justifications (shorter grass = fewer ticks (why grass at all?!?), long grass = an eyesore (what does that mean in reality-terms?), we need to think of neighbors/property value/invasive species (why, why, why, among all the things you could “think” of in your 80 summers on this planet, are you “thinking” of those particular bullshit abstractions?)). I can process multi-variable causality just fine. But I refuse to let causality excuse incoherence (even typing that makes my head hurt).

    I’m misread, socially. Constantly misread. Neurotypical people equate “nuance” with adding mitigating factors until the critique blurs. And when I resist that, they frame it as over-simplification. I’m the one trying to hold onto the full causal picture. Something can be complex and stupid at the same time. Complexity and stupidity are hardly mutually exclusive in civilization…in fact, they might be positively correlated.

    I can call something stupid without denying complexity. So can you. Try it.

  • What IS domestication?

    At its core, domestication is a selection process shaped by control objectives. I want a cow that is easiest to extract milk and meat from. I want a chicken for the easiest scrambled eggs and sweet and sour wings possible. I want a dog for the best companionship or easiest hunting experience possible. When we domesticate a plant, animal, or person, we have an objective. We’re trying to extract something and we want that extraction to be as quick and easy as possible. So what gets favored or suppressed in a breeding program or a training regimen depends on what makes the organism more compatible with our human-defined system of control. And across species (including humans), here’s what that looks like…

    We select for individuals that are calmer, more tolerant of handling, and more willing to defer to authority or hierarchy. We want reduced variability in responses to stimuli (think “calm” dog vs “reactive” dog)…steady, less surprising behavior is easier to manage. We consistently reward (or at least end up with) juvenile traits (neoteny) like playfulness, submissiveness, and prolonged dependency. These make the domesticate easier to mold and keep in a controlled state. Traits that allow reproduction on human terms are also heavily favored…earlier sexual maturity, more frequent cycles, larger litter sizes (or in humans, lineages that adapt to arranged marriages, “concubinage,” follow religious edicts to “go forth and multiply,” etc.). Finally, the ability to navigate symbolic or artificial rules is prized…dogs attuning to human gestures, and humans adapting to bureaucracies or religions.

    There are a few qualities fundamentally opposed to control. These are the traits that get selected against in a domestication process. Reactive aggression and impulsivity clearly disrupt group or handler control. High sensory reactivity and vigilance are problems, as well. Skittishness, flight responses, or overreaction to confinement make animals (and people) harder to control. Next, any overt sign of autonomy / resistance to control is obviously and inherently something that needs deletion. Whether that’s a dog that won’t stay in the yard, sheep that have a habit of jumping fences, or a human who won’t accept various forms of subjugation…it simply can’t be tolerated. Broadly speaking, unpredictability needs to be trained / bred out. Any trait that makes outcomes less stable…irregular reproduction, volatile behavior, refusal to follow routine. Excessive independence has no home here. Animals that refuse to bond with humans and humans that refuse to bond with institutions are destined for the slaughterhouse, prison, shelter, behavioral therapy, the streets, the margins.

    Taken together, this suite of traits produces the domestication syndrome. Smaller brains, reduced sexual dimorphism, more juvenile features, dampened stress responses, and greater and greater compliance. It’s clearly a stunting process…but one that optimizes survival inside artificial systems of control.

    So, what is domestication?

    You could say it’s the selection against an animal’s drive to act independently of a system of control.

  • Where does the real control begin? (How did we get from egalitarianism to building permits and marriage licenses?)

    Let’s start with scarcity.

    When resources are abundant and accessible, individuals and small groups can meet their needs independently…there’s little to no incentive for hierarchy or some kind of enforced coordination. Scarcity (whether it’s real or engineered) creates conditions where access might need to be regulated. Control systems (chiefs, priests, bureaucrats) emerge to decide who gets what, when, and how. This doesn’t just apply to food…think of land, water, labor, and information. The more scarce these things are, the more power seems to concentrate in the people who manage distribution.

    As populations grow and density increases, resource demand outpaces availability in a given area. This creates a new kind of scarcity, call it structural scarcity…resources aren’t always “gone,” they might just be stretched or hoarded. Anthropologists (James C. Scott, Mark Nathan Cohen) argue that the rise of agriculture wasn’t a leap forward. It was a trap…higher population density leads to soil depletion, which leads to periodic famine, which leads to tighter and tighter social controls. Scarcity and density rise together, since density both consumes more and makes groups easier to control (rationing).

    Think of control as a feedback loop or a vicious cycle. Scarcity leads to control…and the more “successful” that control is, the more scarcity results. In irrigation states like Mesopotamia, drought and a high population necessitated a sort of irrigation bureaucracy. But irrigation caused the salinization of soils, which led to even more scarcity, which led to even stronger central control. Once a system of control exists, it doesn’t go away when scarcity disappears…it invents new scarcities (taxes, debts, borders, artificial shortages) to keep power. This happens regardless of the ideology or political system. This isn’t a fascism or capitalism story.

    Let’s look at this from a predictive processing angle. Scarcity increases prediction errors (will I eat tomorrow?). Control systems offer social priors (“obey the priest, follow the ration schedule”) that reduce uncertainty at the cost of autonomy. People trade independence for predictability. The control system becomes what guarantees your survival.

    Can you guess what peaks with resource scarcity and population density? You got it…human domestication (and domestication in general).

    After the last glacial maximum (12,000-20,000 years ago), we saw mass megafauna extinctions and human population growth. In other words, serious local scarcity. During the early Holocene (10,000-12,000 years ago), human population density rose in fertile regions (Levant, Yangtze, Andes).

    And this is where we really see scarcity management become chronic.

    The average human brain shrank by ~10-15% beginning ~30,000 years ago, with the sharpest decline between 10,000-20,000 years ago…exactly the same window when density/settlement intensified. Bones became lighter and less robust ~15,000 years ago (this is linked to sedentism/reduced mobility). Male and female skeletal differences narrowed in the same timeframe. And signs of hierarchy and control systems all emerge right as density and scarcity peak.

    The feedback loop is hard to miss. Density produces scarcity (real and perceived), scarcity drives new control systems, control systems select for predictability/attenuation (flattening diversity of responses to it), domestication traits become more pronounced, and those traits, in turn, make populations more compatible with density and hierarchy…accelerating the cycle.

    The loop shapes landscapes, plants, animals and people. What thrives under high-density scarcity-control systems is predictable, compliant, and attenuated. And across millennia, this produces the domesticated phenotype…flatter, more manageable humans. The loop is called civilization. It entrenches domestication traits and expands until it collapses.

    (Christopher Ryan and some archaeologists have an abundance-first model…but it leads to the same loop…once the abundance is gone, you have a sedentary group of people living in scarcity…I think this probably happened in certain places at certain times.)

  • Domestication V1.0 / V2.0 (hunter-gatherers / suburbanites)

    If the egalitarian hunter-gatherers I talk about so much are the “stall point” in the runaway process of domestication, then why do they carry the same suite of domestication traits seen in your average 2025 city-dweller? The same smaller brains (~10-15%) compared to archaic Homo. The same reduced sexual dimorphism (males and females less divergent in body size and head features). The same gracile skeletons, shorter faces, and smaller teeth. The same prolonged juvenile traits.

    Physiologically, they’re as domesticated as farmers, CEOs, and everyone else alive. The domestication syndrome is species-wide, and at first glance that seems to be a hole in my hypothesis.

    I’d argue that where HGs differ isn’t in the physiological baseline, but in the social use of those traits. They maintain egalitarian checks (mocking, ostracism, flexible band membership) that prevent runaway hierarchy. They don’t fully overweight social priors at the expense of sensory/environmental feedback. And their “flattening” is limited…diverse behavior is tolerated, as long as no one seizes too much control. I see them as domesticated bodies living in relatively non-domesticating systems.

    This is how I see it: a certain suite of traits emerges once as part of Homo sapiens’ domestication arc. Different systems (e.g. HG bands vs. states) then determine how that suite is expressed/reinforced/suppressed. Think of it as baseline domestication (the whole physiological package…universal by ~30,000 years ago) and runaway domestication (social systems amplifying the control side and pushing traits further…heavier attenuation and greater conformity).

    Runaway domestication is where some extra behavioral flattening happens. The very strong selection against independence within control systems. Hunter-gatherers do exert control, but on a much smaller scale. You could say their control systems, while not allowing runaway hierarchy (kings and presidents), still weeded out the most disruptive unpredictability…people who can’t cooperate at all, or who are violently antisocial. But under agriculture and states (post Younger Dryas), that weeding out is magnified into systems of slavery, bureaucracy, and mass coercion. The driver is the same (selection against independence)…it’s the degree that’s different.

    We’re talking about a 2-stage process, then. The species-wide baseline (enough selection against reactivity/autonomy to stabilize a group) and a civilizational runaway (the selection against independence becomes way more aggressive).

    If that sounds like bullshit to you, consider the Australia case.

    Anatomically modern humans reached Australia around 65,000 years ago. That’s before the sharpest phase of human brain-size shrinkage ~30,000-10,000 years ago (a known marker of domestication in animals and humans). The original Australian populations diverged early and developed in almost complete isolation from later Eurasian agricultural/civilizational pressures. They remained foragers until very recently (colonial contact), and while they engaged in complex land management systems (fire-stick farming and aquaculture), there was no large-scale ag, urbanism, or state hierarchy. In other words, they had far less exposure to the kinds of control systems that I accuse of driving runaway domestication.

    And what do we see?

    Sure enough, fossil and skeletal data from Australian populations don’t show anywhere near the same degree of gracilization that appears in Holocene Eurasia. They have robust cranial and skeletal features compared to Europeans of the same era. That suggests to me that the full domestication suite probably didn’t occur in the same way there. You could say they carry the “baseline” domestication package, but not the civilizational package.

    If full domestication is selection against unpredictability in control systems, then societies without large-scale control systems (like foragers in Australia) should show less of what I’m calling runaway domestication. And that’s exactly what we see.

  • Why Wrangham’s Hypothesis is Hobbesian (again)

    Wrangham’s “reactive aggression hypothesis” carries an unmistakable Hobbesian flavor.

    His baseline assumption is that early humans, like chimpanzees, were naturally violent, impulsive, and prone to destructive outbursts. What made us “human” was learning to suppress these brutish tendencies through coalitionary control (groups executing overly aggressive males). Society gets framed as a kind of pacifying mechanism…a necessary check on our otherwise nasty, volatile nature.

    That’s basically Hobbes’s worldview in Leviathan…life in the “state of nature” is “solitary, poor, nasty, brutish, and short,” and order only emerges once violence is curbed by collective enforcement. Wrangham’s twist is to naturalize that process into evolution. Rather than a social contract, it’s genetic selection against hotheaded males.

    He’s shoehorned self-domestication into the same tired civilization-as-the-pinnacle-of-human-achievement narrative. We tamed ourselves, became more peaceful, and thus enabled cooperation and culture. But he overlooks that cooperation can be coercive, that proactive aggression thrives inside systems, and that flattening diversity isn’t the same as “progress.” He gives us what might be the rosiest view of civilization so far…as though its violence is an aberration, rather than built into the very logic of control.

    To his credit, however, Wrangham doesn’t dismiss hunter-gatherers as “brutish” in the Hobbesian sense. In fact, he often emphasizes that foragers, especially recent and modern ones, are comparatively egalitarian, cooperative, and less violent than chimpanzees. The Hobbesian tone only creeps into his baseline assumption about the deep evolutionary past, with regard to earlier humans (who were, in his view, originally chimp-like in temperament). The shift toward hunter-gatherer egalitarianism, in his model, only becomes possible after reactive aggression is curbed (beginning about 300,000 years ago). Meaning we were…chimp-like up to that point? That discounts a lot of pre-domesticated human behavior that clearly indicates coordination (as far back as Homo erectus).

    Wrangham’s view of hunter-gatherers is positive, then, but conditional. He treats them as the first beneficiaries of domestication…the evidence that our “self-taming” worked, rather than evidence that humans were never that brutish to begin with.

    I agree, of course, that hunter-gatherers are fiercely egalitarian…but I don’t treat that as “proof of tameness”. On the contrary, I see that as the feedback mechanism that keeps domestication in check.

    Domestication is attenuation and control, with disruptive reactivity being flattened (in plants, animals, landscapes, and people). Hunter-gatherer egalitarianism works differently…the group prevents runaway control. They do this by enforcing sharing, mocking or ostracizing would-be dominators, and keeping hierarchies shallow. That’s negative feedback at the group level…it resists concentration of power.

    Seeing early humans as brutish is largely our own bias. We can’t seem to fathom wild pre-civilized people as being cooperative. They were cooperative, but their cooperation didn’t require heavy attenuation. They didn’t demand sameness for its own sake…only enough coordination to keep the group functional. In predictive coding terms, they didn’t overweight social priors. Reality (food, predators, ecology) was the primary anchor. Where’s my evidence for this? I’ll get to that.

    For now, suffice it to say that hunter-gatherers sit before runaway selection (for control/compliance/consensus). Their egalitarianism is a brake on domestication. It ensures no one individual (or coalition) gets too much control. This is why hunter-gatherers show humans living in species-appropriate systems…feedback-rich, low hierarchy, high tolerance for diversity of behavior.

    In short, Wrangham’s story is that egalitarianism is what domestication made possible. But I’m saying that egalitarianism was the protective mechanism that kept humans from being fully domesticated (until agriculture and states removed it).  

  • Was Hobbes right? (and other holes in Wrangham’s narrative)

    Wrangham’s reading becomes “Hobbesian” only if I treat modern Homo sapiens as a transparent example of “what nature does.” But if I see most modern humans as the outcome of a runaway selection process (which I do), then what he’s describing isn’t “the natural course of things”…it’s one very peculiar path, shaped by group-enforced control, ecological shocks, and self-reinforcing dynamics.

    In Wrangham’s frame, humans reduced reactive aggression “naturally,” like bonobos, by killing off bullies. This made us more cooperative and domesticated, enabling civilization. This makes our docility some kind of moral progress…proof of “better angels.”

    But when we look at this as runaway selection, we see that humans reduced disruptive reactivity not because it was inherently maladaptive, but because control systems selected against it. Those who resisted were killed, enslaved, or excluded, while compliant individuals reproduced. It wasn’t a noble trajectory toward peace. It’s a feedback loop of domestication…each round of control flattens diversity, narrows behavior, and strengthens the system’s grip.

    I propose that modern “cooperation” isn’t evidence of a gentle human nature, but of attenuation. A domesticated phenotype optimized for predictability. And what Wrangham calls “our success” is really a fragile state of overshoot. More docile humans and larger coordinated systems make for the massive ecological extraction we see today. Instead of Hobbes’s “nasty, brutish, and short” as the baseline, the baseline was probably messier but more adaptive…with greater tolerance for autonomy, variability, and feedback from the environment.

    I think the Hobbesian story is itself a product of domesticated minds narrating their condition as “progress” (I’m in full agreement with Christopher Ryan here). What looks like the triumph of peace is really the triumph of control which, taken far enough, undermines both autonomy and ecological survival.

    I want to take a second (third? fourth? tenth?) look at Wrangham’s take on reactive aggression now. Because there’s a lot about it that doesn’t sit comfortably with me.

    Reactive aggression (the “heat of the moment,” crimes of passion) is still recognized as human. It may be tragic or destructive, but the law often interprets it as impulsive, unplanned…an overflow of feeling. That makes it mitigating. Proactive aggression (premeditated, calculated), on the other hand, is seen as more dangerous. It reflects intentional control, not eruption. Society punishes it more harshly because it reveals a deliberate strategy of harm. This suggests (to me, anyway) that people intuitively grasp that reactivity is part of being alive, whereas proactive aggression is a sort of deviation…weaponizing intelligence for domination.

    Wrangham says that humans became “civilized” by suppressing reactive aggression. But I think everyone can agree that cultural practice indicates we still see reactive aggression as understandable, even forgivable. What we really can’t tolerate is schemed violence…the kind of proactive aggression that builds empires, executes slaves, or engineers genocide. I think the very logic of law undermines Wrangham’s claim. If reactive aggression were the great evolutionary danger, why is it less punished than the thing he says persisted unchanged?

    Which brings me back to the better explanatory model…domestication didn’t simply reduce hot tempers. It systematically removed resistance (any kind of reactivity that disrupts control). But at the same time, it rewarded (and still rewards) the forms of aggression that can operate through the system…planned, symbolically justified, and bureaucratically executed. This is why the “banality of evil” (Hannah Arendt’s term for the bureaucratic normalcy of atrocity) feels so resonant: proactive aggression is what really flourished under domestication.

    My next bone of contention with Wrangham is that most examples of reactive aggression he provides in his written work and lectures sound a hell of a lot like bullying. Proactive bullying.

    With one hand, he defines reactive aggression as impulsive, hot-blooded, emotionally charged aggression…triggered by provocation or frustration and more or less immediate (not pre-planned). But in the same breath, he gives examples that clearly indicate planning, calculation, and strategic targeting. He cites situations where aggression is used to produce submission in the victim…not some kind of heat-of-the-moment explosion. I don’t know of any psychological taxonomies in which that behavior is a fit for reactive aggression.

    Why? Again, I think part of it has to do with his bonobo comparison. He needs “reactive aggression” as the thing bonobos and humans both suppress, to link his self-domestication theory. It certainly makes the story cleaner, too. “We eliminated bullies” sounds more like moral progress than “we empowered the strategic aggressors.” And it smells like simplification to me. By labeling bullying “reactive,” he folds it into his main category, even if the behaviors clearly involve planning.

    And by stretching his definition of reactive aggression, Wrangham masks the real driver. It wasn’t just hot tempers that got culled. It was all forms of disruptive autonomy. Including resistance, refusal, and yes, sometimes reactive outbursts. What flourishes is strategic aggression aligned with control systems (raids, executions, conquest, slavery). He’s essentially misclassifying proactive violence as the very thing his model claims was eliminated.

    The reason I’m attacking Wrangham so much is (likely) that there’s so much else I like about his hypothesis, which makes the abrupt turn he takes all the more upsetting. First, coalitionary enforcement absolutely matters. Once language and symbolic coordination were possible, groups could target individuals who disrupted group order. Second, domestication traits absolutely show up in humans. Smaller brains, more gracile features, extended juvenility…these parallel what happens when animals are bred for compliance. And Wrangham’s distinction between proactive and reactive aggression is useful, even if he overcommits to one side.

    I get upset when he emphasizes a moral arc…we became “nicer” by suppressing reactive group members. The archaeological and historical record (slavery, bottlenecks, harems, systemic violence) points to a far darker dynamic…proactive aggression, control, and planned violence were selected for because they succeed in hierarchical systems. I don’t know how he doesn’t see this. How doesn’t he see the removal of disruptive resistance to control systems when he browses a history book through a domestication lens?

    I like Wrangham’s theories without the irrational optimism. For me, that looks like this: scarcity and group-size growth lead to a greater need for control and coordination. Coalitions form, but instead of only targeting bullies, they target all disruptive reactivity (anyone who won’t conform to the group’s “world-as-it-should-be” model). Reactive individuals (autonomous resistors) are killed or excluded…predictable, compliant individuals survive and reproduce. And, as a byproduct, proactive aggression thrives…because it’s the aggression most compatible with systems of control. Paradox solved.
