Tag: mental-health

  • No…autistic people are not rigid thinkers.

    Autistic people have challenges with cognitive flexibility. Autistic people are black-and-white thinkers. There’s no in-between with us…something is either good or bad. We’re rigid thinkers…stubborn. We oversimplify the world. We’re immune to nuance. We catastrophize. We’re all-or-nothing.

    Different words for the same thing…you’ve heard the bullshit. In diagnostic manuals. On YouTube. In therapy.

    You’ve also heard the other bullshit. Autistic people are inductive thinkers. We focus too much on the insignificant details and miss the big picture. We can’t see the forest for the trees.

    Different words for the same bullshit…but the opposite bullshit…?

    So, am I a paradox? Am I both a reductive and inductive thinker?

    If there’s anything I’m allergic to, it’s paradox…contradiction. A feeling rises in me. My bullshit detector starts to ring out louder than usual. And, like every paradox I encounter, I know this one is wrong…the reason it’s wrong simply needs to be discovered. (Incidentally, this is the stage in my thinking at which I look most reductionist.)

    And so I think. A lot. I have no choice. My brain won’t leave contradictions unresolved. And here’s what I’ve been thinking these past few days.

    The contradiction (autistic people focus on the smallest details instead of the big picture and autistic people have oversimplified models of the world and refuse to account for contradictory details) comes from where you zoom in on my cognitive process.

    Let’s look at what I’m accused of first: black-and-white thinking (aka dichotomous thinking, rigid thinking, etc.). It means I come to snap categorical judgments like right vs wrong, fair vs unfair, safe vs unsafe. I see it written in descriptions of autistic behavior everywhere…rigidity, over-simplification, an inability to tolerate ambiguity. In predictive processing terms, it’s framed as a strategy to minimize error fast. The autistic person overweights a clear prior (“this is wrong”) instead of juggling noisy or contradictory signals.

    Confusingly, I’m complimented for being the polar opposite…for being an inductive thinker. In the same conversation, even. Apparently, I’m good at building generalizations from specific instances. I’m good at noticing patterns in particulars. In predictive processing terms, my attention to detail and my ability to detect anomalies (in patterns, like contradictions) is attributed to the fact that I don’t start with strong priors (think of them as expectations or assumptions). I let patterns “emerge” from the ground up instead of forcing a theory I have. In other words, compared to your “average” person, I give more weight to incoming data and less weight to stories (whether my own, or a group’s). Bottom-up signals drive my learning. I have to learn the “hard way.”

    The feeling of this paradox is…it’s like you’re catching me at different stages of my thinking process, and naming the stages as opposite things.

    I’ll do my best to describe what my process feels like, from the inside, as me.

    When I first try to make sense of something, I weight raw input heavily. I don’t iron out inconsistencies with conventional stories. That probably explains why I notice details that you seem to miss. But when contradiction or incoherence (i.e. bullshit) piles up, and disengaging from the situation isn’t an option, I need to force coherence. I need to escape the stress of unresolved error. For example, if I’m among a group of people who are trying to solve a problem irrationally, who won’t listen to reason, and I don’t have the option of leaving? I might snap to a categorical judgment. I have a very hard time persisting in insane environments.

    You can think of me as having two modes, maybe. In my inductive mode, I’m open to raw feedback. I’m pattern-sensitive. In my dichotomous mode, I close myself off to pointless ambiguity. This second mode is protective (especially in incoherent environments, where I can get fucked up real fast). But that mode isn’t final. What you call my “black-and-white thinking” is a first-pass strategy to isolate a pattern. I grab hold of a signal (true/false) before I layer nuance back in. And in coherent environments, nuance does get layered back.

    Let’s keep circling this.

    In coherent environments (where feedback is timely, proportionate, local, and meaningful), my particular weighting of error signals looks inductive and nuanced to you. I follow details, I update my view of things flexibly, and I build very fine-grained patterns. Here, you admire me. You say things like, “See? You’re so smart! I couldn’t do that. Why can’t you apply this brain of yours to _______?”

    What you don’t realize, maybe, is that the ______ you just mentioned? It’s an incoherent environment. In that environment, rules are contradictory, consequences are delayed arbitrarily, and abstractions are layered on abstractions for reasons of control, etc. And in that environment, the flood of irreconcilable errors becomes intolerable for me. My nervous system reaches for coherence at any cost. On my first pass in every situation, if I’m forced to “share my thoughts,” you’ll see me collapse complexity into a binary frame (“this is right” “that is wrong”). That’s what you see. What you don’t see (if you never let me get to it), is that I follow that by testing it, and testing it, and testing it…adding details at each step and adjusting my frame as I go. But that first pass? That first pass looks like reductionism or “black-and-white thinking” to you.

    I think if Andy Clark were here, he would say that I have weak priors and that I weigh errors strongly. (He’d probably use words like “underweight” and “overweight,” which I hate…but forgive him for.) That in stable conditions, mine is an inductive, adaptive, and precise process. But that in unstable conditions (unstable as in a room full of flat-earthers, not unstable as in a tornado is coming or a bear is running toward me), my process leads to overwhelm.

    My brain clamps onto the simplest model…and when it isn’t allowed to move on from there (the flat-earthers are trying to solve intercontinental flight, and the lawn-zombies are trying to figure out how to keep their biological deserts green) that looks like reductionism. Those are problems that are so decontextualized they have no resolution. And when resolution doesn’t seem possible, my system is forced into a kind of emergency closure because feedback has become utterly fucking useless.

    Let’s look at the lawn example.

    I’m fond of saying that North Americans mowing 40,000,000 acres of lawn is sheer stupidity. Lunacy, hands-down, from any perspective. When I voice that opinion, intelligent people hear it as an oversimplification. And if they know I’m autistic, they put it down to a difficulty with (or outright inability to comprehend) multi-variable causality. This confuses me greatly. (And I suppose I confuse them.)

    This is probably a case of double empathy. Let’s look at it from both sides.

    I see a signal…mowing tens of millions of acres is, at best, a colossal waste of time, fuel, water, and soil. And I voice it simply: “Stupidity.” To me, that isn’t reductionist. I’m cutting through incoherence to register a glaring error. Something that simply doesn’t make sense. An argument not worth having in this lifetime.

    But when my intelligent (and I’m not using that term facetiously) neurotypical listener hears my opinion, they hear an inability to process nuance. Lawns have cultural history. They have aesthetic value. There are economic incentives involved, municipal codes, homeowner psychology, and so forth. They interpret my clarity as an inability to juggle multi-causality, instead of as a refusal to rationalize nonsense.

    Let’s anchor this gap in neuroscience.

    Neurotypical people put a lot of weight into their priors, which is a fancy way of saying they assume the world is coherent. When they hear “Mowing 40,000,000 acres of lawn is complete fucking idiocy on every level,” they automatically begin to search for legitimizing narratives. In other words, they start to search for an explanation for why millions of lawns do make sense (i.e. are comprehensible).

    I, on the other hand, put a lot of weight into error signals, which is a fancy way of saying I can’t ignore contradictions. And from my perspective (and from every epistemic perspective that doesn’t involve human fictions), it doesn’t matter how many variables you pile into the lawn argument…the outcome is wasteful and absolutely fucking absurd.

    There’s a mismatch here. You see consensus as coherence, and complexity as explanatory. In cases like these, I see complexity as a distraction from the basic error. I collapse your absolute mess of contradictory justifications (shorter grass = fewer ticks (why grass at all?!?), long grass = an eyesore (what does that mean in reality-terms?), we need to think of neighbors/property value/invasive species (why, why, why, among all the things you could “think” of in your 80 summers on this planet, are you “thinking” of those particular bullshit abstractions?)) into one word: stupidity. I can process multi-variable causality just fine. But I refuse to let causality excuse incoherence (even typing that makes my head hurt).

    I’m misread, socially. Constantly misread. Neurotypical people equate “nuance” with adding mitigating factors until the critique blurs. And when I resist that, they frame it as over-simplification. I’m the one trying to hold onto the full causal picture. Something can be complex and stupid at the same time. Complexity and stupidity are hardly mutually exclusive in civilization…in fact, they might be positively correlated.

    I can call something stupid without denying complexity. So can you. Try it.

  • Was Hobbes right? (and other holes in Wrangham’s narrative)

    Wrangham’s reading becomes “Hobbesian” only if I treat modern Homo sapiens as a transparent example of “what nature does.” But if I see most modern humans as the outcome of a runaway selection process (which I do), then what he’s describing isn’t “the natural course of things”…it’s one very peculiar path, shaped by group-enforced control, ecological shocks, and self-reinforcing dynamics.

    In Wrangham’s frame, humans reduced reactive aggression “naturally,” like bonobos, by killing off bullies. This made us more cooperative and domesticated, enabling civilization. This makes our docility some kind of moral progress…proof of “better angels.”

    But when we look at this as runaway selection, we see that humans reduced disruptive reactivity not because it was inherently maladaptive, but because control systems selected against it. Those who resisted were killed, enslaved, or excluded, while compliant individuals reproduced. It wasn’t a noble trajectory toward peace. It’s a feedback loop of domestication…each round of control flattens diversity, narrows behavior, and strengthens the system’s grip.

    I propose that modern “cooperation” isn’t evidence of a gentle human nature, but of attenuation. A domesticated phenotype optimized for predictability. And what Wrangham calls “our success” is really a fragile state of overshoot. More docile humans and larger coordinated systems make for the massive ecological extraction we see today. Instead of Hobbes’s “nasty, brutish, and short” as the baseline, the baseline was probably messier but more adaptive…with greater tolerance for autonomy, variability, and feedback from the environment.

    I think the Hobbesian story is itself a product of domesticated minds narrating their condition as “progress” (I’m in full agreement with Christopher Ryan here). What looks like the triumph of peace is really the triumph of control which, taken far enough, undermines both autonomy and ecological survival.

    I want to take a second (third? fourth? tenth?) look at Wrangham’s take on reactive aggression now. Because there’s a lot about it that doesn’t sit comfortably with me.

    Reactive aggression (the “heat of the moment,” crimes of passion) is still recognized as human. It may be tragic or destructive, but the law often interprets it as impulsive, unplanned…an overflow of feeling. That makes it mitigating. Proactive aggression (premeditated, calculated), on the other hand, is seen as more dangerous. It reflects intentional control, not eruption. Society punishes it more harshly because it reveals a deliberate strategy of harm. This suggests (to me, anyway) that people intuitively grasp that reactivity is part of being alive, whereas proactive aggression is a sort of deviation…weaponizing intelligence for domination.

    Wrangham says that humans became “civilized” by suppressing reactive aggression. But I think everyone can agree that cultural practice indicates we still see reactive aggression as understandable, even forgivable. What we really can’t tolerate is schemed violence…the kind of proactive aggression that builds empires, executes slaves, or engineers genocide. I think the very logic of law undermines Wrangham’s claim. If reactive aggression were the great evolutionary danger, why is it less punished than the thing he says persisted unchanged?

    Which brings me back to the better explanatory model…domestication didn’t simply reduce hot tempers. It systematically removed resistance (any kind of reactivity that disrupts control). But at the same time, it rewarded (and still rewards) the forms of aggression that can operate through the system…planned, symbolically justified, and bureaucratically executed. This is why the “banality of evil” (Hannah Arendt’s term for the bureaucratic normalcy of atrocity) feels so resonant: proactive aggression is what really flourished under domestication.

    My next bone of contention with Wrangham is that most examples of reactive aggression he provides in his written work and lectures sound a hell of a lot like bullying. Proactive bullying.

    With one hand, he defines reactive aggression as impulsive, hot-blooded, emotionally charged aggression…triggered by provocation or frustration and more or less immediate (not pre-planned). But in the same breath, he gives examples that clearly indicate planning, calculation, and strategic targeting. He cites situations where aggression is used to produce submission in the victim…not some kind of heat-of-the-moment explosion. I don’t know of any psychological taxonomies in which that behavior is a fit for reactive aggression.

    Why? Again, I think part of it has to do with his bonobo comparison. He needs “reactive aggression” as the thing bonobos and humans both suppress, to link his self-domestication theory. It certainly makes the story cleaner, too. “We eliminated bullies” sounds more like moral progress than “we empowered the strategic aggressors.” And it smells like simplification to me. By labeling bullying “reactive,” he folds it into his main category, even if the behaviors clearly involve planning.

    And by stretching his definition of reactive aggression, Wrangham masks the real driver. It wasn’t just hot tempers that got culled. It was all forms of disruptive autonomy. Including resistance, refusal, and yes, sometimes reactive outbursts. What flourishes is strategic aggression aligned with control systems (raids, executions, conquest, slavery). He’s essentially misclassifying proactive violence as the very thing his model claims was eliminated.

    The reason I’m attacking Wrangham so much is (likely) that there’s so much else I like about his hypothesis, which makes the abrupt turn he takes extra upsetting. First, coalitionary enforcement absolutely matters. Once language and symbolic coordination were possible, groups could target individuals who disrupted group order. Second, domestication traits absolutely show up in humans. Smaller brains, more gracile features, extended juvenility…these parallel what happens when animals are bred for compliance. And Wrangham’s distinction between proactive and reactive aggression is useful, even if he overcommits to one side.

    I get upset when he emphasizes a moral arc…we became “nicer” by suppressing reactive group members. The archaeological and historical record (slavery, bottlenecks, harems, systemic violence) points to a far darker dynamic…proactive aggression, control, and planned violence were selected for because they succeed in hierarchical systems. I don’t know how he doesn’t see this. How does he not see the removal of disruptive resistance to control systems when he browses a history book through a domestication lens?

    I like Wrangham’s theories without the irrational optimism. For me, that looks like this: scarcity and group-size growth lead to more need for control and coordination. Coalitions form, but instead of only targeting bullies, they target all disruptive reactivity (anyone who won’t conform to the group’s “world-as-it-should-be” model). Reactive individuals (autonomous resistors) are killed or excluded…predictable, compliant individuals survive and reproduce. And, as a byproduct, proactive aggression thrives…because it’s the aggression most compatible with systems of control. Paradox solved.

  • Human Self-Domestication…selection against autonomy, not hot heads.

    Richard Wrangham frames selection against reactive aggression (he uses the term “hot heads”) as the driver of human self-domestication and argues that our level of proactive aggression largely remained the same. He describes these as distinct evolutionary strategies, each with different adaptive costs and benefits.

    To be clear, reactive aggression is impulsive, emotionally-driven violence in response to provocation or frustration (e.g. bar fights, chimpanzee dominance squabbles, etc.). Proactive aggression is calculated, planned violence deployed strategically for advantage (e.g. ambushes, executions, coordinated raids).

    Wrangham’s central point is that self-domestication arises when reactive aggression is consistently punished (and culled), while proactive aggression not only persists but is sometimes institutionalized (authorities get a monopoly on violence).

    His reasoning is as follows.

    In small-scale societies, reactive aggressors were costly to group stability. They disrupted cooperation, created unpredictability, and risked alienating allies. With language and coalitionary power, groups gained the ability to collectively punish or kill these “hot heads.” Over many generations, this reduced the frequency of impulsively aggressive temperaments in the gene pool. The result is a calmer, more tolerant baseline disposition in humans compared to chimpanzees…one of the classic “domestication syndrome” traits.

    What rubs me the wrong way is how quickly Wrangham assumes, out of all the traits that make up domestication syndrome, that reactive aggression is what was being selected for. Why wouldn’t the selection pressure be for proactive aggression, for example? Wrangham admits that proactive aggression was reinforced in human evolution. We became better at planned violence (executions, warfare, conquest) than any other primate. Crucially, proactive aggression is socially sanctioned…it’s framed as justice, punishment, or defense of the group. That makes it evolutionarily advantageous, not disadvantageous. In Wrangham’s model, the ability to conspire and kill reactively aggressive individuals is itself an expression of proactive aggression, and therefore part of what made us more cooperative at scale.

    This hypothesis feels reductive to me. Domestication in other species involves selection for predictability, docility, and compliance, not just low reactivity. By centering only on reactive aggression, Wrangham treats self-domestication as a paradoxical success story…calmer humans enabled cooperation, and cooperation enabled civilization. It leaves out what civilization actually does…the flattening of error landscapes, where any form of reactivity (not just aggression) becomes maladaptive in large, controlled groups.

    I’ve been thinking seriously about whether an argument could be made, just as strong or stronger than Wrangham’s, that selection for proactive aggression was the real driver in the human domestication story.

    Large-scale violence is a consistent theme in the emergence of complex societies…from the mass graves of the Neolithic to the conquest states of Mesopotamia, Mesoamerica, and beyond. Warfare, conquest, and raiding were not incidental to civilization. They were the engines of state formation, with proactive aggression (planned and coordinated violence) clearly rewarded at both the genetic and cultural level.

    Take the Y-chromosome bottleneck (5,000-7,000 years ago). It shows that ~90-95% of male lineages were extinguished, leaving only a few dominant bloodlines. This is genetic evidence of the real pattern of civilizational “coordination”: violent conquest and reproductive monopoly by elite men. Where in civilization’s history is Wrangham’s “peaceful coalitionary suppression of ‘bad apples’”? I just don’t see it. “Super-ancestor” events (e.g. Genghis Khan’s lineage) show the same thing in miniature. Proactive, organized aggression yields massive reproductive skew.

    In fact, let’s turn to reproductive skew and polygyny. Even conventional historical narratives tell a story of high-status males (kings, chiefs, emperors, warlords) with harems, concubines, and multiple wives. These are outcomes made possible by proactive aggression…conquest, enslavement, and the monopolization of resources. Lower-status men were excluded from reproduction, not because they were “too reactive” (though those certainly would have been excluded as well), but because they lost wars, were enslaved, or killed.

    Proactive aggression isn’t just violence. It’s long-term planning, coalition-building, deception, and symbolic justification (myths, laws, and religions sanctifying violence make up most of the human history book). These are precisely the traits that expand during the civilizing process…organizational capacity, abstract rule-following, and symbolic reasoning, all in service of controlling large groups.

    I have a few thoughts on why Wrangham favors the other story (selection against reactive aggression). It links directly to his bonobo analogy (their lower reactivity compared to chimps). And it fits with domestication syndrome traits (softer faces and reduced baseline violence), of course. But these seem weak to me. What it comes down to, I believe, is Wrangham gravitating toward an age-old optimistic narrative…humans becoming more cooperative (from the “less hot-headed” angle), writing poems, and painting the Sistine Chapel. To me this is yet another just-so story tilted toward optimism. Real, documented human history (and the present, to a large extent) reads like selection for manipulative, proactive violence. Those who excel at strategic violence and symbolic control reproduce disproportionately. Full stop. This fits much better with what we see in the pages of history…runaway systems of control, hierarchies, and narrative manipulations that still structure our domesticated condition. These are better explained as the costs of selecting for proactive aggression than as some sort of “goodness paradox”.

    In fact, it might be a silly thought experiment, but who’s to say that, if it were possible to actively select for proactive aggression in other species, domestication traits wouldn’t appear?

    To me, domestication syndrome (floppy ears, smaller brains, prolonged neoteny, pigmentation changes, altered reproductive cycles) arises because selection pressure narrows the error landscape of a species. The mechanism most often discussed is neural crest cell changes…but the reason for those changes could be any number of selection pressures. In foxes, it was tameness toward humans. In humans (Wrangham), he says it was lower reactive aggression. But it could also plausibly be selection for predictability, planning, and controlled aggression if that’s what the system demanded (and did, and does!).

    The core idea is that if you reduce the payoff for being “unpredictably reactive” and increase the payoff for being “strategically compliant,” the biology shifts. The neural, hormonal, and developmental systems adapt to reward that niche. The syndrome may look similar (the smaller brains, juvenilization, etc.) because what’s really being selected for is attenuation of wild-type reactivity in general.

    Let’s move away from what I see as Wrangham’s too-narrow focus and broaden this narrative a bit.

    Let’s look at the human story from a predictive coding lens, and consider scarcity as a selector. In times of ecological stress, groups face more prediction errors (crops fail, animals migrate, rivers dry up). Some individuals resolve error by updating their model (adjusting expectations, moving). Others resolve error by updating the world…forcing it into alignment with their model. The latter is the logic of domestication…bend plants, animals, landscapes, and people into predictability.

    From here, we can see proactive aggression as control in action. On the ground, this isn’t abstract. Pull up wild plants and keep only the docile grains. Cull the fence-jumping sheep and reactive roosters…breed the calm ones. Raid nearby villages, enslave, execute dissenters, and reward compliance. This is proactive aggression. Planned, systemic, future-oriented control. It’s violence as policy.

    This makes me think of how Robert Kelly frames humanity’s cultural revolution. He proposes that symbolic thought makes it possible to imagine not just “what is,” but “what should be.” And “what should be” becomes a shared prior (model of the world) that groups can coordinate around…even if it doesn’t match reality. Once you can coordinate around a model, you can impose it, and enforce conformity within the group. To me, that’s proactive aggression (if we’re still calling it that) elevated…control not only of bodies now, but of perception and imagination.

    What disappears under a system like that? Well, for one, reactive aggression clearly becomes intolerable. It represents autonomous feedback (an individual saying “no” in the moment). In control systems, that kind of unpredictable resistance is punished most severely. You know that. Slaves who rebel are killed. Chickens that cause problems are culled. Men who resist capture are killed first. The system slowly culls “reactors” and favors the predictable (those who update their selves rather than the system).

    This is what I see as the flattening of the civilizing process…the rock tumbler effect. Proactive aggression is the abrasive force that flattens everything…landscapes, genetic diversity, behavioral variation, etc., etc. Reactive aggression is just one of the first “edges” to be ground away. A byproduct of selecting for proactive control. A footprint of the real selection pressure. And what remains is a domesticated phenotype…more compliant, less volatile, more predictable.

    Not convinced? Try this experiment.

    Write these two hypotheses out on a sheet of paper:

    1. “Coalitions punish hot-heads -> reactive aggression selected against -> cooperative, domesticated humans emerge.”
    2. “Coalitions punish (in- and out-group) resistors to group control -> resistance (often expressed as reactive aggression…rebellion, resistance to domination) selected against -> compliant, predictable humans emerge.”

    Now read as many history books as you can, testing these as you go. Take notes.

    Only one of these hypotheses explains why proactive aggression thrives where reactive aggression doesn’t. There is no paradox. Proactive aggression isn’t punished, because it aligns with group objectives, and what disappears isn’t “bad tempers” but unmanaged defiance. Resistors are killed. Compliant captives are taken. Rebels are executed. Compliant laborers survive.

    This is selection against the unpredictable expression of autonomy that disrupts control.

  • Different Maps of Reality

    Predictive processing has a theory for autism.

    First, a general review of predictive processing (PP) itself…

    According to PP, my brain is always predicting what’s about to happen and what input means. These predictions are called priors (I think of all the priors in my brain as my model of reality). The data coming in via my senses is compared against priors. If it fits (my prediction is correct), my perception feels smooth (I see this as that “autopilot” mode that autistic people envy neurotypical people for in social situations). If it doesn’t fit, I experience a prediction error…and my brain will attempt to update my priors so that I don’t experience that error in the future (or update the world so that it fits my model of reality…but we’ll leave that for now). A prediction error is experienced as discomfort, or pain, or frustration, anger, etc…

    Essentially, the brain has a map of reality that it navigates the world with. Or rather, it navigates the map (the brain is trapped in that dark cavity I call my skull), and updates the map only when incoming data causes an error, forcing it to (and when updating the world to fit the map isn’t an option).
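    The weighting PP describes can be sketched numerically. This is a toy model…the function name, the numbers, and the precision values are all mine, not from any PP source. A belief and a data point get blended in proportion to the precision (confidence) assigned to each:

```python
# Toy precision-weighted update (a Kalman-style blend).
# A prior belief and incoming data are combined in proportion to
# the precision (confidence) granted to each. All numbers are made up.

def update(prior, data, prior_precision, data_precision):
    total = prior_precision + data_precision
    posterior = (prior_precision * prior + data_precision * data) / total
    error = data - prior  # the "prediction error"
    return posterior, error

prior, data = 0.0, 10.0  # the map says 0, the senses say 10

# Strong priors: the map barely budges in the face of new data.
strong_posterior, _ = update(prior, data, prior_precision=9.0, data_precision=1.0)

# Weak priors: the data dominates and the map gets dragged toward it.
weak_posterior, _ = update(prior, data, prior_precision=1.0, data_precision=9.0)

print(strong_posterior)  # 1.0 -> mostly keeps the prior
print(weak_posterior)    # 9.0 -> mostly follows the data
```

    Same input, same raw error…the only difference is the weighting. That’s the whole lever PP is pulling.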

    Social priors are the part of the map labeled “other people” and “me, in the world of people”…predictions about other people and me-in-relation-to-other-people.

    More than any other part of life/reality, social life is full of ambiguous signals…tone of voice, facial expressions, body language, context-dependent words and actions, irony, “unspoken rules,” etc. The neurotypical person leans on strong social priors (predictions) like, “A smile means friendliness,” “If someone asks ‘how are you,’ they don’t want a literal health report,” and “This situation calls for deference/compliance.” I think of these as learned shortcuts. They smooth ambiguity so quickly that the (neurotypical) person doesn’t notice how much interpretation they’re really doing (think of a game engine rendering details in real time and high resolution).

    PP proposes that in “autistic” perception, these high-level social predictions are either less weighted (weaker…signals don’t get automatically collapsed into expected meanings…the brain says “hmm, not sure…keep checking the data”) or over-precise (too narrow…the brain locks a prediction too tightly on detail, so it flags even small deviations as error). Regardless of which it is, functionally the result is similar…more error signals, less smoothing.
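    The “over-precise” route can be sketched too. Again, a toy illustration with numbers and a threshold I invented…no claim that this is how the brain actually implements precision. A prior with a narrow tolerance band flags small deviations that a broader prior would smooth over:

```python
# Toy illustration of over-precise priors: an observation counts as a
# "flagged" prediction error when it falls outside the prior's
# tolerance band (here, one standard deviation of the prior).
# All values are invented for the sketch.

def flagged_errors(observations, expected, prior_sd):
    return [abs(x - expected) > prior_sd for x in observations]

obs = [4.8, 5.1, 5.4, 6.0, 5.2]
expected = 5.0

typical = flagged_errors(obs, expected, prior_sd=1.0)   # broad prior: smooths
precise = flagged_errors(obs, expected, prior_sd=0.15)  # over-precise prior

print(sum(typical))  # 0 -> every deviation gets smoothed away
print(sum(precise))  # 4 -> even small deviations register as error
```

    Same observations, same world…but the over-precise prior turns a quiet scene into a stream of error signals. That’s the “more error signals, less smoothing” outcome from either direction.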

    According to this explanation, as an autistic person, my experience of interacting with other people isn’t the “typical” experience. I need more explicit clarification (because cues don’t self-resolve). Social situations feel volatile or unpredictable to me (because I’m tracking details others are smoothing over). And I need extra cognitive effort to “keep up,” because I’m building my model of reality more deliberately…less automatically.

    Imagine me around a campfire with a few (neurotypical) people. A rustling sound comes from a nearby bush and attracts everyone’s attention. One person says something like, “It’s just the wind.” In short order, everyone returns to whatever they were doing. The person who spoke has a strong prior (prediction) that the world is stable and there’s no danger to worry over. And everyone there (other than me) has a strong prior that someone saying “It’s just the wind” means it really is just the wind. A sort of bias where, out of all the data coming in, the data coming from other people is prized most highly. My response is different. I say something like, “It could be the wind, but it could be something else.” I have a weaker reliance on predictions…I weigh sensory input more highly than my fellow campers.

    You can see how weaker social priors (predictions/map of social reality) would make it hard for me to collapse ambiguous social input into the “expected” consensus meaning. I see more of the true uncertainty of the world…I don’t have it smoothed away by some automatic internal mechanism.

    That’s the gist of it. And I generally agree with this description of how I experience other people. And as an explanation for the frustrations I experience socially, it certainly feels spot on.

    But is it only social priors that are weaker (or overly precise) for me? Or is it priors in general? Depending on who you ask in predictive coding circles, the answer is different.

    Early accounts (e.g. Pellicano & Burr, 2012) suggested that autistic people have weaker priors across the board, not just social ones. That would mean that I rely less on prediction in all areas of life. Their evidence for this included the fact that autistic people are less fooled by certain visual illusions (like the hollow mask illusion or context-driven size illusions) and the whole savant piece (enhanced perception of detail and irregularities in non-social patterns…sounds, textures, math, mechanics). The story told was that my “weak priors” are my global style of perception…my world is simply less smoothed…more data-driven.

    Later accounts (e.g. Van de Cruys, Lawson, Friston) argue it’s not globally weaker, but misallocated precision. In other words, overly precise priors at one level (rigid routines, intense interests), too little precision at another (ambiguous social inference), and sometimes overly precise weighting on prediction errors (making every mismatch I feel seem urgent). This would explain why some autistic people are incredibly good at pattern-based forecasting (weather models, coding, chess) but struggle in fluid, implicit social contexts.

    Both camps agree that social priors are a “special case,” I think. Social environments rely heavily on very fuzzy priors like shared norms and implicit meanings. In the predictive coding model, those are exactly the kinds of priors autistic perception would likely either underweight (“I need to check the data”) or overweight in detail (“I expect exactly X, so any deviation throws me”). In other words, social priors are where my difference shows up most glaringly…but the difference itself might apply everywhere.

    Ok, the neurotypical neuroscientists have had their turn. Pass me the mike now, please.

    My first reaction to all this…well, anger. And perplexity. I simply don’t understand how, given all the forms of data you can base your map of reality on, that you would choose….other people? With their confusion, and duplicity, and moving moralities, and drives, and, and, and….Why? Why that?

    It’s not that I don’t “get” what social priors are. They’re about shared assumptions that keep groups coordinated. What counts as polite, what role you’re supposed to play, what’s “normal” in a given social setting, which explanations everyone in the group nods along to, even if they’re flimsy….that all makes sense to me so long as there is a group goal. As a sort of necessary evil in service of achieving an objective that requires group coordination…I get it. But as a way to live your life? Intentionally mapping your reality on the fuzziest, most contingent, and most contradictory signposts you can find? That confuses the fuck out of me.

    Let’s take a closer look at social priors (the part of my reality map that has to do with other people). They’re arbitrary…different from group to group, era to era. They constantly shift…flipping overnight (today’s taboo is tomorrow’s norm). They’re completely detached from ecology…attached instead to abstract concepts like appearance, hierarchy, interpersonal signals. And they leave the door open 24/7 to gaslighting…if everyone else insists the emperor has clothes, the “consensus” is real enough to punish me even if it’s bullshit.

    Why not map your reality on data that makes sense? Ecological priors like gravity, cycles of day/night, seasons, and biology are stable, and are directly tethered to reality (what happens predictably and significantly for survival). Embodied priors…the way your body predicts balance, hunger, threat…are constantly and deeply tested through feedback loops that are largely transparent. Social priors? The least tethered? The most prone to drift and self-reference? Really? Really?

    On a theoretical level, I try to understand why neurotypicals lean exclusively on this messiest area of the map. Social priors smooth uncertainty, which must feel good. They also enable fast coordination in groups…and those neurotypicals sure like being in groups. There, they’re rewarded…“getting along” matters more (socially, professionally) than being right. But treating them as reality itself? To the point where questioning them is seen as dysfunction rather than discernment. Jesus Christ.

    I know this is the double empathy problem at work. I’ll never be able to truly empathize with the neurotypical condition. And if I had to state my position in relation to theirs, as dispassionately as possible, I’d simply say that I’d rather anchor my sense of the world in ecological and embodied feedback than in fragile, shifting group models. And that this position of mine (it’s not a choice) is not dysfunction unless group coordination is forced upon me as my only means of survival. That my position is arguably closer to “baseline life” than the civilizational overlay.

  • ramble (predictive coding, autism, simulation)

    I have predictive coding (à la Clark, Friston, Vermeulen), autism, Schmachtenberger, Baudrillard, Hoffman, and some recent experiences tumbling about in my brain, desperately looking for synthesis. I feel threads that are impossible to ignore.

    Quick recap of predictive coding and autism.

    In predictive coding models of the brain, perception is made up of prediction and sensory input. “Normal” brains lean heavily on priors (models of what the world usually is) and only update when error signals are strong. Most accounts of autism describe either weak priors (less predictive or top-down bias…meaning each sensory signal hits with more raw force), or overly precise priors (my predictive model is too narrow or rigid…meaning any deviation is a kind of error for me). Either way, in practice, the world feels less stabilized by consensus for me. I don’t get to lean on the stories most people use to blur and smooth reality.

    While listening to a recent interview with Daniel Schmachtenberger, I was reminded that all models of reality are simplifications…they leave things out. Neurotypical perception is itself a model, with a heavy filtering function…a consensus map. From this perspective, if my priors are weaker (or overly precise)…I’m closer to a raw reality where models break down. For me, the “gaps” are almost always visible.

    From there, it’s an easy jump to Baudrillard’s warning, that modern societies live inside simulations (self-referential systems of signs, detached from reality). If I feel derealization…less of a “solid self” (I do)…that’s probably simply what it’s like to live in a symbolic order but not buy into it fully. The double empathy problem is essentially me feeling the seams of a simulation that others inhabit…seamlessly.

    This “self” itself is a model. It’s a predictive story your brain tells to stabilize your experience. And because my priors about selfhood are weaker (or less “sticky”), my sense of “I” feels fragile, intermittent, unreal, etc. In this fucked up place that the majority of people call “reality” (where everyone’s popping anti-depressants and obliterating the planet), my experience looks like “derealization” or “depersonalization,” but to me, it’s a kind of clarity…a deep unignorable recognition that the self is a construct. What becomes a deficit in this place (“I can’t hold reality/self together the way others do”) is a form of direct contact with the limits of models of reality (vs reality itself).

    Which leads me to a burning question I’ve had for a while now: What are the chances that predictive coding’s distinction between “normal” and “autistic” actually points to the neurotypical configuration being one of priors/assumptions about the world that (in contrast to a healthy adaptive baseline) are simply imprecise (overfitted to some inaccurate model of reality)?

    Neurotypical perception leans more on shared, top-down priors (context, expectations, social norms, etc.). That makes perception stable and efficient but extremely bias-prone. (Studies show that neurotypicals are more susceptible to visual illusions than autistic groups.)

    Like I mentioned before, autistic perception has been described as weaker/less precise priors (Pellicano & Burr), or over-precise prediction errors and simply different precision allocation (Van de Cruys’s HIPPEA; Friston/Lawson’s “aberrant precision”). Functionally, both mean less smoothing by priors and more “bottom-up” detail, with (what they say) are costs for generalization and volatile environments. Their conclusion is that autistic people “overestimate” environmental volatility (we update too readily), while NTs are able to charge through with their predictive models intact.

    And I have a real problem with this interpretation that I’ll get to shortly. But first, let’s explore the trajectory of the sort of consensus reality that I consider most neurotypical people to be living in….that set of strong priors/assumptions about the world that civilization shares. Because I have a hunch that its divergence from reality is an inevitable feature, not some sort of “bug” to be corrected for.

    If we treat civilization itself as a kind of giant predictive-coding system, its “life story” looks eerily like the brain’s, where the priors are consensus itself.

    I see consensus reality as a stack of expectations or assumptions about the world shared by enough people to make coordination possible. Religion, law, money, the idea of a “nation”…these are all hyperpriors (assumptions so deep they’re almost never questioned). They make the world legible and predictable (people can trust a coin or a contract or a census).

    And just like in individual perception, civilization’s priors aren’t about truth…they’re about usefulness for coordination. A shared model works best when it ignores inconvenient detail and compresses messy reality. Divergence from reality is a feature…the system actually becomes stronger by denying nuance. For example, “grain is food” (simple, measurable, taxable). But reality is actually biodiversity, shifting ecologies, seed autonomy, etc. See how that works?

    This divergence from reality deepens in a few ways, the most obvious being self-reinforcement. Once a model is institutionalized, it defends itself with laws, armies, and propaganda. It also suppresses signals…inputs that contradict priors are treated as “prediction errors” to be minimized, not explored. And, back to Baudrillard, the model (that is civilization) refers increasingly to itself rather than to external reality (markets predicting markets, laws referencing laws, etc.). The longer it runs, the more this consensus model fine-tunes and solidifies its own reality.

    From a civilizational perspective, divergence from reality is coherence. If everyone buys into the strong priors (money is real, my country is legitimate, my god demands I go to church), coordination scales up and up. The obvious cost is that the model loses contact with ecological and biological feedback…the “ground truth.” Collapse shows up when prediction errors (ecological crises, famines, revolts) overwhelm the significant smoothing power of the priors.

    The bottom line is that civilization’s consensus model requires detachment to function. Life-as-it-is needs to be turned into life-as-the-system-says-it-is. In predictive coding terms, civ runs on priors so heavy they no longer update. In Baudrillard’s terms, simulation replaces reality. And in my own lived experience (as a “neurodivergent” person), derealization isn’t some kind of personal glitch…it’s what the whole system is doing, scaled up.

    This whole thing gets even more interesting when I think more deeply about the term “consensus.” It implies something everyone’s contributed to, doesn’t it? But that clearly isn’t the case. What’s actually happening is closer to consent under conditions…most people adopt civilization’s model because rejecting it carries penalties (exile, poverty, prison, ridicule). It seems to me that the “consensus” is really an agreement to suspend disbelief and act as if the shared model is real, regardless of who authored it.

    Whose model is it, then? It depends when and where you’re living. It could be state elites…kings, priests, and bureaucrats who historically defined categories like grain tallies, borders, and calendars. It could be economic elites…merchants, corporations, and financiers who shape models like money, markets, and “growth.” It could be cultural elites…professors, media, and educators who maintain symbolic order (morality, legitimacy, and values). I don’t think it’s contentious to say that whatever the model, it reflects the interests of those with the leverage to universalize their interpretation. Everyone else gets folded into it as “participants,” but not authors.

    The commonly accepted narrative is that homo sapiens won out over other human species due to our ability to coordinate, and that nowhere is this coordination more evident than in the wondrous achievement we call Civilization. But why isn’t anyone asking the obvious question…coordination toward whose ends? Because coordination certainly isn’t “for humanity” in some neutral sense…it’s for the ends of those who set the priors. Grain-based states are coordinated bodies for taxation, armies, and monuments. Modern market democracies are coordinated bodies for consumption, productivity, and growth. The “consensus” isn’t valuable because it’s true…it’s valuable because it directs billions of bodies toward a goal profitable or stabilizing for a ruling class.

    Now we come up against the double bind of participation (as an autistic person, I’m intimately familiar with double binds). You may not have authored civilization’s model, but you can’t opt out without huge costs. Not participating is madness or heresy. I’m a dissenter and so I’m “out of touch with reality.” The pathologization of neurodivergent mismatch translates to me as: “You’re wrong. The consensus is reality.” To which I say, not only is consensus reality not reality…it isn’t fucking consensus, either. It’s a cheap trick….the imposition of someone else’s priors as if they were everyone’s. Calling it consensus simply disguises the extraction of coordination.

    I want to talk now about Vermeulen’s (and others’) conclusion that the weaker (or overly precise) priors that characterize autism come at the cost of not being able to navigate volatile environments.

    To me, this is just another example of the decontextualization rampant in psychology and related fields (I see it all grounded in a sort of captivity science). And, in this case, the context that’s not being accounted for is huge. I think Vermeulen and others falsely equate volatile SOCIAL environments and volatile environments in general.

    It’s been my experience (and that of others) that autistic people perform quite well in real crisis situations. When social smoothing has no real value (or can be a detriment, even). But Vermeulen seems to think that my ability to function is impaired in the face of volatility (he makes some stupid joke about how overthinking is the last thing you want to do if you cross paths with a bear…ridiculous). I find the argument spurious and context-blind (ironic, considering he defines autism itself as context blindness).

    The argument is as follows:

    Because autistic perception is characterized by weaker or overly precise priors, each signal is taken “too seriously” (less smoothing from context). In a volatile environment (fast-changing, noisy, unpredictable), this supposedly leads to overwhelm, slower decisions, or less stability. Therefore, autistic priors are maladaptive in volatility. B-U-L-L-S-H-I-T.

    Let’s pull the curtain back on Vermeulen’s hidden assumption.

    When researchers say “volatile environments,” they clearly mean volatile social environments. All you have to do is look at the nature of the studies, where success depends on rapid uptake of others’ intentions, ambiguous cues, unspoken norms, etc. In that kind of volatility, having weaker social priors (not automatically filling in the “shared model”) is costly. But it’s a category error to generalize that to volatility in all domains.

    In environments characterized by social volatility, strong priors (the ones neurotypicals rely on) smooth out the noise and let them act fluidly. I’ll grant you that. But what the fuck about ecological volatility? Physical volatility? Hello?!? Sudden threats, immediate danger, technical breakdowns, real-world crises…where over-reliance on priors blinds you to what’s happening (“This can’t be happening!!”, denial, social posturing). Here, weaker (or overly precise) priors are a fidelity to incoming data and clearly confer an advantage.

  • What Wrangham Gets Wrong About Human Domestication

    (Hint: 900,000 cows are slaughtered daily. They shit where they eat and wouldn’t have a hope in hell of surviving without human care. But they’re nice.)

    In The Goodness Paradox, Richard Wrangham argues that the main selection pressure in human (self-)domestication was the weeding out of reactive aggression. It’s a nice story that makes the net gain of human domestication harder to argue against. But, to me, it’s clear that selection against reactivity in general (or unpredictability) is the bigger, truer story, of which the reduction of “reactive aggression” is simply the most visible (and PR-friendly) chapter. Taken as a whole, and across species, the domestication package is clearly a general downshift in arousal/reactivity with a re-tuning of social expectations…not just the loss of hair-trigger violence.

    Let’s look at domestication again while entertaining this broader, and inconveniently less moralistic, selection pressure (duller, rather than nicer, humans).

    For one thing, physiology moves first…and it’s general. In classic domestication lines (e.g. Belyaev’s foxes), selection for tameness blunts the HPA axis and stress hormones overall…fewer and fewer cortisol spikes, calmer baselines. That’s not “anti-aggression” specifically; it’s lower stress reactivity across contexts. Brain monoamines shift too (e.g. higher serotonin). That’s a whole-system calm that would make any behavior less jumpy (including but not limited to aggression).

    The developmental mechanism also points to a broader retune. The “domestication syndrome” is plausibly tied to mild neural-crest hypofunction, a developmental lever that touches pigmentation, craniofacial shape, adrenal medulla, and stress circuitry. In humans, BAZ1B (a neural-crest regulator) is linked to the “modern” face and is part of the self-domestication story. None of that is news…but if you tweak this lever, you clearly soften the whole reactivity profile…not just aggression. And my guess is that whoever’s fucking with the lever has his eye on the “compliance” dial more than any other.

    Comparative signals align, too. Genomic work finds overlaps between human selective sweeps and domestication-candidate genes across species…showing a syndrome-level process rather than some sort of single behavioral knob. Craniofacial “feminization” over time in H. sapiens fits reduced androgenic/reactive profiles, too.

    Domesticated behavior tracks a “global calm.” Domesticated animals are less fearful, less erratic, and more socially tolerant than their wild counterparts. Your dog’s tendency to “look back” to you in unsolvable tasks is a manifestation of that…when arousal is lower and social cues are trusted, help-seeking beats reactive persistence. That’s a broad predictability play (that has nothing to do with aggression).

    Obviously, Wrangham’s focus still matters. His key point, the decoupling of reactive vs proactive aggression in humans (we got tamer in the heat-of-the-moment sense, but remained capable of planned, coalitionary violence), is real and important to explain. It’s part of the story, but not the whole story. As general reactivity is reduced, strategic (planned) aggression is preserved…because strategic aggression isn’t a startle reflex; it rides on executive control and group coordination. But selection against reactive aggression isn’t the driver in this story. It’s just one behavioral readout of a deeper arousal/volatility downshift. A nice part (maybe) of an otherwise quite shitty story (from life’s vantage point). The beef industry might point out how nice the cows are, but I don’t think even they would try to argue that “nice” is what it’s aiming for. Dull. Compliant. And so it goes with all domestication. There is an objective in the domestication process, and any and all traits that impede progress toward that objective are pruned. (adding “self-” to domestication when it comes to humans, while accurate in the sense that the domesticating agent was of the same species, gives it a voluntary flavor that has no evidence in history…the domestication of humans was driven by systemic enslavement and reproductive control just as it was for all domesticates)

    Why is it so important to me to find the driver of human domestication at all? Why not just start from the broadly-accepted premise that we are a domesticated species and go from there? Because I need to know what’s truly going on in the brain during this domestication process. How do we get to the brain we call “typical” now? What was it selected for? Was it selected for something broadly adaptive? Or is it more like runaway selection? An overfitting?

    To me, cognitively, domestication looks like a down-weighting of volatility and a reallocation of precision (in predictive-coding terms): brains with lower expected volatility (that have “the world is less jumpy” as a hyperprior…fewer LC-NE-style alarm bursts…a calmer autonomic tone), higher precision on social priors (human signals are treated as the most trustworthy ones…ecological “noise” gets less weight), and policy canalization (high confidence in proximity/compliance/help-seeking policies).

    I think that human self-domestication primarily targeted behavioral and physiological volatility (a population-level reduction in phasic arousal and unpredictability) of which lower reactive aggression is a salient subset. And that the result is down-tuned HPA/LC reactivity, strengthened social priors, and canalized, low-variance action policies. Think of what happened as some sort of reactivity pruning (where reactive aggression was one prominent branch that got lopped off).

    What is the domesticated brain? Zoomed out, it’s clearly an instrument that’s been made dull. One that exhibits blunted responses to non-social unpredictability (startle, sensory oddballs, metabolic stressors), not just to dominance threats. And anti-aggression alone doesn’t suppress those.

    If I’m reading the studies properly, there are signatures of what I’m talking about in stress-regulatory and neuromodulatory pathways (HPA, serotonin, vasopressin) and neural-crest development…not just androgenic or specifically aggression-linked loci. Recent multispecies work pointing at vasopressin receptors and neural-crest regulators certainly seems consistent with this.

    Wrangham’s story doesn’t account for lower intra-individual variance in exploratory/avoidant switches and faster convergence on socially scaffolded policies (like help-seeking) across types of tasks (anti-aggression predicts biggest effects only in conflict contexts). It doesn’t explain the psychotic consensus reality holding everyone in, as it rolls off a cliff.

    (In fact, I question how much of the reactive aggression branch got lopped off…surely, not nearly as much as we think. What self-domestication mostly did was gate when, where, and how the majority of people show reactivity. When accountability and real-world consequences are high, most people keep a lid on it. When consequences drop (anonymity, distance, no eye contact, no immediate cost), the lid starts to rattle…online, in cars, in fan mobs, in comment sections. I don’t think reactive aggression was bred out so much as trained into context…and how well you do in that context will largely determine the story you tell. Harvard professors are clearly doing quite well in the civilizational context and consequently have pretty stories to tell.)

  • The Predictive Brain: Autistic Edition (or Maybe the Model’s the Problem)

    There’s a theory in neuroscience called predictive processing.

    It says your brain is basically a prediction engine that’s constantly trying to guess what’s about to happen (so it can prepare for it). In other words, you don’t just react to the world…you predict it, moment by moment. The closer your model (of predictions) matches reality, the fewer surprises you get. Fewer surprises, less stress.

    The model applies to everything…light, sound, hunger, heat. But also to something far messier: people. From infancy, we start modeling the minds of those around us. “If I cry, will she come?” “If I smile, will he stay?” It doesn’t need to be conscious…it’s just the brain doing what it does (building a layered, generative model of how others behave, feel, and respond). Social expectations become part of the predictive model we surf through life on. (nod to Andy Clark’s Surfing Uncertainty)

    From the predictive processing perspective, autistic people aren’t blind to social cues. (That’s outdated bullshit.) But we weight them differently. Our brains don’t assign the same precision (the same level of trust) to social expectations as most people do. So we don’t build the same nice, tight models, make the same assumptions, or predict the same patterns.

    For example, I can read derision just fine. But I don’t use it to auto-correct my behavior unless it directly impacts something I care about. For better or for worse, my actions aren’t shaped by unspoken norms or group vibes…they’re shaped by what feels real and necessary in the moment.

    If you sat me down in front of Andy Clark or Karl Friston (smarty-pantses in the predictive processing world) they’d probably agree. I think. They’d tell me I’m treating social priors as low precision. That my brain doesn’t update its models based on subtle social feedback because it doesn’t trust those models enough to invest the effort. And that my supposed “motivation” is actually baked into the model itself (because prediction isn’t just about thinking, it’s about acting in accordance with what the brain expects will pay off).

    Ok. But something’s missing…something big. Context.

    Implicit in the predictive model is the idea that social priors are worth updating for. That most social environments are coherent, that modeling them is adaptive, and that aligning with them will yield good results.

    But what if they’re not? What if you turned on the news and saw that the world was….kind of going to absolute shit? And that, incomprehensibly, people seemed fine enough to let clearly preventable disasters simply unfold and run their course?

    What if the social signals you’re supposed to model are contradictory? What if they reward falsehood and punish honesty? What if they demand performance instead of coherence?

    In that case, is it still a failure to model social cues? Couldn’t it be a refusal to anchor your behavior to a bullshit system? A protest of the organism rather than a failure?

    Because from where I sit, if social information is incoherent, corrupt, or misaligned with ecological / biological reality, then assigning it low precision isn’t a bug…it’s a protective adaptation. Why would I burn metabolic energy predicting a system that specializes in gaslighting? Why would I track social expectation over reality? “Why do THEY? ” is the question I ask myself every day. (Just when I start to accept that people simply love the look of grass instead of nature, they go out and cut it….then just when I start to accept that people love the look of grass that is a uniform height (rather than actual grass)…they go out and cut it under clear skies when it’s 35 degrees, killing it…just when I start to accept that people are born with some sort of pathological compulsion to mow landscapes, they replace a portion of their yard with a pollinator garden…because enough of their neighbors did.)

    In predictive processing terms, maybe we (autistic people) are saying, “This part of the world isn’t trustworthy. I’m not investing in modeling it.” or just “I don’t trust the model you’re asking me to fit into.”

    Of course, saying that comes at a real cost to me. Exclusion, misunderstanding, misalignment. I can sit here all day telling you how principled my stand is…but that “stand” is clearly exhausting and has resulted in long-term adaptive disadvantages (in this place). Systems (“good” or “bad”) almost always punish non-modelers. But that doesn’t make me wrong. Reality is reality.

  • Social deficit? Or social defi…nitely-don’t-care?

    Don’t get all worked up. This is just me thinking out loud.

    I have problems with social settings. I really do. But I often find myself wondering if it’s less a deficit in social awareness, and more a different motivational structure (a different why, not a failed how). Let’s pull this apart.

    The standard narrative is that autistic people struggle to read social cues. But, hell, I do read them…especially the negative ones (derision, exclusion, mocking). What I don’t do is monitor them constantly as a way to regulate my behavior. Because I don’t think my behavior is rooted in alignment with other people…it’s rooted in functional or internal need.

    When I was a kid I got bullied a lot. A psychologist might say that I failed to perceive signals from the group that would have allowed me to integrate successfully…and that bullying is a sort of result of failed integration. But I’ve come to realize that it’s not that I failed to perceive the signals that led up to being punched in the face…it’s more like those signals weren’t previously relevant to my goals. I had a different value hierarchy, maybe.

    Your average neurotypical person is conditioned to constantly scan for social matching, conformity, “sameness” (gestures, interests, tones). They seek safety in blending in…self-protective group behavior built on the belief that sameness = acceptance.

    I don’t do that…thing. Not by choice, anyway. I act based on what makes sense to me in the moment…functionally or internally. I’m constantly baffled by people’s need to ‘check in’ with each other. I really don’t know what that’s all about. It seems an awfully wasteful use of limited energy considering what else you could be doing. But I digress. People seem to think I act “differently” to stand out. BUT I HATE STANDING OUT. I act…based on needs. Not social mirroring. And I guess it only becomes “wrong” to me once someone points it out (over a lifetime, of course, I become able to anticipate what others think is wrong and sort of shape my behavior according to some invisible and shifting standards that I wish I’d never become aware of).

    In any case, this confuses people. They think or say something like, “But you could tell we were uncomfortable!” Right, probably. But I didn’t prioritize your discomfort over my own need…or, it didn’t register with me as something needing immediate modification (until you named it, punished it, or laughed at it).

    This is where the mythology of “mindblindness” fails…I’m not blind at all. Think of me as being non-compliant with unspoken conformity protocols…until I’m forcibly reminded. Then I mask, try to adjust, do my best to match your shifting standards and needs…but it’s reactive, not internalized. Please hear me when I say: I don’t mask because I want to be the same (but don’t know how)…I mask because I’ve learned (the hard way) that you demand sameness.

    Let’s say I’m right about this. Let’s say that, as an autistic person, I don’t actually have a problem reading social cues at all…I simply don’t allocate any time or energy to the task because, on some fundamental level, the cost/reward ratio doesn’t add up for me.

    Then that would open the door to a radical reframing of how autism is interpreted within the predictive processing (PP) framework (which I’m a huge fan of).

    In the dominant PP interpretation (e.g. Pellicano & Burr, Friston, Van de Cruys, Clark), autism is characterized by:

    1. High precision prediction errors (Autistic brains assign more weight to sensory data (bottom-up input), and less weight to prior beliefs (top-down models)…which leads to a reduced ability to generalize, filter noise, or tolerate uncertainty.)
    2. Low tolerance for ambiguity (Unexpected outcomes cause larger error signals in the autistic brain, leading to discomfort, rigidity, or repetitive behavior.)
    3. Excessive demand for model updating (Because priors aren’t sticky enough (i.e. I leave my model of the world more open to adjustment to real-time data), everything feels novel, and the brain is constantly working to remodel the world.)

    From this lens, autism is seen as a kind of overactive reality-checking mechanism…hypersensitive to mismatch between prediction and perception.
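    The three claims above can be compressed into a toy model (my own sketch, not taken from any of the cited authors): a single precision-weighted belief update, where the relative weight of prior vs. sensory evidence determines whether a noisy world reads as stable or as endlessly novel.

```python
# Toy precision-weighted belief updating (illustrative only; the weights
# are invented to show the qualitative difference the PP story claims).

def update(prior: float, obs: float, pi_prior: float, pi_obs: float) -> float:
    """Precision-weighted average of prior belief and sensory evidence."""
    return (pi_prior * prior + pi_obs * obs) / (pi_prior + pi_obs)

def run(pi_prior: float, pi_obs: float, observations: list[float]) -> list[float]:
    belief = 0.0
    trace = []
    for obs in observations:
        belief = update(belief, obs, pi_prior, pi_obs)
        trace.append(round(belief, 3))
    return trace

noisy = [1.0, -1.0, 1.0, -1.0]  # contradictory sensory stream

# "Neurotypical" weighting in this story: strong prior, weak sensory
# precision. Belief barely moves...noise gets filtered, the model stays put.
stable = run(pi_prior=9.0, pi_obs=1.0, observations=noisy)

# "Autistic" weighting under the dominant PP story: weak prior, high
# sensory precision. Belief chases every data point...everything feels
# novel and demands remodeling.
volatile = run(pi_prior=1.0, pi_obs=9.0, observations=noisy)

print(stable)    # small oscillations around 0
print(volatile)  # large swings toward each new observation
```

The same contradictory input stream produces either calm filtering or constant remodeling, depending only on where the precision is assigned…which is exactly the lever the dominant interpretation claims is set differently in autism.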

    But back to me. What if I can perceive social cues, but don’t automatically adjust behavior to match, and only respond when the consequences are made explicit? Well, then maybe it isn’t about being overwhelmed by error. Maybe it’s about being uninterested in minimizing certain kinds of social prediction errors (until they become functionally relevant).

    In standard PP, error minimization is assumed to be globally prioritized (that’s my understanding of it). But what if I simply don’t care about being socially in sync unless it affects my access to something I need? So I don’t treat social mismatch as important prediction error?

    That would mean some sort of hierarchy of predictive concern. Maybe my brain isn’t trying to minimize all errors…only the ones that interfere with internal coherence or functional outcomes. Maybe social expectations only matter once they constrain resources, safety, or autonomy. That would mean autistic perception may not be about error overload, but about prioritization mismatches (neurotypical brain treats social alignment as a high-priority prediction task but autistic brain treats functional clarity, pattern integrity, or sensory truth as higher priorities).

    I’m almost definitely wrong…but if I’m right…if I’m right!!:

    Autistic predictive systems don’t globally overweight prediction errors. They assign selective precision to biologically or perceptually grounded domains (e.g. sensory input, moral consistency, physical logic)…and lower precision to social expectations unless those expectations become explicit and consequential (like a punch to the head or being fired from a job). A different optimization strategy…more ecological/biological than performative. And “severity” would be the slider on that scale.
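    Here’s a minimal sketch of what that selective-precision idea might look like (purely illustrative; the domains and numbers are my assumptions, not an established model): prediction errors are weighted per domain, and social errors carry near-zero precision until they’re made explicitly consequential.

```python
# Toy "hierarchy of predictive concern" (my own illustration). Errors are
# scaled by a per-domain precision; the social domain is nearly ignored
# until a consequence is made explicit (named, punished, laughed at).

PRECISION = {
    "sensory": 0.9,  # sensory truth: always high priority
    "moral": 0.8,    # moral consistency: high priority
    "social": 0.05,  # social alignment: ignored by default
}

def weighted_error(domain: str, raw_error: float,
                   consequential: bool = False) -> float:
    """Scale a raw prediction error by its domain precision."""
    precision = PRECISION[domain]
    if domain == "social" and consequential:
        precision = 0.9  # forcibly reminded: now it matters
    return precision * raw_error

# The same social mismatch barely registers...until someone names it.
ignored = weighted_error("social", raw_error=1.0)                        # 0.05
reactive = weighted_error("social", raw_error=1.0, consequential=True)   # 0.9
sensory = weighted_error("sensory", raw_error=1.0)                       # 0.9
```

Nothing here is “blind”: the social error is perceived at full magnitude…it simply isn’t weighted as worth minimizing until the cost/reward ratio flips.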

  • Human self-domestication, Pathological Demand Avoidance, and “self-control” walk into a bar…

    I’ve been circling something for a while now…trying to find the thread that runs through human self-domestication, self-control, and this term people throw around, PDA.

    I think it comes down to:

    Who (or what) is in control? and
    How do we decide what counts as a legitimate signal?

    Self-Domestication

    Over thousands of years, humans slowly became tamed. Depending on who you ask, they either tamed themselves (social pressure and mating preferences) or they tamed each other (slavery and control of reproduction). I’m of the latter opinion, but the point here isn’t the process, but its consequences…like less reactive aggression, more social tolerance, tighter symbolic governance, and the gradual internalization of rules. We stopped punching each other and started performing for each other.

    Domestication was physical (smaller faces, softer jaws, reduced sexual dimorphism), but the real shift was behavioral. We began outsourcing our regulation…from our gut/instinct to law and role. We made and obeyed rules…and over time became people who needed rules. You might say control became internalized (caged from the inside?).

    And I think this is where something like “self-control” shows up (and wearing a halo, no less).

    “Self-control.” Anyone with a brain should find that term suspicious. It splits the self in two: some sort of wild part that has to be restrained, and a righteous part that does the restraining? (what weird fucking animals we are) Honestly, I think it’s just a theological concept dressed up as psychology. And like most civilized “virtues,” it smells like bullshit once you sniff past the incense.

    Here’s what I think “self-control” really is: the cognitive costume of domestication.

    Think about it. It’s what supposedly lets us suppress emotion, delay gratification, comply with symbolic norms, and function in environments totally divorced from our biology…schools, offices, courtrooms, churches. Self-control sure as hell doesn’t mean living wisely…it’s about sitting still when your body says move, smiling when your nervous system screams no, and nodding along when everything inside says get the fuck out.

    In a natural system, “regulation” evolved to keep us alive (avoid cliffs, dodge snakes, read the tribe’s mood, etc.). But what is regulation in civilized systems? In modern society? It’s self-suppression in the name of some symbolic performance.

    Now enter “PDA” (Pathological Demand Avoidance). Or as I prefer to think of it…one of many glitches in the domestication software.

    Here’s the narrative: PDA is often seen in autistic and ADHD individuals. It’s marked by an intense resistance to demands (even “reasonable” ones) along with panic, shutdown, or rage. Notice the language: it’s “pathological” and it’s “avoidance.” Some smart people have suggested we change the P to “persistent,” and I think that’s a good start. But what about avoidance? Is resistance to control really “avoidance?” Defiance? Oppositional? I don’t think so. I think it’s a nervous system that reacts to control like poison…civilization-induced anaphylaxis.

    What if PDA is part of a broader biological resistance to domestication that still rattles the bars?

    Let’s go back to human self-domestication (which I’d argue is synonymous with the process we call “civilization”).

    Civilization built a) systems and b) people who fit them. It selected for internal submission…people who could smile through exploitation, obey without understanding, perform without protest. And over time, the organism (us) adapted to control (because it survived).

    Great…it’s adaptive then….what’s the problem?

    The problem is that not all control is created equal.

    In living systems, control is ecological. Emergent. Immediate. You overhunt, food disappears. You act like a jerk, the group boots you. You walk through stinging nettles to take a pee…you learn. The feedback is timely, proportionate, local, and meaningful. And it regulates your behavior in ways that support life.

    Compare that to the feedback in our civilized systems.

    You break a dress code and lose your job.
    You poison a river and get a bonus.
    You speak truth and get punished.
    You conform and get promoted.

    This isn’t feedback…it’s symbolic distortion (bullshit mostly). Consequences are delayed, inverted, or entirely fake. We no longer act based on what is…we act based on what signals approval.

    What are you up to today? Are you going to school to pass tests that mean fuck all? Filing a report that no one will read? Obeying rules that no one really understands? Working a job that’s killing you…because your health insurance depends on it?

    It’s control as abstraction / simulation and it severs feedback from function. And when a system loses real feedback, it can’t adapt anymore. It can’t course-correct. It can only punish, delay, distract. (This is how collapse happens.)

    I’m rambly and angry today…

    PDA isn’t rebellion for its own sake. It isn’t resistance to structure. It’s resistance to unlinked structure…to rules with no grounding, demands with no meaning, performances with no reality beneath them. To papers with numbers on them. To digital clocks and alarms and metrics and schedules….

    It’s an involuntary response to any sort of control that bypasses sense and body and consequence.

    And yeah, I get it…some people will say I’m romanticizing resistance or prehistory…that “nature controls too,” and I’m just pissed off at society.

    Maybe. But have you ever asked yourself what the purpose of the control is? Or what the quality of it is?

    Does it really keep you alive…or does it keep you in line? I’d say ecological feedback is the only feedback that teaches you anything real.

    When we resist a meaningless demand, we’re not being defiant…we’re being awake (even if we don’t know it). We feel some distortion and some lie behind the request. We’re not okay performing a role that destroys something real. To me, that’s a sign that some part of the original organism (human) still exists and resists and still rings the alarm when the world goes insane.

  • Domestication and the Warping of Sexual Dimorphism

    Here’s something we don’t talk about enough: Civilization didn’t just domesticate us. It domesticated us differently, depending on whether we were born male or female.

    In our pre-domesticated state, humans showed moderate sexual dimorphism (differences between the sexes in size, shape, and behavior). Men tended to be larger, stronger, and more prone to take risks, compete, and throw punches over territory or mates. Women carried broader hips for childbirth and bore the energetic costs of gestation, nursing, and food gathering. Nothing too extreme. It was a functional division…not a caste system.

    Then came the leash.

    If you want to understand what happened next, look at what domestication does to animals across the board…the males change more.

    You get smaller bodies, smaller brains, softer jaws, lower testosterone, and a whole lot more docility. You don’t need to fight other males for access anymore…you just need to behave. Domestication tamps down that volatile, high-testosterone edge and replaces it with social compliance. The women change too, but less dramatically. Domestication is hardly an equal-opportunity employer.

    What happens when this process is scaled up across hundreds of generations of humans?

    Let’s take one of my favorite little detours: the Y-chromosome bottleneck…an evolutionary funnel that occurred around 7,000 to 5,000 years ago. Despite the population growing, genetic evidence shows that only a tiny handful of men were passing on their genes (think roughly 5 out of every 100 men). Why? Because systems of coercion (slavery, war, patriarchy) turned reproduction into a rigged game. And those systems selected hard for one thing: control.

    Control doesn’t love testosterone. It doesn’t want unpredictability, brute force, or guys who flip tables when they lose status. It wants compliant, trainable males who can navigate symbolic ladders, defer to hierarchies, and follow rules. Over time, the male phenotype got reshaped: smaller, less aggressive, more socially performative. Instead of fighting for mates, men competed for power within abstract systems (religion, wealth, reputation).

    Women didn’t experience this reproductive bottleneck, and therefore weren’t domesticated in a biological sense, the way men were. At least nowhere near the same degree. But they were domesticated culturally. Their roles were dictated by ideological control…veils, chastity cults, arranged marriages, inheritance laws, and lineage games. Woman as symbol. Woman as vessel. Woman as territory to be defended and exchanged. Arguably, as men became more civilized, women were controlled ever more tightly (as was anything men saw as a “resource”).

    And so sexual dimorphism got scrambled…intensified in the weirdest way possible. Physical differences shrank but role differences widened incredibly (differences that we still take for granted and fail to associate with domestication). Men became public actors, enforcers of systems they didn’t design. Women became private property, repositories of symbolic purity and reproductive value. Both became performative shells…flattened into scripts civilization could use.

    Now forget the anthropology textbooks for a second. This process we’re talking about…what’s happening on a psychological level? What do these changes mean? How do they feel? How do people begin to experience life differently?

    Testosterone in utero shapes everything from brain lateralization to threat response. A civilizing system selecting against reactivity (for tameness) is selecting against certain kinds of minds…minds that question, that bristle, that break rules when rules break people. And so, over generations, you get men who are more verbal, more deferent, more emotionally masked. And because we live in the end product of that, we call it “progress”…as if there were a master plan to arrive at us, and…here we are! But you first need to acknowledge (at least) that there was no such plan, and that tameness was never anyone’s goal, it was simply something that the system rewarded. If you acknowledge that, we can have a conversation.

    And though women may not have been suppressed biochemically…they were certainly suppressed. Their suppression was the visible one. Mythological. Ideological. Institutional. They don’t need to be reshaped from the inside out when they can be controlled from birth to death by symbols, stories, and ceremonies.

    “Civilization made us peaceful.”

    “Civilization turned brutish men into cooperative citizens.”

    Right. These are nice Matrix-y narratives. Fairy tales. Statements that have just enough truth to squeak by as overarching explanations.

    But what did civilization do? What was the intention? What was it trying to do? (and still trying) It neutered rebellion. It privatized violence. It engineered predictable humans. Manageable ones. And because we are those humans, we call the end product “better,” and the process itself, “progress.” Against all evidence, we insist that life in civilized systems is happier, healthier, safer, and sustainable. Insanity. An insanity made possible by the changes made to us by domestication. By the civilizing process. It bred a species capable of living in complete contradiction to the signals around it.

    Docile males and constrained females. All marching toward a cliff’s edge to the beat of someone else’s drum. Marching peacefully. Unless they’re dropping nuclear bombs on each other. Or gassing each other. Or exterminating every other species. Or poisoning air, water, and food. Nice men and women.

  • So what is “neurodivergence,” really?

    We know it isn’t a disorder.

    Based on everything we know about human self-domestication, it’s hard to argue with the theory that neurodivergence is a retention of traits that were less attenuated by domestication…preserved in pockets where selection for tameness (compliance, suppression, abstraction) was weaker or more variable. And that during times of civilizational incoherence (when systems break down, contradictions multiply, symbolic structures fail), less “domesticated” people seem to appear in greater numbers (despite always being there), or become more visible because the gap between civilization and reality widens, or finally start to make sense, because their traits are adaptive in collapse.

    Let’s build this…

    Domestication selects for neural crest attenuation (compliance, docility, symbolic fluency, sensory tolerance).

    But not all populations or individuals experienced this equally (geographic, cultural, environmental diversity produced pockets of lower attenuation…these groups retained more feedback sensitivity…emotional reactivity, moral alarm, sensory intensity, literalism).

    Civilization pathologizes these traits (labels them as autism, oppositional defiance, “hyper-sensitivity,” etc.).

    But during periods of systemic incoherence or collapse, these individuals become more noticeable. Their “maladaptive” traits now map reality more accurately. They begin to show up in number…not because they’re new, but because the system’s illusions are failing.

    Fast-forward to 2025, and you have an apparent “epidemic” of neurodivergence.

    The explanation seems simple to me. We have greater exposure to feedback-inverted environments, a reduced ability of symbolic systems to contain contradiction (literal minds become more visible), more diagnostic categories and more surveillance (capturing traits that were overlooked), and a spike in environmental toxicity and noise (which dysregulates people with low attenuation).

    We don’t fit civilization because we weren’t (as) shaped by its full domestication loop.

    Why the hell is this so controversial or offensive? Clearly, some dog breeds retain more wolf-like traits. Clearly, some animals resist captivity better than others. And clearly, some humans retain more ancestral (feedback-sensitive) traits. Why? There’s only one explanation…and it’s the same one that explains why any “minority” trait persists. Their lineages were less selected for it (tameness), or more recently disrupted from (feedback-rich) contexts.

    For fuck’s sake, neurodivergent traits aren’t “new conditions.” They’re old configurations that make sense, especially in systems that don’t.

    So, what’s next?

    In evolutionary biology, we have to challenge the assumption that domestication is purely beneficial or benign. We have to reinterpret human evolution not as progressive refinement, but as selection for attenuation. We have to connect neurodivergent traits to ancestral or undomesticated configurations (if you insist). And we have to invoke runaway selection, neural crest theory, and feedback-driven adaptation when we do it.

    In neuroscience and developmental biology, we have to leverage the neural crest hypothesis to explain multi-trait shifts in domesticated species. We have to run with the theory that neurotypical traits are a developmental cascade triggered by early suppression of feedback responsiveness. And we have to embrace the fact that what we classify as “autism” or “ADHD” probably reflects less attenuation of limbic, sensory, or integrative functions.

    In anthropology and archaeology, we have to reframe the civilizing process not as moral evolution, but as feedback severance and systemic control. Otherwise, we’ll continue to idealize it and our endpoint will be collapse. Again. And again. We have to challenge the dominant narrative of the “agricultural revolution” and the idea that domestication was progress. We have to recognize the fact that cultural and cognitive diversity in prehistory was shaped by differential exposure to domestication pressure.

    In psychiatry and psychology (it’s hard to be nice here), we need to reinterpret diagnostic categories as misread adaptive traits in maladaptive systems. We need to frame neurodivergence as a mismatch with an incoherent system, not as dysfunction. We need to challenge (or just burn) assumptions about “normalcy” and “functionality” in the DSM framework. And we need to wipe the slate clean and open the floor to all questions regarding moral injury, masking, and performance pathology.

    In systems theory and cybernetics, we need to look at feedback inversion as the main civilizational process. We need to apply runaway selection and closed-loop dysfunction to human cognition and culture (as painful as that will be). And we need to define neurodivergent distress as diagnostic error signals in failing systems.

    In cognitive science (and philosophy of mind), we need to challenge predictive coding’s assumption that accuracy is the goal…it needs to be acknowledged that civilization selects for predictive stability over truth. We need to demonstrate the link between literalism, feedback sensitivity, and uncompromised model updating. And, come on, we need to admit that what we call “neurodivergent” cognition is closer to epistemological integrity (reality).

    In collapse studies/political sciences, we need to recognize that what we call “civilization” consistently suppresses the very traits that can correct its course. We need to see that collapse isn’t an anomaly, but the endpoint of systemic feedback suppression. And we need to say this: “Neurodivergent people are early responders in this collapsing feedback loop we find ourselves in.”

  • The Domesticated vs. The Wild

    Let’s have some fun. Imagine you’re an alien scientist, looking at domesticated humans and animals and their wild counterparts. You have no historical context…just the before-and-after. And your objective is to figure out what kind of selective pressure would explain the shift.

    You look at physical changes and note significant brain shrinkage and facial neoteny. You look at behavioral changes and note reduced reactivity (including reactive aggression) and increased compliance. You look at neurological changes and note less vigilance and more dependence. And you look at cognitive changes and note a greater tolerance for contradiction or command. Now you need to reverse-engineer the pressure that accounts for those changes.

    You’d conclude that attenuation was being rewarded not for survival, but for something like a tolerance of constraint. Reduced reactivity to imposed conditions that would normally trigger avoidance, protest, flight, or rupture.

    In domesticated (civilized) animals and people, it’s clear that attenuation is being rewarded for enabling them to do certain things. Namely, remain in proximity to unpredictable others, function under external control, inhibit instinctual responses to pain, crowding, or contradiction, and perform behaviors for social approval or symbolic reward…not direct need fulfillment.

    What if you were pressed to take a shot at describing the environment that produced such a pressure?

    If you had no cultural context and just observed the shift, you’d infer something like the following: a system that imposes artificial constraints, limits autonomy, suppresses immediate feedback, and rewards non-disruption. A system that rewards animals that don’t bolt at loud noises, humans who don’t resist moral contradiction, and minds that prioritize external signals (orders, rules, appearances) over internal ones (intuition, emotion, sensory experience). One that filters out traits that protest, question, disrupt, flee, or grieve.

    Your hypothesis might be something like, “Attenuation was being selected for to enable life inside an imposed system that contradicts natural feedback.” Of course, that’s the very definition of captivity, domestication…civilization.

    Now, you’re handed the conventional narrative. The history and anthropology books. The studies. You’d feel validated somewhat as you read the theory of human self-domestication…a process that “weeded out aggression” in favor of cooperation, social harmony, and prosocial behavior. But you’d also feel something was off. That this framing is deeply incomplete (and dangerously flattening). Because there’s no mention of the actual trade-offs.

    Let’s look at the conventional framing of human (self) domestication and see what it gets right.

    Anthropologists and evolutionary psychologists argue that early humans began to select against reactive aggression, especially in small bands where group (coalitionary) punishment could be used to ostracize or kill bullies. Over time, this likely contributed to facial feminization, reduced sexual dimorphism (differences between the sexes), and more juvenile (neotenous) behavior…hallmarks of “domestication syndrome.” Also, a reduction in testosterone-linked traits, stress-reactivity, and impulsivity…which likely made groups more stable/cohesive.

    What’s this framing missing?

    For one, I think it confuses (or leads people to confuse) submission with peace. Just because someone isn’t fighting back doesn’t mean the system is just. A domesticated animal isn’t necessarily peaceful…it’s conditioned or selected not to protest. Likewise, a “civilized” human isn’t necessarily cooperative…they’re trained to suppress resistance. In other words, to the extent that we eliminated (reactive) aggression…we eliminated resistance to coercion.

    And it fails to distinguish between types of aggression. Reactive aggression (fight-or-flight, self-defense, boundary enforcement) was suppressed. Moral aggression (anger in response to injustice, betrayal, or cruelty) was pathologized (too sensitive or oppositional). But instrumental aggression (cold, planned, goal-oriented violence) is clearly rewarded in civilization. To the extent that it “succeeds,” it always has been.

    And the conventional explanation for human self-domestication doesn’t seem interested in what was lost. It treats the process as a moral victory. But I don’t think it was “bad behavior” that got weeded out…it was the ability to react honestly to harm. Domestication selected for attenuated perception, emotional buffering, and following symbolic rules…not any kind of inner peace. It reduced reactive violence while it reduced truthful response to violence. And I think the intention (of those driving the domestication process) was in the latter, with the former being largely inadvertent.

    Because we know that selecting for one behavioral trait (like tameness or compliance) cascades into structural, cognitive, sensory, and emotional changes. We know this. Traits aren’t modular. They’re entangled…especially when they involve the neural crest.

    The neural crest hypothesis of domestication (Wilkins, Wrangham, and Fitch, 2014) suggests that domestication syndrome in mammals is caused by mild deficits in neural crest cell development during embryogenesis.

    The neural crest contributes to all sorts of things…facial morphology (jaw, teeth, skull), adrenal glands (stress response), pigmentation, autonomic nervous system, peripheral nerves and glia, and parts of the limbic system (emotion, reactivity, threat detection).

    If you select for tameness (or, in humans, for docility/compliance), you’re not just changing a particular behavior…you’re reconfiguring the organism’s whole developmental trajectory. And here’s what you get:

    • Smaller brains
    • Flattened faces
    • Lower stress reactivity
    • Blunted sensory input
    • Neoteny (more juvenile traits retained into adulthood)
    • Reduced startle or protest response
    • Delayed or diminished emotional signaling

    Where does that show up in humans? Increased social pliability. Extended childhood dependence. Lower physiological sensitivity. Greater performance tolerance under contradictory or symbolic norms.

    In other words, your “modern human” wasn’t just bred to be nice…it was bred to feel less and to respond less to what would once have been danger, injustice, or disorder. That isn’t a linear trade. It’s a network-wide reorganization of the system (what Bateson would call a change in the system’s pattern of constraints).

  • Is there such a thing as a “baseline human?”

    I describe the configuration of the human nervous system known as “neurotypical” as being divergent from an adaptive baseline. But is there such a thing as a “baseline” human? A “baseline” wolf? After all, every organism is the result of ongoing evolution. Am I just comparing one phase of adaptation to another?

    If I were talking about evolutionary drift, or ecological selection within an intact system, then yes…I’d be fucking up. But the civilizing / domesticating process isn’t that.

    Domestication is artificial selection, not natural selection. In wild systems, traits are selected by feedback…what works, persists. In domesticated systems, traits are selected by suppression…what submits, survives. That’s a forced bottleneck, not an evolutionary trajectory. A wolf doesn’t become a dog by evolving, but by being confined, starved, bred, and rewarded into compliance. Same with us.

    And I’m comparing different conditions, not forms. This isn’t wolf vs. dog, or Paleolithic vs. modern human…it’s organism regulated by coherent feedback loops vs. organism surviving in a distorted, feedback-inverted environment. This isn’t some kind of nostalgia for prehistory…it’s about system integrity.

    It’s laughable that we live in a “world” where we have to be reminded that there is a functional baseline…you could call it feedback coherence, I guess. Coherent behavior is maintained through timely, proportionate, meaningful feedback. That’s the baseline…it’s a system condition (not a species). When a system becomes functionally closed, symbolically governed, and/or predictively trapped, it loses that baseline (even if it survives in the short term).

    You might respond that evolution got us here. But evolutionary processes don’t “justify” maladaptive systems. Saying there’s no baseline is a post hoc rationalization for harm. And I hear that all the time. People justifying obesity in dogs because it’s common in the breed. Or calling office work “adaptive” because it pays well. Or saying modern humans are just “evolved” for abstraction and control…even as the world burns and mental illness becomes the new norm.

    Evolution doesn’t care about health or coherence. It simply tracks what survives. But feedback is what sustains life, and it’s being severed.

    Ask yourself: what is selected for in society, as you know it? If you had to name one thing? Honesty? Hard work? Ambition?

    I think it’s compliance. I think the civilizing/domesticating process replaces selection for survival with selection for compliance.

    Let’s look at wild systems first. There, the selection pressure is for ecological coherence. Traits are favored because they enhance survival in a feedback-rich environment (keen senses, strong affective bonds, situational learning, pattern recognition, adaptability). An organism has to remain in sync with reality, or it dies.

    But in civilized systems, it’s easy to see that traits are favored because they enable success within an artificial, abstracted system (obedience, rule-following, role performance, suppression of emotion and instinct). You have to fit the symbolic structure, or you’re punished, excluded, pathologized, or discarded.

    It sucks because what was adaptive (sensitivity, integrity, etc.) is maladaptive in this odd place we call “civilization.” And what was dangerous (passivity, abstraction, dissociation) is rewarded.

    Think: selecting for people who can function without reality (instead of people who thrive in it).

    It’s not far-fetched. At all. Sickly animals that can’t survive in the wild. Office workers who ignore chronic pain and emotional numbness (and get promoted). An entire species driving itself toward collapse while calling it “progress.”

    This whole trainwreck we’re on is a case of runaway selection, but instead of selecting for extravagant traits like peacock feathers, it selects for compliance with abstraction and resilience to incoherence. And like all runaway selection processes, it becomes self-reinforcing, decoupled from reality, and ultimately self-destructive.

    Don’t believe me? Let’s track it.

    Quick review of the basic concept. In biology, runaway selection occurs when a trait is favored so intensely within a closed feedback loop (e.g. mate choice, social signaling) that it amplifies beyond functional limits (it doesn’t serve survival anymore…it just signals compatibility with the system).

    Peacocks grow huge, draggy tails because other peacocks think it means they’re fit (not because it helps them survive). Humans undergo surgeries, wear restrictive clothes, or starve themselves for “attractiveness” under runaway cultural ideals. Same dynamic. And civilizations grow more complex, abstract, and self-referential not because it’s sustainable, but because “complexity” signals legitimacy and control.

    Let’s run through it again.

    Civilization creates a system (think classrooms, corporations, governments) where success depends on suppressing natural feedback. Then it rewards those most tolerant of abstraction, delay, hierarchy, and contradiction. This filters out feedback-sensitive traits. That keeps happening until the system becomes so self-referential that it can’t correct course anymore…it’s bred out the ability to perceive correction.

    So it’s a runaway selection for dissociation. For the kind of human who can survive it (even if it clearly can’t survive the world).

    Like all runaway systems, the trait (in this case, compliance) accumulates beyond adaptive range. The system grows more fragile and less correctable. Feedback from the real world becomes too painful or too late. And collapse happens from the inability to stop succeeding at being disconnected (not from a single failure).
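    The runaway dynamic described above can be sketched as a toy simulation (entirely my own construction, for illustration only…the essay itself contains no model). The assumption baked in: reproduction is weighted by the trait itself (a closed loop), not by fitness in the real environment, so the trait amplifies while real-world fitness declines.

```python
# Toy model of runaway selection in a closed feedback loop.
# Assumption (mine, not the essay's): "real fitness" is modeled
# crudely as (1 - compliance), i.e. feedback sensitivity lost.
import random

random.seed(0)

def simulate(generations=50, pop_size=200):
    # each individual is just a compliance level in [0, 1]
    pop = [random.random() for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        # closed-loop selection: the system itself rewards compliance,
        # so reproduction weight grows with the trait (positive feedback)
        weights = [c ** 2 for c in pop]
        pop = random.choices(pop, weights=weights, k=pop_size)
        # small mutation keeps variation alive
        pop = [min(1.0, max(0.0, c + random.gauss(0, 0.02))) for c in pop]
        mean_c = sum(pop) / pop_size
        history.append((mean_c, 1.0 - mean_c))  # (compliance, "real fitness")
    return history

history = simulate()
print(f"gen 1:  compliance={history[0][0]:.2f}, real fitness={history[0][1]:.2f}")
print(f"gen 50: compliance={history[-1][0]:.2f}, real fitness={history[-1][1]:.2f}")
```

    Run it and the trait climbs toward its ceiling while the proxy for real-world fitness falls…nothing in the loop ever consults the environment, which is the point.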

    We’re not evolving.

    We’re overfitting. Civilization is a runaway selection loop for traits that thrive in unreality.

    And the “neurotypical” configuration is a collection of those traits. It’s not a neutral or natural norm…it’s a phenotypic outcome of this runaway selection.

    A configuration that is tolerant of contradiction (doesn’t break down where reality and narrative diverge). That is emotionally buffered (can perform even when distressed). That is low in sensory vigilance (can endure loud offices, artificial lights, social facades). That is socially adaptive (mirrors norms, infers expectations, suppresses authenticity). That complies with rules even when rules are nonsensical. That’s able to delay gratification, ignore bodily needs, and maintain appearances.

    I’m not saying these traits are bad per se…but I think we can all agree that they’re not the “baseline human.” They’re the domesticated phenotype, selected over generations to survive in systems where truth no longer matters.

    And, of course, the more a system rewards these traits, the more they proliferate (socially, genetically, culturally). It becomes harder for feedback-sensitive individuals to survive. Reality has to be increasingly suppressed to preserve the illusion of normalcy. Eventually, the only people who appear “well-adjusted” are the ones most disconnected from feedback…and the entire system becomes incapable of detecting its own failure. That’s the endpoint of runaway selection.

    I have a hard time with the dominant narrative…that the neurotypical profile is some kind of gold standard of human functioning. To me, it’s clearly the domesticated outcome of a system that rewards compliance (and “stability,” such as it is) over coherence or contact with reality.

    * When I say “neurotypical,” it’s not meant as some kind of medical category. I think of it as the cognitive-behavioral phenotype most rewarded by civilization (modern society, yes, but also throughout the history of civilization). I don’t see it as a person. Not every “neurotypical person” fits this mold. I’m almost certain no one fits it perfectly. I’m describing a directional pressure, not a binary condition. And it isn’t “bad.” It’s simply optimized for the wrong environment (one that destroys life). Neurotypicality isn’t unnatural…it’s civilizationally adaptive (in a system that’s maladaptive to life).

  • The Civilizing Process IS Domestication

    Domestication is the process by which organisms are selectively shaped to be compliant, predictable, and dependent on human-controlled environments…often at the cost of sensory acuity, autonomy, and ecological fitness.

    Civilization is the expansion of symbolic control over individuals and groups through norms, rules, abstraction, and institutions…suppressing direct feedback, internal regulation, and spontaneous behavior in favor of obedience and symbolic order.

    They’re one and the same.

    They both suppress feedback sensitivity. (To control an organism or a population, you have to prevent it from reacting authentically to harm, injustice, or incoherence.)

    They both favor neoteny. (Juvenile traits like compliance, passivity, and external regulation are selected and extended into adulthood.)

    They both shift behavior from function to performance. (The wild animal hunts; the domesticated animal waits. The wild human responds; the civilized human performs.)

    They both create dependence. (On artificial systems…pens, laws, currencies, screens…rather than ecological loops.)

    They both sever feedback loops. (To domesticate is to disable the plant’s / animal’s relationship with “wild” cues. To civilize is to disable the human’s relationship with embodied, emotional, and ecological reality.)

    Domestication is the biological manifestation of the civilizing process, and civilization is domestication scaled, abstracted, and systematized. This isn’t metaphor…they’re identical. Different names for the same thing.

    So what?

    1. What we call “progress” is maladaptation. If civilization selects against feedback-sensitive traits, then most hallmarks of progress (obedience, emotional detachment, performance under duress) aren’t improvements. They’re symptoms of ecological and cognitive degradation.
    2. “Neurotypical” is a pathology of fit. In other words, the “typical” mind in civilization is one that fits a feedback-suppressed system…not one that is healthy or coherent. What we call “mental health” is largely the ability to suppress warning signals.
    3. Collapse is the endpoint. A system that inverts feedback can’t self-correct. It accumulates error until it fails catastrophically. Collapse isn’t a failure of civilization…it’s its logical endpoint.
    4. Modern humans aren’t baseline humans. Just as dogs aren’t wolves, modern humans aren’t the baseline human phenotype. We’re shaped by millennia of selection for compliance, abstraction, emotional control, and symbolic performance.
    5. Resistance to this process (civilization / domestication) is a biological signal. Individuals who resist conformity, abstraction, or symbolic authority aren’t broken…they’re retaining functional traits that no longer fit the dominant system. Autism, ADHD, sensitivity, oppositionality, and “mental illness” often represent intact feedback systems in an inverted environment.

    What are the real products of civilization? Not culture, but civilization?

    We have some intentional products (ones designed to enforce control):

    • Laws / punishment systems (enforce behavior abstracted from context or consequence)
    • Religions of obedience (codify submission and moralize hierarchy)
    • Schooling (standardizes cognition and behavior to serve symbolic roles)
    • Currencies / bureaucracies (replace direct reciprocity with quantifiable abstraction)
    • Surveillance (ensures conformity without requiring local trust or co-regulation)
    • Cages / fences / walls / uniforms / schedules (tools to overwrite instinct)

    And we have some inadvertent ones (usually denied or pathologized):

    • Mental illness epidemics (result from prolonged feedback suppression and coerced performance)
    • Chronic disease (where natural regulation is replaced by artificial inputs)
    • Addiction (coping mechanism for living in a system where natural pleasure and feedback loops are severed)
    • Anxiety and control-seeking (nothing is safe, responsive, or coherent)
    • Loneliness / alienation (loss of meaningful co-regulation and mutual reliance)
    • Ecological destruction (consequences are insulated against)
    • Pathologization of feedback-sensitive people (framing coherence-seeking organisms as dysfunctional because they can’t / won’t adapt to incoherence)
  • Feedback Inversion

    The way domesticated humans and animals diverge from their wild counterparts isn’t random…it follows a predictable systems pattern that has analogues in ecology, cybernetics, even thermodynamics.

    What is it? What’s the key transformation?

    The organism shifts away from being regulated by feedback to being regulated despite it.

    That’s what domestication does (in animals or humans). It removes or blunts the organism’s natural ability to respond to environmental signals, and replaces that responsiveness with compliance to an imposed system. And the divergence unfolds along a bunch of predictable dimensions…

    Cognitive Shift (From Adaptation to Control)

    Wild mind: constantly updating based on local, real-time feedback

    Domesticated mind: defers to rules, roles, or authority (even when they contradict experience)

    Behavioral Shift (From Function to Performance)

    Wild behavior: serves a real purpose (find food, avoid danger, bond)

    Domesticated behavior: serves a symbolic or imposed role (obedience, etiquette, branding)

    (In cybernetics, this resembles a loss of negative feedback…the system stops adjusting based on outcome, and instead preserves form through positive feedback, locking in behavior.)

    Sensory Shift (From Vigilance to Tolerance)

    Wild senses: alert, acute, tuned to survival-relevant input

    Domesticated senses: dulled, filtered, or overridden to tolerate noise, confinement, social overload

    Affective Shift (From Co-regulation to Suppression)

    Wild emotions: socially functional, tied to reality

    Domesticated emotions: repressed, misdirected, or disconnected from actual stimuli (chronic anxiety, performative joy)

    Structural Shift (From Efficiency to Excess)

    Wild bodies: lean, efficient, stress-adapted

    Domesticated bodies: neotenous (juvenile traits), prone to disease, dependent on infrastructure

    So what’s going on in this domestication process? Particularly in human behavior?

    You could call it feedback inversion. A systemic reversal of the role of feedback…from a guide to coherence to a threat to be suppressed, ignored, or distorted.

    And I’d argue that the domesticated (“neurotypical”) human mind is a product of feedback inversion…trained to override bodily, sensory, and ecological signals in favor of symbolic, delayed, or externally enforced rules.

    Let’s track this.

    Control comes first.

    1. A group (or system) seeks to stabilize its environment, secure resources, prevent loss, dominate others, etc. This is an impulse that demands predictability and reduced uncertainty.
    2. And to exert control, you have to ignore certain inconvenient signals. The hunger of others. The pain of subordinates. The ecological damage you’re causing. Your own body’s needs. In other words, you begin inverting feedback. You treat reality’s signals as noise.
    3. Once you have symbolic systems (laws, money, ideologies) in place to maintain control, they begin rewarding those who suppress feedback and punishing those who respond to it. Now we have a positive feedback loop. The more control you assert, the more feedback you need to ignore. And the more feedback you ignore, the more “brittle” your control becomes…so you assert even more.
    4. Over time, the system selects for feedback-insensitive participants. Now control isn’t just enforced…it’s embodied. Now feedback sensitivity looks like deviance.

    Once embedded, feedback inversion maintains control: it filters out destabilizing truths, prevents course correction, and confers survival advantage on the most disconnected people (until the system crashes). It starts as a tool of control but becomes a systemic pathology.

  • The Double-Empathy Struggle

    So a big part of this book is figuring out how people can do the stupid or terrible things they’ve done (and continue to do).

    The answer to that question has proven a real challenge. Frustrating.

    It’s occurred to me that part of the challenge (maybe the biggest part) is that I’m trying to figure out where people diverge from reality in a way that I can understand. I keep looking for reasons I can relate to. Some sort of trap that, when I see it, I say, “I could see myself falling into that, too.” But I can’t find that trap.

    Because if the divergence from reality I see in the people around me happens at a point I would never have chosen…it feels alien, false, deductive. I need it to be human. Comprehensible to me. I want to believe that had I been there, I’d at least have seen how the mistake happened.

    It’s in this line of thought that I had a breakthrough.

    Maybe the difference isn’t in the choice…but in the threshold.

    I and others like me might just have a lower tolerance for unreality (a more sensitive detection system for contradiction). Because I think most people DO feel dissonance, but they just have more social circuitry telling them to ignore it. What is that social circuitry? And isn’t that the deviation from life’s baseline?

    When faced with serious problems, statements like, “This isn’t my place to question,” “It’s probably fine,” “Everyone else seems convinced,” “It’s safer not to say anything,” and “It is what it is,” do more than annoy me. They fucking enrage me.

    So maybe the divergence is recognition. One group feels the glitch and names it…the other feels it and smooths it over. Because their nervous systems are somehow tuned to avoid rupture instead of detecting and responding to it.

    Maybe I feel reality differently. That certainly tracks. That would mean a problem of empathy across feedback thresholds. That mystery choice I’ve been looking for? The one I can comprehend as how people mistook fiction for reality? Maybe I’m not missing it at all. Maybe I’m simply seeing that, for me, there was no choice. There’s something that I would have felt that didn’t register with them.

    So let’s look at our fork again.

    Is it a different mix of people in the groups? We’ve ruled out innate cognitive superiority. Could there simply be a different mix of dispositions, thresholds, or nervous system types?

    Probably.

    Let’s say Group A has more people whose nervous systems respond strongly to contradiction, unreality, or unresolved pattern. And Group B has more people whose nervous systems prioritize social cohesion, comfort, and continuity.

    No talk of virtue…just configuration.

    Same species, same environment, different sensory weighting. It seems plausible that a small difference in feedback sensitivity across a few individuals could tip a group’s response to contradiction.

    Or is it really external conditions? Because I think these matter…but not in the way most people think. It’s not about environment determining outcome. It’s about environment shaping when and how feedback arrives. A harsh environment returns frequent, sharp signals (You’re wrong. FIX IT.) A forgiving environment allows more drift before consequences appear.

    So external conditions shape the urgency of model correction, and internal sensitivity shapes the likelihood of correction. Low sensitivity + gentle conditions? Drift compounds. Fast. High sensitivity + harsh conditions? Feedback (reality) stays close.

    Are “low tolerance for unreality” and “need for stable patterns” the same thing? I don’t think so…but they feel close.

    A low tolerance for unreality is detecting and suffering from contradictions between reality and model…it’s affective and stress-inducing. A need for stable patterns is seeking and requiring patterns that hold over time to feel safe…it’s predictive and structural.

    But they’re structurally linked, aren’t they? I need stable patterns because unreality feels intolerable. And I reject unreality because it violates the patterns I need to hold. They both express an orientation…a high-fidelity feedback requirement.

    SO…some groups contain individuals for whom predictive error is viscerally intolerable. Others contain fewer. Whether the group listens to those individuals determines whether the model corrects or compounds. The environment determines how quickly error becomes obvious. The culture determines how early error is acknowledged. And the nervous system determines how strongly error is felt.

  • No…autistic people don’t struggle with complexity.

    We struggle with complex bullshit. Complexity that doesn’t stay in contact with reality. Complexity built to preserve delusion…systems of thought that multiply explanations instead of reducing error. It’s not the number of layers…it’s whether the layers track the thing they claim to represent.

    I’m fine with complexity when it emerges from feedback, remains falsifiable, stays anchored in pattern, can be broken open and examined, and responds when something stops working.

    I’m not fine with just-so stories, self-reinforcing abstractions, theories immune to contradiction, semantic inflation (changing definitions to preserve belief), or socially protected bullshit that silences doubt.

    I’m just fine with structure…it’s insulation I have a problem with.

    Bullshit = complexity that survives by outmaneuvering feedback.

    And yet………in the early stages of understanding something, I do feel averse to complexity.

    Like why the people around me seem fine when just about nothing in the world is fine. How did they get like this? Surely their disposition isn’t life’s baseline, or the earth wouldn’t have lasted as long as it has.

    I don’t like lists of reasons. I don’t look for explanations as much as singularities. Something that collapses the list. Something that makes that fork I’ve been writing about…the one where some groups of people stayed connected to reality and others adopted fictions that ultimately lead to genocide / ecological plunder / extinction…inevitable, traceable, and unambiguous (without resorting to mysticism, virtue, or accident).

    I’m allergic to narrative sprawl (I know, I know) masquerading as theory. I don’t want an ecosystem of causes…I want a keystone fracture.

    If the starting conditions are the same, why does one group protect an erroneous model of reality, and another let it break?

    I can’t help but feel that the first real difference is what the group is optimizing for, and whether that goal is visible to them or not. I think one group is optimizing for predictive accuracy, and the other is unconsciously optimizing for social coherence. There. I said it.

    I don’t claim they know they’re doing it. But every signal, every decision, every reaction is weighed (subconsciously) against one of those metrics. When the model breaks, that internal orientation determines the response. If the priority is accuracy? “The model must adapt.” If the priority is coherence? “The contradiction must be contained.”

    So not values or beliefs, but a deep system preference for truth-tracking versus conflict-minimization. And based on everything I’ve encountered…that really feels true. It clicks.

    And it begins long before it’s visible…it shows up in how children are corrected, how dissent is handled, how stories are told, whether doubt is sacred or dangerous, and whether speech is relational or investigative. One group sharpens awareness and the other flattens tension.

    Because social coherence “works,” doesn’t it? It feels good. It stabilizes something.

    So the first difference, the root divergence, the fork, is not belief, structure, or insight. It’s which pain the group is more willing to feel: the pain of being wrong, or the pain of disagreement. When error appears, will we change the story…or suppress the signal?

  • Nothing

    A new performance always starts with hope.

    Not the naïve kind…more like a quiet, aching belief that maybe this time, I can hold it together. That if I give enough of my effort, energy, and attention, something solid will finally form around me. Something real. So I say yes. To jobs. To invitations. To marriages. Yes. Yes. Yes. To any expectation that hangs in the air unspoken. I say YES to being useful. YES to being tireless. YES to being wanted.

    Everything about ME makes people uncomfortable, but at the age of eight, I find out hard work is always applauded. And that’s something I can do. That’s my first in. Never fewer than 2-3 jobs at a time. My. Whole. Life.

    At work, I become a machine. Relentless. Competent. First to arrive, last to leave. I never say no, because no one ever says no to me. I make myself indispensable. I perform stability, drive, charisma. And people love me for it. My performance is a flawless reflection of their expectations…changing in real time as they’re perceived.

    Everything about ME makes people uncomfortable, but at the age of sixteen, I find out my face is attractive. And that’s something I can use. That’s my second in. Never without a partner. My. Whole. Life.

    In relationships, I become another mirror. Attentive. Affectionate. Charming. Safe. I show up like the ideal partner, because part of me genuinely wants to be that person—for her, for myself. I make promises I don’t realize are promises: I’ll always be this available, this engaged, this put-together. It works. I’m praised, admired. I feel chosen.

    But the gap always shows up.

    At first it’s a small delay or a quiet sense of dread. Tasks that seemed easy feel heavy now. Conversations drain me. My moods swing. I can’t keep up the pace I set…not at work, not at home. But I don’t know how to say that. I don’t know how to say that I’m breaking. I don’t even know that’s what’s happening. I just feel tired. Agitated. Trapped. Off.

    Then comes shame. The unwelcome knowledge that I’m slipping. I can’t be the person they count on. I can’t.

    C-A-N-N-O-T. Not as in “choose not to,” but NOT ABLE TO.

    I know I’m about to let everyone down again. The thing is, I want to keep the promises. I’m just not built for the way I made them. But by the time I admit that to myself, I’m already failing. Already withdrawing.

    So I disappear…emotionally first, then physically. At work, I start missing details. Resenting the schedule. Loathing my own reputation. At home, I get quiet. Stop initiating. Smiling less. Sleeping more. I avoid questions. Avoid eye contact. Avoid being known.

    And they notice. They always notice. My boss. My partner. My friends. They can’t understand why I “changed.” Why the star employee lost his spark. Why the attentive husband grew cold. I can’t explain it either…not in a way that doesn’t sound like excuse. I hate what I’m becoming, but I can’t go back. The mask is too heavy. And I don’t know who was underneath it anymore.

    So I end things. Or they do. Or the universe does.
    Then comes the silence.

    And then, eventually, comes another chance. Another invitation. Another flicker of hope.
    And I think: Maybe this time.

    Over and over and over and over and over and over.

    I know the environment I need now. That I need. Now! After nearly five decades. But I can’t build it. I can’t go to it. I have insight in one hand and a lifetime of relational debt in the other.

    I go back to pretending.

    Or I collapse.

    Or I live in this unsustainable torture of the in-between.

    Is nothing real? Where am I? What have I done? What do I do? Is it me? Where am I?

  • I’m “divergent” from WHAT, exactly?

    Civilization is a system that diverges from reality. Its function is to preserve unsustainable human behavior against natural feedback. It accomplishes this by suppressing, distorting, and severing ecological and biological feedback loops. As it becomes more effective at doing so, the living systems that depend on feedback to remain coherent (forests, animals, people, ALL of life, ultimately) begin to break down.

    Feedback sensitivity, like every trait, exists on a scale. So it’s no surprise that the organisms most sensitive to feedback are the first to suffer when that feedback is polluted or withheld.

    Civilization gaslights by portraying feedback sensitivity as the deviation, when in fact it is the system itself that has broken from reality. Clearly. The evidence is everywhere it touches life: destroyed species, destroyed ecosystems, destroyed peoples.

    But within its dominant framework, “neurodivergent” becomes a catchall for anyone whose nervous system fails to function “normally” within an environment that is fundamentally maladaptive.

    It bears repeating: the system you grieve being excluded from is maladaptive to ALL life. This isn’t a contentious statement. Turn on the news. You know it’s true. You feel it.

    The “norm,” the neurotypical person, is a hypothetical construct. It describes someone who can survive and thrive outside of reality, inside civilization’s distortions. But that person doesn’t exist. There are only people who appear to tolerate those distortions in the moment. Their bodies and minds are in deep distress, but the feedback doesn’t register on an immediate physiological level. It shows up as depression. Anxiety. Diabetes. Chronic inflammation. Autoimmune disorders. Panic attacks. Doomscrolling. Dissociation. Insomnia. And they look to their captor for solutions. Plastic surgeries. Weight-loss drugs. Self-help. Workplace wellness seminars. Sugar. Alcohol. Netflix. Adderall. SSRIs. Ambient music. Mindfulness apps. Therapy dogs.

    We need to stop speaking civilization’s language. We need reality again as a context. I’m so tired of validating the mass psychosis of broken systems.

  • Letters to Family after a Late Autism Diagnosis

    I hope this note finds everyone well. I’m writing today with a personal request regarding my father, __________.

    I’m currently working on a book that explores autism—not just in clinical terms, but in how it shapes lives, relationships, and histories. As some of you may know, my father, __________, was autistic. He is a central part of this story, and yet, in many ways, he remains the least known person in my life.

    Knowing that he was autistic, as I am (I was “diagnosed” last year), helps make sense of many things I once could not understand. Unfortunately, I know very little about the first fifty years of his life, only fragments of the two decades that followed, and mere glimpses of his final years.

    I’m reaching out to all of you because I suspect there are memories (perhaps small ones, perhaps difficult ones) that might help me piece together who he was. Any story, however brief, however second-hand, however unsavory, is welcome. A childhood impression. A family photo. A moment observed. Even a sentence your parents once said in passing.

    I understand that not everyone may have had a positive experience with my father. He could be very, very difficult. He hurt people. I’ve spent much of my life coming to terms with my own grievances. I’m not asking anyone to excuse him…but if there’s a way to understand him more clearly, I would like to try.

    When I asked questions growing up (and even in adulthood), the most consistent answer I received was, “Your father isn’t well.” I believe that was said with care. But it left a silence I’ve lived with ever since.

    So this is me, asking gently: If you have something to share, no matter how small, I would be grateful.

    _____________________________________

    Many of you have reached out to me with stories. I appreciate you so much. Apologies if I missed someone in my replies. 

    Thanks to your help, I’m coming to grips with parts of this story I didn’t even know existed. Not small pieces, either. The sort of pieces whose absence was…incomprehensible? The sort of pieces that, when missing, result in a completely incoherent story. That result in a completely incomprehensible person.

    When I’m done, I’d be happy to share some insight into my father with the interested among you (no need to reply here, but you can send me a private message). My father didn’t exist in a vacuum. He was part of a family. Most of the story will be upsetting, but if you ask for it I’ll assume you know yourself well enough to handle that sort of thing.

    Because this week of your help alone has yielded so much important information, I’d ask that you continue sharing details with me, as you remember them. Maybe you feel resistance at the thought of sharing them. I understand. No one has an obligation to share anything they don’t want to. Maybe you think some things are better left alone. That is something I have a harder time with. And if you ever read this emerging story, you’ll know why I have a hard time with that. Because a lot of the upsetting parts of this story are the result of just that tendency: a control of narratives and knowledge that presumes one’s own worldview is superior to that of others. We’re talking about some serious generational trauma here…allowed to persist under the guise of good intentions.

    No need for a polished email…single sentences with no punctuation, etc., are just fine. It doesn’t have to be a story, even. It can be a feeling you had or an impression you never fully explored.

    None of these details will ever be published. The book I’m writing necessitates an understanding of the interplay between an inherent nature (that we call ‘autism’…along with a few other diagnostic labels) and its environment, but it isn’t about my father. My father’s story is a case study that I’d really rather not have. But here we are.

    _____________________________________

    Thanks for your kind words. To be honest, I still know embarrassingly little about the subject matter of my book. And I’m a bottom-up thinker, so my learning process is SLOWWW…

    Your email is the first thing I read today and it made me feel good. And when I feel good, I overshare. That might be autism. Or it might be what an autistic person does when safe opportunities to share feel so rare. Or it might be that a diagnostic label like autism only makes sense in a certain system…a system where certain traits that are normally quite adaptive are pathologized. My book is an exploration of the last. It’s a giant footstomp against being told: “Good news. There’s a name for the way you are. It’s a disability. So just throw that name around and people will have to accommodate you.” But I don’t want to be accommodated. I never have.

    Oh boy. Here is an early morning rambling I’ll almost certainly spend the rest of the day beating myself up over. 

    You may know most of my ideas already. I have a hard time guessing how the things I say will be heard…so I either 1) over-hammer points (the way my father would feel the need to explain the history of juice before telling you he’d switched to the newest Five Alive, maybe); or 2) go straight to a level of abstraction that presumes you know everything in my head already (which is how I adjust my behavior when my over-hammering tendency gets brought to my attention enough times). In any one conversation, you’ll almost certainly get both from me.

    Part of the autistic ‘experience’ is the constant performances, and one of those performances is pretending to know what one’s talking about until one does. This is one of many behavioral patterns that emerge from the struggle to make sense of social models in which you’re presented with a set of rules that no one else seems to really follow. When you follow these rules you’re ridiculed. Don’t be so literal. When you break them you’re punished. You KNEW the rule. Everyone around you navigates this terrain with far less friction. It starts to feel like they have the real rulebook in their back pocket. Unlike others, I can’t seem to ‘let go’ of that friction. It constantly agitates me. But on the outside, you perform. And when the majority of your life is a performance, eventually everything you say feels like a lie. But occasionally, a few days or months later, you’ll realize something you said was true. And you’ll be surprised. Very. You realize that these performances have become as much for yourself as they were for others.

    _____________________________________

    Ugh…here is some more over-hammering.

    As I was going through my morning routine, my mind kept going over how these things will be read by the family. Until I have the fullest story I can have, I’ll be purposely vague. But being vague invites all sorts of potential misunderstandings or objections. 

    My father had a reputation for ruining the few family gatherings he attended. He would bring ugliness. I think those of you who lived through that might be feeling the same about me. He’s his father’s son, after all. Here’s this beautiful online family community we’ve created, where we share news of baptisms, memories of loved ones who’ve passed, and other uplifting news…here’s this guy hijacking the group as some sort of platform for his mid-life crisis. I hope no one sees me like that. I’m painfully aware of the impression I can make. It’s this awareness that partly explains my historic lack of participation with family (but I read everything!). Only partly, of course…there isn’t enough time in the day to explain all the reasons I tend to avoid group settings. And none of those reasons are a particular individual, etc.

    There are huge elephants in the room that I have to address. 

    First, I want you to know that I know my father’s behavior was, in many cases, grossly offensive. If you know what I mean by that, then you…already know. His behavior wasn’t harmful only under a certain light, or without a particular understanding…it was harmful, period.

    Second, I recognize the very real efforts made by family to help him. What I have to say is in no way a unilateral dismissal of those efforts. 

    Next (and this is the hardest to address, by far), there are external circumstances of my life (in fact, probably most of the on-paper circumstances of my life) that make anything I have to say very easy to dismiss. I’ve come to understand that a big part of being a ‘high-functioning’ autistic person (i.e. an autistic person with lower support needs) is that you can blend in just enough to do some incredible damage to not only yourself, but others. I have two failed marriages under my belt, and three awesome children who understandably have some very mixed feelings about me. The behaviors I engaged in, the ones I engineered in order to access love and a feeling of belonging, meant making commitments that were well beyond my ability to keep. And if that sounds like self-serving bullshit to you…well, all I can say is that most days that’s what it sounds like to me, as well. 

    The parallels between my father’s life and my own are tragic to the point of being comical (almost). We’ve both caused damage. I’ve arguably caused more than he did. I functioned ‘better’ and longer. When you’ve caused damage like that, you lose the right to speak. When you open your mouth, people expect sickness. It’s dismissed, wholesale. 

    So a huge challenge (insurmountable, even) in explaining yourself as a late-diagnosed ‘high-functioning’ autistic person is the very real danger of having everything you say interpreted as self-serving bullshit. After this past year of corresponding with countless other late-diagnosed autistics, I can tell you this is an almost universal experience. The damage is already done. The collapse came too late. You managed to do X, so you sure as hell can do Y like the rest of us. Grow up. All you can do is shrink yourself and hope that the relational debts you (or, more accurately, the persona you created for others) incurred before your diagnosis will be somehow…forgiven? But you know they won’t be. Because you certainly wouldn’t forgive them.

    The other challenge is that autism is largely a difference of degree, not of kind. When you try to explain it, people default to their own experiences. Using our own reference points, we assume everyone experiences the world the same way we do. If I were to present my challenges to you in a list, you’d relate to just about all of them. I don’t like loud environments, either. No one does. I have a hard time with hypocrisy, too–who doesn’t? I have a difficult time with change, too. I’ve felt awkward in social situations, who doesn’t? I do best with a routine, everyone does. But life simply isn’t like that. It really sounds like you just want to avoid challenging yourself. It sounds like you’re trying to rationalize avoiding what everyone would like to avoid, but is mature enough to tolerate. Etc. Autistic adults are 7 times more likely to commit suicide than your ‘average’ person. It’s a bit harder to explain stats like that away as laziness or immaturity or irresponsibility.

  • My “Alexithymia” Isn’t What They Say It Is

    When I hear that someone is suffering (really suffering, with no way out) it hurts. The destruction of nature hurts. Reading about people in North Korean prison camps hurts. The quiet death of ecosystems, the slow violence of poverty, the stories I read here from other autistic people, the way the powerless get crushed by systems they didn’t create…this kind of pain gets in me and doesn’t leave. It’s like background radiation. I carry it everywhere.

    But when someone is suffering because of something they refuse to change, when they clearly could, but don’t…I don’t feel sad. Not really. Not even when I’m supposed to. And apparently that’s a problem. That’s not empathetic, I’m told. That’s cold. That’s…autistic?

    So I’ve been thinking: what does “empathy” mean to most people, then? Does it mean feeling what someone else feels, no matter what? Does it mean echoing their distress, even when that distress comes from avoidable choices, repeated again and again?

    To me, empathy includes being able to discern what’s really going on, and responding from a place of integrity. Otherwise, don’t we just cheapen words like “sad?”

    It’s strange to hear people say I “lack empathy.” What I feel isn’t absence. It’s selectivity. It’s proportional. It’s based on whether the situation actually warrants emotion, not whether I’m expected to emote.

    It’s strange how not reacting becomes the problem. Not the incoherence of the situation. Not the person refusing to help themselves. My failure to perform the right emotion at the right time is what gets flagged as a deficit.

    And maybe that’s why I’ve also been having such a hard time with the word alexithymia.

    Sometimes I look back on an experience…a conflict, a celebration, a goodbye…and only afterward realize it was happy. Or it was unjust. Or it was sad. At the time? I didn’t feel much of anything. I wasn’t there in the way people expect. And I find myself wondering, is that alexithymia? Is that what they mean when they say I can’t identify emotions?

    But here’s what I think is actually happening: I wasn’t allowed to be present. I was too busy tracking the expectations in the room. Too busy trying to be appropriate. Too busy masking. The part of me that might have felt joy, or grief, or wonder, wasn’t at the front of the line. It was buried under a survival protocol.

    So maybe it’s not that I “lack access” to my emotions. Maybe it’s that I’m not given access to the conditions where those emotions can surface.

    Maybe it’s not that I can’t feel. Maybe I’m just too busy surviving.

  • In Relationship with the World

    These are some rough-draft ideas from Part I (Feedback Sensitivity in Coherent Systems)

    I’ve come to believe that life persists by listening. Not through force, aggression, or even advantage, but through attention to what the world is saying. Everywhere, in every corner of the biosphere, living systems endure by sensing feedback and responding to it. A single-celled microbe navigates chemical gradients; a beaver adjusts the shape of its dam to match the water’s push and pull. Different forms, different scales, same principle: those attuned to feedback persist.

    Feedback sensitivity isn’t a marginal skill. It’s not the biological equivalent of knowing how to fold a fitted sheet (nice, but not a prerequisite for survival). Feedback sensitivity is the baseline requirement for survival.

    When I say “feedback,” I mean the circular flows of information in a system: a change in one part affects another, and eventually returns to affect its original source. Biologists call these feedback loops “negative” when they put the brakes on change, “positive” when they amplify it. Either way, they provide continuous regulatory information—a live stream of signals that allows an organism or ecosystem to assess its own behavior and adjust.
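    For readers who think in code, the distinction between the two loop types can be sketched in a few lines of toy Python (my illustration, not the author’s; the gains and values are arbitrary): a negative loop damps deviation from a set point, while a positive loop amplifies a system’s own change.

    ```python
    # Toy illustration of negative vs positive feedback loops.
    # All numbers are arbitrary; only the shape of the dynamics matters.

    def negative_feedback(value, setpoint, gain=0.5, steps=20):
        """Each step pushes the value back toward the setpoint (brakes on change)."""
        for _ in range(steps):
            error = value - setpoint
            value -= gain * error  # the correction opposes the deviation
        return value

    def positive_feedback(value, gain=0.5, steps=20):
        """Each step amplifies the value's own deviation (runaway change)."""
        for _ in range(steps):
            value += gain * value
        return value

    # A regulated system converges on its setpoint...
    print(round(negative_feedback(40.0, setpoint=37.0), 3))  # 37.0
    # ...while an unregulated one grows without bound.
    print(positive_feedback(1.0) > 1000)  # True
    ```

    The negative loop is the structure behind homeostasis (body temperature, blood chemistry); the positive loop is the structure behind runaway processes like population booms.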

    Feedback insensitivity, by contrast, leads to drift: systems that can’t correct, can’t adapt, and eventually disappear. Whether it’s a sparrow or a forest, the more sensitive the system is to this feedback, the more likely it is to maintain integrity, recover from disruption, and thrive in the long term.

    Gregory Bateson, systems theorist and anthropologist, observed that adaptive change—which is survival itself—is impossible without feedback loops, whatever the organism or system. Sometimes that change unfolds slowly, filtered through natural selection. But it also happens in real time, as individuals adjust to experience. When I first encountered this idea in Bateson’s Steps to an Ecology of Mind, it quietly restructured how I understood learning. Learning, I realized, isn’t something that unfolds in the brain, but in the loop, where it emerges as an effect of feedback. A population adjusting to resource limits, a tree directing its roots toward groundwater—these aren’t acts of isolated intelligence. They’re expressions of relationship: patterns being read, limits encountered, responses being shaped. The adjustment, the learning, isn’t something the organism invents; it emerges through its relationship with the conditions it’s embedded in. Bateson didn’t just theorize this loop; he saw it everywhere: in the way animals communicate, in family dynamics, in evolution, even in his own struggle to reconcile science with meaning.

    This learning loop is a universal experience, but for me, as a feedback-sensitive (autistic) person, it feels more immediate, more intense. Of course, that understanding of myself is relational, something that only makes sense as a comparison to other people. And I’ve learned the hard way that this is a very precarious place to argue from. I risk confusion or outright dismissal the moment I try to explain that a sound, a smell, or a minor change is flooding my body with stress, cutting through my thoughts, setting off a physiological alarm. These responses are swift and refuse to be ignored. “Everyone feels that way,” “Nobody likes those things,” or “That’s just life” aren’t helpful words in those moments.

    As a child, I didn’t have the words to make my case. I barely do now. But at ten years old, I hardly knew I even had a case to make. One of the most underrated challenges of explaining a difference that’s more about degree than kind is how people default to their own experiences. Using our own reference points, we assume everyone experiences the world the same way we do. If you don’t like loud sounds, and I seem overwhelmed by one, your assumption is that I simply haven’t been exposed to enough noise, or that I’m “too sensitive.” That I just need to get used to it. Try harder. Toughen up. As an adult, I can mitigate these dismissive assumptions, but they still follow me and they still piss me off. As a child, however, the enormous gap between what I felt to be true and what I was told was unbearable. It wasn’t just confusion—it was a minute-to-minute hell I had no words for.

    Not every system returns the same kind of feedback. And not every setting collapses the loop. When I was seventeen, and not a little inspired by Thoreau, I spent a summer by a remote lake in eastern Ontario. Not in the off-grid house my grandparents had built, but just across the water, alone in a tent, on a quiet wooded slope that backed onto crown land. I packed everything I needed on my mountain bike and rode the hundred or so kilometers from home in a day. This was my version of Walden Pond. I fished for food, gathered wood for the fire, cleared a small trail. I read. I wrote. I woke with the light, slept with the dark, and moved in rhythm with the weather. There was nothing metaphorical about it—I was in relationship.

    There were no social games to decode, no hidden meanings. No buzzing fluorescent lights humming in the ceiling or televisions playing in the background. No sudden shifts in routine. No need for performance. The world around me responded plainly to what I did: when the rain came, I got wet; when I built a fire, I got warm. The system I was inside gave immediate, proportionate feedback. And I adjusted. Not always well. I’m no Thoreau. But faithfully.

    I didn’t have a name for it then. But I read Bateson that summer, tucked into a sleeping bag with a headlamp or sitting on the raft at sunrise, and something in his writing gave shape to what I was living.

    What I was experiencing was coherence. Not just in the sense of quiet or stability, but in the deeper, systemic sense: pattern integrity. The way things fit together and return information that makes sense. That feedback loop didn’t just regulate me. It affirmed my existence. I wasn’t broken, or too much, or not enough. I was inside a system where responsiveness wasn’t something to suppress; it was a quiet necessity.

    That summer changed me…not because it taught me something I didn’t know, but because it stopped contradicting what I already did. My perception, my sensitivity, my reactions, they finally had function. I could feel a difference.

    Reading Bateson gave words to a pattern I was already living inside. He writes that when we say some particular organism survives, we’ve already taken a misstep. It isn’t the organism that survives. The real unit of survival, he argued, is organism-plus-environment. I knew what it meant to be part of a system I couldn’t separate myself from. My behavior wasn’t just coming from inside me. It was part of a loop. A reaction to something. A response to conditions.

    People like to talk as if we’re separate from our surroundings, as if we’re making decisions in a vacuum. But I’ve never experienced that. When the room shifts, I shift. When the pattern changes, I change. Contrary to what cabin-in-the-woods fantasies would have us believe, life next to a lake is no exception—change is constant, and often requires a response. But those changes didn’t feel like threats or tests. They didn’t throw me into a dysregulated state. I was simply in relationship with what was happening around me. I felt the stability of coherent feedback.

    Bateson helped me recognize the shape of my own experience. Here was a loop that I was a part of, rather than trapped within like a caged, disruptive animal, pacing in circles, desperate to make sense of the world outside. He called it a coupled system…two parts shaping and sustaining each other. Not always well. Not always clearly. But inseparably.

    We separate the two (organism and environment) because it helps us think more clearly. But it’s only a framework. And frameworks can lie if you forget they’re not the thing itself. It becomes easy, maybe even inevitable, to try to save one part of the system by overriding the other. But that isn’t intelligence. It’s the system misreading its own conditions. “The creature that wins against its environment destroys itself.”

    Coherence isn’t a solitary achievement. It’s not just mine, or just yours. What makes life possible emerges from relationship, from one part of a living whole responding to the cues and limits of the other, and adjusting behavior in response to this feedback.

    Even at the most fundamental level of physiology, feedback sensitivity is what keeps life stable. Every organism is a dense network of feedback loops, each constantly adjusting temperature, chemistry, and structure to maintain balance, even as the world outside shifts and changes. When my temperature rises, sensors in my brain detect the change, triggering responses like sweating or an increase in blood flow to the skin, cooling me down. A bacterium in a pond does the same, swimming toward nutrients and away from toxins, adjusting in real-time to the environment it encounters. These aren’t metaphors for intelligence—they’re the building blocks of it. Each feedback loop is an expression of life’s most fundamental drive: to stay aligned with the larger pattern.

    Biologists Humberto Maturana and Francisco Varela coined the term autopoiesis to describe how cells sustain themselves through constant feedback. A living cell isn’t just shaped by its environment; it actively engages with it, adjusting its internal chemistry in response to what it perceives. Life, in this sense, is not a static condition, but a never-ending dialogue—an exchange between inside and outside. Fritjof Capra, in The Web of Life, asks us to rethink cognition, not as something locked away in the brain, but as this very dialogue, an ongoing loop of perceiving, responding, and adjusting. Life is that loop. It isn’t just one side of the equation—organism or environment. We use that distinction to frame our sense of self, but really it’s just an abstraction. The truth is, we are the loop. When you say something is alive, what you’re actually describing is its ongoing participation in a dynamic feedback loop. You are not a fixed thing; you are a living, breathing process. An energy flow. And while it sounds ridiculously abstract, it’s the truest way to describe what we usually think of as you. It also happens to be the best explanation for why, for me as a (more) feedback-sensitive person, this fluid sense of self feels more pronounced, an experience of life where that constant adjustment to the world lives much closer to the surface.

  • I Can’t Express my Ideas Properly

    When I write, I either spend too much time explaining things people already know (which frustrates them) or I say things without explaining them properly (which frustrates them). I can never find a happy medium.

    I’ll try to explain what I mean by “feedback sensitivity.”

    When I was diagnosed, I spent months watching the same videos and reading the same books most people watch and read after a late diagnosis. I had the same feelings (probably).

    I wanted to know WHAT MY AUTISM WAS. At its core. People say these particular traits are not really autism–they’re co-morbidities. Ok. Let’s put those aside. Therapists say these particular thought / behavior patterns are the result of layered trauma (i.e. decades of being autistic in a “neurotypical” world). Fine. I can see that. Let’s put those aside, as well.

    What’s left? What’s at the CORE of this label people give me (autistic/ADHD/OCD/etc.)?

    I was left with a pretty short list of what I started to call core traits. Black-and-white thinking, a need for routine and predictability, a need for a certain level of novelty, deep focus, etc.

    But lists don’t do much for me. They never have. They taunt the part of me that needs to reason inductively, to find a larger explanatory model. I wanted to know what was common to all of those traits. Where do they come from? What explains them?

    (Like anyone would, I applied my own existing knowledge, biases, and frameworks to the task. I’m heavy into permaculture, ecology, evolution, anthropology, and a few other fields. These have always been my “special interests,” as the clinical lingo goes.)

    I went down a lot of paths. Some of them were just wrong, and I had to double back. Some led me to the ideas you read about in my book as it stands, but they sounded different at the time. They weren’t completely wrong, but they were juvenile or incomplete.

    For example, I toyed with the idea that I have a DRIVE and a NEED to seek out species-appropriate stimuli and environments (things that are good for humans, in general), and the extent to which I succeed…I’m FINE. The extent to which an environment or stimulus is NOT species-appropriate (not good for people, in general), I’m NOT fine. In fact, the parts of me that were just fine, strengths sometimes, in healthy environments, became disabilities. I still believe this…but I wasn’t happy with “species-appropriate.” Because it didn’t take long for me to realize that what I was talking about were things that were good for ALL forms of life (animals, plants)…not just people.

    In the end, what I found to be common to all the traits on my list was: sensitivity. They were all forms of sensitivity. More or less sensitive to change than a neurotypical person. More sensitive to contradiction. To unpredictability. To sounds and smells. Still with the species-appropriate idea firmly in mind, I felt strongly (and still do) that the change I was overly sensitive to wasn’t a level of change that was good for any person…those people were just somehow less sensitive to it. The same went for contradiction. Contradiction doesn’t benefit anyone…it leads to most of the problems we see on the news. The sounds and smells I was “overly” sensitive to? They were the smells and sounds of activities that are harmful to all people (and all living things, really)…engines, synthetic perfumes, etc. So I’m sensitive to harmful things. But shouldn’t I be? Why isn’t everyone else?

    I started to think about what allows a living thing to succeed in an environment, and what causes it to fail. And I came back to a pretty fundamental principle: an animal succeeds depending on how well it can figure out the rules of a place. The better and faster it can understand the rules of a forest/prairie/pond/etc., and the better it can change its behavior to match those rules, the better it will survive and reproduce.

    Break the “rules” of the forest, and you will be “corrected.” Walk through a patch of poison ivy, and you’ll be in discomfort for a week. Go out at the wrong time of day, and you’ll be eaten alive by mosquitoes. These “corrections” the forest is giving you are known as feedback. It’s sort of a strange thing to say because we think of “feedback” as something a person gives to you. Something that’s given to you on purpose. But in ecology, the consequences of your actions in a certain ecosystem are just that: feedback / correction.

    So, on the whole, the more sensitive you are to that feedback, the better you’ll survive and reproduce. The better you can read signals and adjust your behavior by them, the more success you’ll have. It’s evolution 101.
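    That “evolution 101” point can be shown with a toy model (my sketch, with invented numbers, not anything from the book): two agents track a slowly drifting environmental signal, and the one with higher feedback sensitivity stays closer to the conditions it lives in.

    ```python
    # Toy model: an agent tracks a drifting environmental signal.
    # A higher gain means more sensitivity to the "corrections" the
    # environment returns; all numbers are invented for illustration.

    def tracking_error(gain, steps=200):
        env, agent, total_error = 0.0, 0.0, 0.0
        for _ in range(steps):
            env += 0.1                  # the environment slowly drifts
            feedback = env - agent      # the "correction" the world returns
            agent += gain * feedback    # adjust behavior by that feedback
            total_error += abs(feedback)
        return total_error

    sensitive = tracking_error(gain=0.8)
    insensitive = tracking_error(gain=0.1)
    print(sensitive < insensitive)  # True: the feedback-sensitive agent drifts less
    ```

    The sensitive agent accumulates far less error over the run; the insensitive one lags the world at every step, which is the “drift” the section above describes.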

    With that in mind…I came back to my experience in the world as an autistic person. I’d established (in my mind, anyway) that my level of sensitivity is the right level of sensitivity for a living thing. I didn’t have to come up with hypothetical scenarios to prove this to myself, I spent a lot of my early years at my uncle’s or grandfather’s…remote off-grid places where I just…lived.

    But here in this place….I AM dysfunctional. It doesn’t just “feel” like I’m dysfunctional…I really am. And it’s that trait, that feedback sensitivity, that is doing the disabling. But that’s ridiculous, isn’t it? How could THE trait most responsible for a living thing’s success lead to disability?

    So I tried flipping the narrative. What if it’s the place? What if every single one of my core traits is really just an indicator of what’s wrong with this place? “Deep” focus? There’s nothing deep about my focus when I’m in the woods. It’s just focus. “Black-and-white” thinking? In nature? Are you kidding? That’s the only kind of thinking there is. Something is either true or it isn’t.

    I knew that this wasn’t a very nuanced argument. I knew there were holes. I knew that it was based on my own particular autism, my own particular need for supports, etc.

    BUT….core traits, right? CO-morbidities, right? Trauma, right? These are NOT autism. They’re either something that occurs with it (they can occur in people who are not autistic) or something that is the result of my “autism,” that core trait of feedback sensitivity, playing for a long time in a very dirty sandbox.

    I hope this helps someone, somewhere.

  • My Abyss

    My father lived in a dark place most of the time. It was deeply uncomfortable to be around. He’d rant and spiral, consumed by things that felt wrong to him, things he couldn’t let go of. The world became an enemy in his eyes. He raged outward, with a kind of schizophrenic intensity. The air was thick with it.

    He would obsess over some perceived injustice or corruption and inflate it beyond recognition. He’d talk about it for weeks. He couldn’t stop. And what might have started from something real would get buried under the weight of his fury. It got ugly. He was ugly. In the end, it looked like nothing but rage…a need to be right.

    That’s probably me now.

    I feel the same storm building. The same fixation. The same alienation. I walk around already knowing the look people get when they start to back away. I see it. And when I get “like this,” the only thing that’s ever let me forgive myself for being so awful to be around is the belief that what I’m working on matters. That it has to be done. But on the days when I lose hold of that belief, days like today, I just feel monstrous. And ridiculous. A negative force, making everything I touch worse.

    What if I’m not fighting the madness I think I am? What if I am the madness? What if this moment, the one where I think I’m beginning to understand, is actually the total loss of my grip on what’s real?

    I truly met my father when he was already twenty years older than I am now. I don’t know what he was like at my age. Maybe he was nothing like how I knew him. He might’ve been more functional than I am now. More self-aware. Maybe I’m falling faster. I always have this version of him in my mind…unhinged, over-the-top, shouting…and I swore I wouldn’t become that. But that wasn’t who he always was, was it? Nobody is born like that. He was like me once, believing he still had all the time in the world.

    Sometimes I think I’m running the same race he lost.

    I’ve spent a lifetime waiting for someone to really see what’s inside me. Not in a vague “I believe in you” kind of way, but someone with the understanding and the means to give me time. Breathing room. A protected space to develop the thing that keeps flickering inside me. Not a free ride. Not praise. Just time. Space. It’s a childish fantasy. I know that. But I’ve spent a lifetime waiting for those people anyway.

    And some days I’m sure there is no such person. That I’m in a world of one, like my father, and that my ideas only make sense there. Only make sense to me.

    Today, I feel rage. Toward myself. Toward the world. I’m disgusted with how seriously I take myself. But I’m still angry at everyone else for not taking seriously the things I see. People mowing 40 million acres of lawn, stupid or demented…I honestly don’t know which. As if nothing ever gets through. A mirror has been held up a million times, a much better mirror than I could ever hold up, and they just keep brushing their hair in front of it.

    Confusingly, I feel a lot of rage toward autistic people online. I’m ashamed and embarrassed to admit this, but I feel abandoned. I pour myself into something, try to name what I think we’re really feeling…something deeper than just day-to-day frustration or sensory overload…and I watch it get buried. No replies. No spark of recognition. Just more talk about dating and work anxiety and video games. Or I get torn apart. “So you’re saying [strawman argument]” (followed by 37 replies equally outraged by that particular false interpretation of my thoughts). I feel rage, not because I don’t care about them, but because I need someone to say, this is it. This is what I’ve been trying to say, too.

    Instead, I feel like a freak. Screaming into a void.

    It makes me feel ridiculous. Like maybe this is just a blown-out-of-proportion hyperfixation, after all. Like maybe all of this…the thinking, the writing, the physical stress…is just some “autistic loop” with an inflated sense of importance. And I feel so, so ugly. For my parents. For my partner. For anyone close. And I wonder, no I scream…WHAT IS IT ALL FOR?! What exactly do I think I’ve earned? What exactly do I think I deserve?

    Because by society’s standards, I’ve gotten exactly what I deserve. Nothing more. Nothing less. And everything I gave…every piece of myself I tore out and offered…it looks like less than nothing. Just another strange, intense person with grandiose ideas and no ground beneath them.

    Sometimes I think I’m brilliant. But I also think I’m trivial. Laughable. I don’t trust my reality. Not at all. I keep waiting for confirmation. Not from a crowd. Just from someone. Someone who can say, without hesitation, you’re not insane.

    Because I’m fucking terrified.

    Not that I’ll fail, but that I’ll become twisted beyond recognition long before I can save myself. That I’ll lose the thread entirely and end up in some permanent shape the world finds repulsive or sad or best hidden. And that the world will come for me. That it will come for my masks. For debts owed. What will those people find? Something unable to defend itself. Unable to explain itself.

    I don’t want to be that.
    I don’t want to be alone in that.
    I want someone to see me, not as a burden, not as a cautionary tale like my father, but as someone worth helping before it’s too late.
    And I don’t even know if that’s possible.

  • Overstimulated by Bullshit

    I’m working on a section that explores neurodivergence and artificial reward systems. I’m looking at how modern society’s “treats” affect neurodivergent people…especially compared to neurotypical peers, who function as a control group.

    You just don’t want to shower.
    You just don’t want to stop drinking.
    You just want to scroll, play video games, snack, sleep in, give up.
    You don’t want responsibility. You want excuses.
    You’re not “neurodivergent.” You’re just impulsive. Lazy. Weak.
    Grow up.

    That’s by far the loudest voice in my head.

    For years, I’ve tried to hide the fact that I can’t tolerate environments, stimuli, contradictions, etc. that others seem fine with. But I’ve also had to hide what seems to be an inability to resist what others do. I can’t have games on my phone without playing them excessively. I can’t have junk food in the house without eating myself sick. So I don’t have either. I have to keep the phone game-free and the fridge can only have whole foods. It’s embarrassing to admit. And this feeling isn’t a hindsight sort of thing. I feel it RIGHT NOW. Being overwhelmed by modern society’s excesses will probably ALWAYS feel like a personal moral failure to me (no matter how often I tell myself it might be something else).

    What makes me special? Why wouldn’t people assume when I say I’m autistic or ADHD, that I’m trying to cash in on some behavior lottery…one that gets me out of doing things no one really wants to do, and grants me freedom to do whatever the hell I want?

    If that’s how you see me, “Nice try, asshole,” is probably the correct response.

    My own particular mask doesn’t help…the one I’ve worn most for the past ten years or so. It could best be described as “interesting redneck.” A bit of me peeked out, of course. The permaculture methods I like to use on my property. The odd opinion I shared…on how nice it was to have deer in my fields again (during Covid lockdowns), for example. Or repeating (a little too often) how grating the sound of the increased traffic on my road is. But by and large, I masked as what you would expect to find in a middle-aged man in a rural area. Work hard, play hard, don’t give me excuses, and all that bullshit.

    My diagnosis was like a chair to the head for that mask. None of the literature I was reading, none of the data I was seeing, could possibly allow it to survive. It didn’t just get heavy…it was putrid. It reeked of stupidity, and I knew I’d never be able to pick it up again, let alone put it on. The same proved to be true of all my masks. The studies, books, and data exposed them all for what they were. 

    I’d convinced myself, but how could I convince others? Put aside the fact that I’ve never been good at that. Let’s say, for a moment, that I was somehow able to articulate myself in a way that would cause people to listen. Well, even if I managed to quell the straw-man argument hell I was opening myself to (“What the hell are you on about? My 5-year-old autistic son has yet to speak a word. He needs help getting dressed. And you’re trying to sell me the idea that autism is some sort of biological advantage? Fuck you.”), anyone with an (indoctrinated) brain in their head isn’t going to then listen to me explain how my not taking a shower, or having a beer at 9 in the morning, might not be purely a personal failing. These are big bloody obstacles. The feedback I got from the few people I shared my ideas with was nothing but confirmation.

    I knew I would need an overwhelming amount of data to have even the slimmest chance of reaching a mere fraction of the most open-minded readers.

    I found it.

    I didn’t just find it…I found it with ease. (The comparative studies are everywhere. Meta-analyses. National surveys. Neuroimaging. Behavior data. It’s not subtle.)

    It needed minimal organization. It formed its own framework. And for someone like me, that’s…sheer ecstasy. An explanatory model that not only survived months of scrutiny, but instantly encompassed my hunches, my experiences, and my conclusions? How often does that happen, really? I’m a bottom-up thinker, an inductive thinker; my very nature precludes the possibility of cherry-picking data for a theory, no matter how attached I am to it. Devil’s advocate isn’t one voice among many in my head…it is the voice. I can’t “let things go.” That isn’t a flex…it’s just the way I am (and it gets me into all sorts of shit). But this research was turnkey. It formed its own coherent argument. One that made me physically excited. Happy-dance-flushed-stimmy excited.

    I’ve known for a long time that modern civilization doesn’t run on real signals. It runs on engineered superstimuli—“food” that’s sweeter than food, screens that flicker faster than your brain evolved to track, validation loops designed to mimic love, stimulation, and safety. In 2025, everyone knows that, really. It’s common knowledge—almost trite. And for most people, not a minority, these things are hard to resist. But for some of us, it borders on impossible.

    My experience isn’t a story of addiction or lack of willpower. It’s a story about susceptibility. The susceptibility of a feedback-sensitive brain to systems that were built to extract something from it. Clicks. Likes. Data. Energy. Money.

    Let’s be clear: not all of this is about chasing pleasure. Sometimes, it comes from avoiding pain. The sensory chaos of a grocery store. The moral incoherence of workplace small talk. The emotional friction of living in a world that doesn’t return clean, proportionate feedback. Many neurodivergent people withdraw from that world…not because we’re lazy or disinterested, but because it costs too much (neurologically) to stay in it. But withdrawal comes with its own costs. You’re not going to the farmer’s market. You’re not joining the running club. You’re not cooking a family meal. But you seek what you need (quiet, stimulation, reward) somewhere. And modern society is more than happy to offer it: in bags, in bottles, on screens.

    Still, that’s not the core argument here. Avoidance doesn’t explain how precisely these systems seem to exploit my wiring.

    This isn’t just about being boxed in by circumstance. It’s about how the system itself is built. It’s about the intensity of the signals, the distortion of natural feedback, the way those signals strike differently in the more sensitive among us. It’s about the fact that even when the external stressors are removed, the engineered signals often still hit harder, register deeper, and dysregulate faster.

    It’s about what happens when a feedback-sensitive person is exposed to artificial reward systems.

    Do you know what happens?

    When the signals get too loud for a feedback-sensitive brain to filter or resist?

    28% of adults with ADHD are obese. That’s not about chips being available. That’s about chips being formulated…saltier, fattier, more dopamine-releasing than anything in the ancestral record. The general-population average? 16%. This is a feedback-sensitive brain lighting up “more,” doing its job. It doesn’t let go.

    Children with autism? 41-58% more likely to be obese than neurotypical peers. Are they less able to comprehend what is healthy? Do they have less willpower? Are their parents less caring or strict? Or is it because engineered food is built to override satiety? To turn feedback sensitivity against itself?

    25-37% of teens with ADHD meet clinical criteria for internet gaming disorder. Not “likes games.” Disorder. Autistic children? 3.3 hours of screen use per day vs 0.9 for neurotypical peers. Autistic adults? Statistically higher scores on gaming addiction tests (9% above clinical thresholds). Why? Structured environments. Rules. The possibility of mastery. Variable-ratio reward schedules. Sensory immersion. Linear feedback. It’s everything a feedback-hungry person wants. These are conditions they’re starving for…rarely present in that place we now call the real world.

    Social media hits harder too. Each like, each comment, each notification…engineered to simulate social connection. For ADHD, it becomes a loop. For autism, it becomes a need. These are two sides of the feedback-sensitive coin. Both are pulled deeper, faster, and stay longer.

    Pornography? Another biological drive hacked: reproduction, bonding, pleasure. But louder. Faster. On-demand. Zero ambiguity. Anyone might get addicted. But for ADHD brains (for a feedback-sensitive person living in a system that lacks biologically-significant novelty), it’s dopamine on tap. For some autistic people (feedback sensitivity in a system that’s full of distorted signals and contradiction), it becomes a ritual. Not because of what it is, necessarily (pornography), but because of how it behaves as a signal.

    Substances? The brakes and accelerators we use to reshape society’s feedback into something comprehensible, or at least dull it? 23% of people with ADHD have a co-occurring SUD. Autistic adults are nearly 9 times more likely to use recreational drugs to cope with the consequences of distorted feedback (anxiety, sensory overload).

    Compulsive shopping, binge-watching, substance abuse, overuse of screens: same pattern. Not lack of restraint. Not moral decay. Signal distortion.

    These systems engineer signals based on how the human brain picks up and processes information. They’re not bloody well accidental. They’re designed to strike the nervous system where it’s most receptive. They’re practically a case study in human feedback sensitivity (funded by consumer/tax dollars).

    The more sensitive the person is to feedback, the better these signals “work.” It isn’t complicated. So why? Why is it contentious to say these things? Why, despite everything, do labels of dysfunction continue to accumulate on this side of the equation?

    At this rate, we’ll need to expand the English language. The words don’t exist yet for the number of labels we’ll need. Because this is the gradual pathologization of life itself.  

  • Transitions SHOULD be hard (in this place)

    I’ve had a problem with transitions my whole life. Bed to shower, shower to kitchen, reading to greeting guests, greeting guests to mowing the lawn…it’s always a fucking battle with myself. When I was diagnosed, I was told what I already knew: “You’re bad with transitions.” You overreact (I do). You shut down, or get stuck, or blow up at things that seem easy for everyone else (I do). I was given new words. Cognitive inflexibility. Behavioral rigidity. Insistence on sameness. Resistance to change. Perseverative behavior. Pathological demand avoidance. Dependence. Delay. Resistance. Slow. Poor. Intolerance. Difficulty. Rigid. Distress. Impaired.

    OK, so I’m clearly not cut out for life.

    But is it life?

    In a coherent system, the one we evolved in, transitions aren’t hard. Not because organisms there are tougher or more flexible, but because the transitions themselves make sense. They’re part of that system. Seasonal shifts. Puberty. Grief. Rest. They don’t happen suddenly or without warning. They come with cues. Physical cues. Environmental cues. Even social cues.

    Here’s the thing: organisms from those systems don’t “adapt” to the timing of transitions. They’re formed by them. There’s no gap between the system and the self. The rhythm outside becomes the rhythm inside. I don’t just endure spring. Don’t be ridiculous. I become the kind of creature that responds to spring. I don’t “handle” hunger. I become hungry. In a place that makes sense, that leads to finding or growing food. The feeling arises with purpose, and the transition it asks of me (movement, focus, effort) is supported by everything around me. I’m doing what I’m meant to, when I’m meant to.

    That’s what real feedback does. It shapes you as it informs you. And when something in the environment changes (something biologically real) a feedback-sensitive person picks that up fast. They change in response. And they change quickly, and they change well. In step with what’s actually happening.

    That’s feedback sensitivity: the degree to which your behavior maps to signal. That’s what makes an organism adaptive. That’s what makes a person adaptive. Not just quick to change, but able to change in a way that fits what’s real.

    It’s not a side trait or a quirk. It’s the foundational condition beneath every other trait we call adaptive. Learning? Downstream. Flexibility? Downstream. Even thought (real thought) starts with the ability to pick up on what’s true, and respond.

    That’s what makes it so fucking painful to live in a system where most signals don’t mean anything.

    Modern civilization is full of transitions, but they aren’t tied to any real need. They aren’t about my body, or the land, or the seasons. They’re constructed. I move from one grade to another. One job to another. One building, platform, device, account to another. One activity of questionable importance to another. It’s not that my life changes…it’s this weird environment demanding I act as if it has.

    I try to keep up. Because I’m still wired for signal. I still think transitions mean something. But they don’t anymore. They’re non-referential. They point to nothing. They’re fast, constant, and nearly always disconnected from any ecological pattern or how ready I am. And the more I try to track them, the more exhausted I get. Because I’m not supposed to track that kind of noise. I was never meant to.

    Modern civilization doesn’t create real transitions. It just repartitions reality…chops it into convenient segments that suit its own internal logic. It rearranges things for the sake of efficiency, not coherence. It runs on deadlines, not seasons. Bureaucracy, not biology. And when its logic starts to fail (it usually does), it doesn’t get corrected by feedback. It distorts or severs the feedback loops that would normally force it to change. So I get corrected instead. I get labeled.

    It builds itself on top of the coherent system (the real one: biological reality)…but increasingly in defiance of it.

    And then it calls me broken when I struggle.

    But let’s be honest: struggling to move from one meaningless task to another, from one harmful environment to another, should be difficult. Struggling to shift from something that matters to something that doesn’t…that should be hard. If it’s not, that’s not a sign of health. That’s a sign that something inside has gone quiet. That feedback sensitivity (the thing that tells you what fits, what hurts, what’s true) has been pushed down so many times it stops trying to speak.

    We live in a place that celebrates that. It calls it resilience. Social intelligence. Professionalism. Maturity. But more often than not, it’s just the absence of protest. A learned silence.

    Here’s a deeper layer: over time, humans selected themselves for exactly that. Not for sensitivity to truth, but for compliance. For docility. For the ability to tolerate contradiction without protest. That’s self-domestication. It’s what lets people smile while a system collapses around them. What lets them adapt to noise, to simulation, to systems that reward pretending more than perceiving.

    And that’s not a knock on anyone…it’s just what systems like this select for. If I can’t seem to get on board with that, I’m pathologized. Called inflexible. Dramatic. Disordered. And those are all accurate descriptions of me in places like that.

    But is difficulty with incoherence really dysfunction? Isn’t it the thread of something real?

    I can handle change. I can’t ignore when a change isn’t grounded in reality. When a signal doesn’t match a truth. When the transition isn’t tied to anything that matters. My whole system lights up. I think maybe it’s supposed to.