-
Is there such a thing as a “baseline human?”
I describe the configuration of the human nervous system known as “neurotypical” as divergent from an adaptive baseline. But is there such a thing as a “baseline” human? A “baseline” wolf? After all, every organism is the result of ongoing evolution. Am I just comparing one phase of adaptation to another?
If I were talking about evolutionary drift, or ecological selection within an intact system, then yes…I’d be fucking up. But the civilizing / domesticating process isn’t that.
Domestication is artificial selection, not natural selection. In wild systems, traits are selected by feedback…what works, persists. In domesticated systems, traits are selected by suppression…what submits, survives. That’s a forced bottleneck, not an evolutionary trajectory. A wolf doesn’t become a dog by evolving, but by being confined, starved, bred, and rewarded into compliance. Same with us.
And I’m comparing different conditions, not forms. This isn’t wolf vs. dog, or Paleolithic vs. modern human…it’s organism regulated by coherent feedback loops vs. organism surviving in a distorted, feedback-inverted environment. This isn’t some kind of nostalgia for prehistory…it’s about system integrity.
It’s laughable that we live in a “world” where we have to be reminded that there is a functional baseline…you could call it feedback coherence, I guess. Coherent behavior is maintained through timely, proportionate, meaningful feedback. That’s the baseline…it’s a system condition (not a species). When a system becomes functionally closed, symbolically governed, and/or predictively trapped, it loses that baseline (even if it survives in the short term).
You might respond that evolution got us here. But evolutionary processes don’t “justify” maladaptive systems. Saying there’s no baseline is a post hoc rationalization for harm. And I hear that all the time. People justifying obesity in dogs because it’s common in the breed. Or calling office work “adaptive” because it pays well. Or saying modern humans are just “evolved” for abstraction and control…even as the world burns and mental illness becomes the new norm.
Evolution doesn’t care about health or coherence. It simply tracks what survives. But feedback is what sustains life, and it’s being severed.
Ask yourself: what is selected for in society, as you know it? If you had to name one thing? Honesty? Hard work? Ambition?
I think it’s compliance. I think the civilizing/domesticating process replaces selection for survival with selection for compliance.
Let’s look at wild systems first. There, the selection pressure is for ecological coherence. Traits are favored because they enhance survival in a feedback-rich environment (keen senses, strong affective bonds, situational learning, pattern recognition, adaptability). An organism has to remain in sync with reality, or it dies.
But in civilized systems, it’s easy to see that traits are favored because they enable success within an artificial, abstracted system (obedience, rule-following, role performance, suppression of emotion and instinct). You have to fit the symbolic structure, or you’re punished, excluded, pathologized, or discarded.
It sucks because what was adaptive (sensitivity, integrity, etc.) is maladaptive in this odd place we call “civilization.” And what was dangerous (passivity, abstraction, dissociation) is rewarded.
Think: selecting for people who can function without reality (instead of people who thrive in it).
It’s not far-fetched. At all. Sickly animals that can’t survive in the wild. Office workers who ignore chronic pain and emotional numbness (and get promoted). An entire species driving itself toward collapse while calling it “progress.”
This whole trainwreck we’re on is a case of runaway selection, but instead of selecting for extravagant traits like peacock feathers, it selects for compliance with abstraction and resilience to incoherence. And like all runaway selection processes, it becomes self-reinforcing, decoupled from reality, and ultimately self-destructive.
Don’t believe me? Let’s track it.
Quick review of the basic concept. In biology, runaway selection occurs when a trait is favored so intensely within a closed feedback loop (e.g. mate choice, social signaling) that it amplifies beyond functional limits (it doesn’t serve survival anymore…it just signals compatibility with the system).
Peacocks grow huge, draggy tails because other peacocks think it means they’re fit (not because it helps them survive). Humans undergo surgeries, wear restrictive clothes, or starve themselves for “attractiveness” under runaway cultural ideals. Same dynamic. And civilizations grow more complex, abstract, and self-referential not because it’s sustainable, but because “complexity” signals legitimacy and control.
Let’s run through it again.
Civilization creates a system (think classrooms, corporations, governments) where success depends on suppressing natural feedback. Then it rewards those most tolerant of abstraction, delay, hierarchy, and contradiction. This filters out feedback-sensitive traits. That keeps happening until the system becomes so self-referential that it can’t correct course anymore…it’s bred out the ability to perceive correction.
So it’s a runaway selection for dissociation. For the kind of human who can survive it (even if it clearly can’t survive the world).
Like all runaway systems, the trait (in this case, compliance) accumulates beyond adaptive range. The system grows more fragile and less correctable. Feedback from the real world becomes too painful or too late. And collapse happens from the inability to stop succeeding at being disconnected (not from a single failure).
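If you want to watch that dynamic instead of taking my word for it, here’s a toy simulation (Python; the population size, fitness rule, mutation rate, and collapse threshold are all invented for illustration, not drawn from any data). A system rewards insensitivity to feedback, the trait spreads, and uncorrected error quietly accumulates until it outruns the system’s ability to notice it.

```python
# Toy model of runaway selection for compliance. Every parameter here
# is an assumption chosen for illustration, not an empirical claim.
import random

random.seed(42)

POP = 500           # population size
GENS = 80           # generations to simulate
COLLAPSE_AT = 40.0  # arbitrary threshold of accumulated, uncorrected error

# Each agent is reduced to a single trait: feedback sensitivity in [0, 1].
pop = [random.random() for _ in range(POP)]
accumulated_error = 0.0

for gen in range(GENS):
    # The system rewards insensitivity: fitness = 1 - sensitivity.
    weights = [1.0 - s for s in pop]
    # Offspring inherit a parent's trait (chosen by fitness) plus small mutation.
    pop = [
        min(1.0, max(0.0, random.choices(pop, weights)[0] + random.gauss(0, 0.02)))
        for _ in range(POP)
    ]
    mean_s = sum(pop) / POP
    # Error the population no longer perceives accumulates each generation.
    accumulated_error += 1.0 - mean_s
    if gen % 10 == 0:
        print(f"gen {gen:2d}  mean sensitivity {mean_s:.2f}  error {accumulated_error:5.1f}")
    if accumulated_error > COLLAPSE_AT:
        print(f"collapse at gen {gen}: error outran the system's ability to perceive it")
        break
```

The numbers don’t matter…the shape does. Selection never “fails” at any step, and that’s exactly the problem.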
We’re not evolving.
We’re overfitting. Civilization is a runaway selection loop for traits that thrive in unreality.
And the “neurotypical” configuration is a collection of those traits. It’s not a neutral or natural norm…it’s a phenotypic outcome of this runaway selection.
A configuration that is tolerant of contradiction (doesn’t break down where reality and narrative diverge). That is emotionally buffered (can perform even when distressed). That is low in sensory vigilance (can endure loud offices, artificial lights, social facades). That is socially adaptive (mirrors norms, infers expectations, suppresses authenticity). That complies with rules even when rules are nonsensical. That’s able to delay gratification, ignore bodily needs, and maintain appearances.
I’m not saying these traits are bad per se…but I think we can all agree that they’re not the “baseline human.” They’re the domesticated phenotype, selected over generations to survive in systems where truth no longer matters.
And, of course, the more a system rewards these traits, the more they proliferate (socially, genetically, culturally). It becomes harder for feedback-sensitive individuals to survive. Reality has to be increasingly suppressed to preserve the illusion of normalcy. Eventually, the only people who appear “well-adjusted” are the ones most disconnected from feedback…and the entire system becomes incapable of detecting its own failure. That’s the endpoint of runaway selection.
I have a hard time with the dominant narrative…that the neurotypical profile is some kind of gold standard of human functioning. To me, it’s clearly the domesticated outcome of a system that rewards compliance (and “stability,” such as it is) over coherence or contact with reality.
* When I say “neurotypical,” it’s not meant as some kind of medical category. I think of it as the cognitive-behavioral phenotype most rewarded by civilization (modern society, yes, but also throughout the history of civilization). I don’t see it as a person. Not every “neurotypical person” fits this mold. I’m almost certain no one fits it perfectly. I’m describing a directional pressure, not a binary condition. And it isn’t “bad.” It’s simply optimized for the wrong environment (one that destroys life). Neurotypicality isn’t unnatural…it’s civilizationally adaptive (in a system that’s maladaptive to life).
-
The Civilizing Process IS Domestication
Domestication is the process by which organisms are selectively shaped to be compliant, predictable, and dependent on human-controlled environments…often at the cost of sensory acuity, autonomy, and ecological fitness.
Civilization is the expansion of symbolic control over individuals and groups through norms, rules, abstraction, and institutions…suppressing direct feedback, internal regulation, and spontaneous behavior in favor of obedience and symbolic order.
They’re one and the same.
They both suppress feedback sensitivity. (To control an organism or a population, you have to prevent it from reacting authentically to harm, injustice, or incoherence.)
They both favor neoteny. (Juvenile traits like compliance, passivity, and external regulation are selected and extended into adulthood.)
They both shift behavior from function to performance. (The wild animal hunts; the domesticated animal waits. The wild human responds; the civilized human performs.)
They both create dependence. (On artificial systems…pens, laws, currencies, screens…rather than ecological loops.)
They both sever feedback loops. (To domesticate is to disable the plant’s / animal’s relationship with “wild” cues. To civilize is to disable the human’s relationship with embodied, emotional, and ecological reality.)
Domestication is the biological manifestation of the civilizing process, and civilization is domestication scaled, abstracted, and systematized. This isn’t metaphor…they’re identical. Different names for the same thing.
So what?
- What we call “progress” is maladaptation. If civilization selects against feedback-sensitive traits, then most hallmarks of progress (obedience, emotional detachment, performance under duress) aren’t improvements. They’re symptoms of ecological and cognitive degradation.
- “Neurotypical” is a pathology of fit. In other words, the “typical” mind in civilization is one that fits a feedback-suppressed system…not one that is healthy or coherent. What we call “mental health” is largely the ability to suppress warning signals.
- Collapse is the endpoint. A system that inverts feedback can’t self-correct. It accumulates error until it fails catastrophically. Collapse isn’t a failure of civilization…it’s its logical endpoint.
- Modern humans aren’t baseline humans. Just as dogs aren’t wolves, modern humans aren’t the baseline human phenotype. We’re shaped by millennia of selection for compliance, abstraction, emotional control, and symbolic performance.
- Resistance to this process (civilization / domestication) is a biological signal. Individuals who resist conformity, abstraction, or symbolic authority aren’t broken…they’re retaining functional traits that no longer fit the dominant system. Autism, ADHD, sensitivity, oppositionality, and “mental illness” often represent intact feedback systems in an inverted environment.
What are the real products of civilization? Not culture, but civilization?
We have some intentional products (ones designed to enforce control):
- Laws / punishment systems (enforce behavior abstracted from context or consequence)
- Religions of obedience (codify submission and moralize hierarchy)
- Schooling (standardizes cognition and behavior to serve symbolic roles)
- Currencies / bureaucracies (replace direct reciprocity with quantifiable abstraction)
- Surveillance (ensures conformity without requiring local trust or co-regulation)
- Cages / fences / walls / uniforms / schedules (tools to overwrite instinct)
And we have some inadvertent ones (usually denied or pathologized):
- Mental illness epidemics (result from prolonged feedback suppression and coerced performance)
- Chronic disease (where natural regulation is replaced by artificial inputs)
- Addiction (coping mechanism for living in a system where natural pleasure and feedback loops are severed)
- Anxiety and control-seeking (nothing is safe, responsive, or coherent)
- Loneliness / alienation (loss of meaningful co-regulation and mutual reliance)
- Ecological destruction (consequences are insulated against)
- Pathologization of feedback-sensitive people (framing coherence-seeking organisms as dysfunctional because they can’t / won’t adapt to incoherence)
-
What’s the “civilizing process,” really?
Here it is, real fucking simple:
- A need for control emerges. Scarcity, fear, or hierarchy drives the impulse to command others, landscapes, or outcomes. Control is applied against natural feedback (hunger, resistance, erosion, protest).
- To sustain that control, you need to suppress feedback. Deny pain. Punish dissent. Delay ecological consequences. All kinds of symbolic systems (laws, gods, spreadsheets) start replacing direct experience. Well done!
- Accomplish full feedback inversion. You need people to internalize the system. You need them to override their bodies, emotions, and instincts to fit the abstraction. The more disconnected they become, the more compliant they’ll be.
- Reward feedback-insensitive traits…compliance, social performance, dulled cognition (dumbness), emotional suppression. These traits reproduce the conditions for control.
- Expand control. New territories. More refined hierarchies. Tighter schedules. Now you’ll see new forms of feedback inversion emerge (currency, bureaucracy, digital metrics).
- Repeat. Spiral. Expand.
What is this loop? It’s civilization!! It’s domestication in both human and nonhuman systems.
Civilization is a positive feedback loop between control and feedback inversion. Don’t be fooled by the shiny poetry and porta potties…it’s:
- hierarchy
- standardization
- surveillance
- predictive modeling
- coercive compliance
- ecological collapse
…and it punishes:
- sensitivity
- divergence
- embodiment
- ecological attunement
- spontaneity
- truth
Civilization isn’t just TikTok and Trump…it’s so much less than that. It’s cancer. It’s entropy. It’s a runaway control-feedback inversion loop that selects against reality-responsive traits.
And domestication is that loop embodied…in animals, plants, and people.
-
Feedback Inversion
The way domesticated humans and animals diverge from their wild counterparts isn’t random…it follows a predictable systems pattern that has analogues in ecology, cybernetics, even thermodynamics.
What is it? What’s the key transformation?
The organism shifts away from being regulated by feedback to being regulated despite it.
That’s what domestication does (in animals or humans). It removes or blunts the organism’s natural ability to respond to environmental signals, and replaces that responsiveness with compliance to an imposed system. And the divergence unfolds along a bunch of predictable dimensions…
Cognitive Shift (From Adaptation to Control)
Wild mind: constantly updating based on local, real-time feedback
Domesticated mind: defers to rules, roles, or authority (even when they contradict experience)
Behavioral Shift (From Function to Performance)
Wild behavior: serves a real purpose (find food, avoid danger, bond)
Domesticated behavior: serves a symbolic or imposed role (obedience, etiquette, branding)
(In cybernetics, this resembles a loss of negative feedback…the system stops adjusting based on outcome, and instead preserves form through positive feedback, locking in behavior.)
Sensory Shift (From Vigilance to Tolerance)
Wild senses: alert, acute, tuned to survival-relevant input
Domesticated senses: dulled, filtered, or overridden to tolerate noise, confinement, social overload
Affective Shift (From Co-regulation to Suppression)
Wild emotions: socially functional, tied to reality
Domesticated emotions: repressed, misdirected, or disconnected from actual stimuli (chronic anxiety, performative joy)
Structural Shift (From Efficiency to Excess)
Wild bodies: lean, efficient, stress-adapted
Domesticated bodies: neotenous (juvenile traits), prone to disease, dependent on infrastructure
So what’s going on in this domestication process? Particularly in human behavior?
You could call it feedback inversion. A systemic reversal of the role of feedback…from a guide to coherence to a threat to be suppressed, ignored, or distorted.
And I’d argue that the domesticated (“neurotypical”) human mind is a product of feedback inversion…trained to override bodily, sensory, and ecological signals in favor of symbolic, delayed, or externally enforced rules.
Let’s track this.
Control comes first.
- A group (or system) seeks to stabilize its environment, secure resources, prevent loss, dominate others, etc. This is an impulse that demands predictability and reduced uncertainty.
- And to exert control, you have to ignore certain inconvenient signals. The hunger of others. The pain of subordinates. The ecological damage you’re causing. Your own body’s needs. In other words, you begin inverting feedback. You treat reality’s signals as noise.
- Once you have symbolic systems (laws, money, ideologies) in place to maintain control, they begin rewarding those who suppress feedback and punishing those who respond to it. Now we have a positive feedback loop. The more control you assert, the more feedback you need to ignore. And the more feedback you ignore, the more “brittle” your control becomes…so you assert even more.
- Over time, the system selects for feedback-insensitive participants. Now control isn’t just enforced…it’s embodied. Now feedback sensitivity looks like deviance.
Once embedded, feedback inversion maintains control by filtering out any kind of destabilizing truth, prevents course correction, and confers survival advantage on the most disconnected people (until the system crashes). It starts as a tool of control but becomes a systemic pathology.
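That loop is simple enough to sketch in a few lines. This is a minimal toy (the update rules and constants are pure assumption); the only thing it’s meant to show is the self-reinforcing shape.

```python
# The control / feedback-inversion loop above, as a toy system.
# All constants are assumptions; only the runaway shape matters.
control = 1.0      # how hard the system tries to command outcomes
sensitivity = 1.0  # how much real-world signal still gets through
brittleness = 0.0  # accumulated, uncorrected error

for step in range(1, 41):
    ignored = control * (1.0 - sensitivity)  # more control, more signal treated as noise
    brittleness += ignored                   # unheard feedback piles up as fragility
    control += 0.05 * brittleness            # fragility is answered with MORE control
    sensitivity = max(0.0, sensitivity - 0.05 * control)  # control selects sensitivity out
    if step % 4 == 0:
        print(f"step {step:2d}  control {control:6.2f}  sensitivity {sensitivity:4.2f}  "
              f"brittleness {brittleness:7.2f}")
    if brittleness > 25:
        print("the loop can no longer register correction…runaway complete")
        break
```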
-
What is “neurotypical” across living systems?
A nervous system is neurotypical when it supports survival by maintaining coherence within the environment through feedback.
This includes:
- Sensory Integrity
  - Acute responsiveness to relevant environmental cues
  - Filtering of irrelevant noise (but not suppression of important input)
  - Modulation (not dulling)
- Predictive Flexibility
  - The ability to form and update internal models based on real consequences.
  - Resistance to rigid schemas when contradicted by experience.
  - Local, bottom-up learning prioritized over top-down imposition.
- Emotional Fidelity
  - Emotions arise in response to actual relational, bodily, or environmental signals.
  - Expression aligns with inner state (no chronic masking or performative dissonance).
  - Emotional responses guide adaptive behavior (e.g. flee danger, seek connection).
- Behavioral Coherence
  - Behavior is driven by needs, feedback, and context (not abstract rules).
  - Repetition and rhythm serve regulation (not compulsion).
  - Disruption in the system leads to adaptive signals (e.g. withdrawal, stimming, protest).
- Relational Reciprocity
  - Social interactions involve mutual regulation (not dominance hierarchies).
  - Communication is functional, honest, and oriented toward shared understanding.
  - Deception is rare and costly.
- Feedback Sensitivity
  - The system changes in response to reality.
  - Pain, hunger, conflict, beauty, and pleasure all serve as real-time guidance systems.
  - When feedback loops break, pathology arises…not in the organism, but in the system.
Neurotypicality (in an ecological sense) is the default wiring for coherence with life.
By this definition, most modern humans are neurodivergent…not in a pathologized way, but in a systemic distortion way. They’ve been selected or conditioned to override feedback, suppress affect, and conform to symbolic systems detached from biological reality.
-
The Double-Empathy Struggle
So a big part of this book is figuring out how people can do the stupid or terrible things they’ve done (and continue to do).
The answer to that question has proven a real challenge. Frustrating.
It’s occurred to me that part of the challenge (maybe the biggest part) is that I’m trying to figure out where people diverge from reality in a way that I can understand. I keep looking for reasons I can relate to. Some sort of trap that, when I see it, I say, “I could see myself falling into that, too.” But I can’t find that trap.
Because if the divergence from reality I see in the people around me happens at a point I would never have chosen…it feels alien, false, deductive. I need it to be human. Comprehensible to me. I want to believe that had I been there, I’d at least have seen how the mistake happened.
It’s in this line of thought that I had a breakthrough.
Maybe the difference isn’t in the choice…but in the threshold.
I and others like me might just have a lower tolerance for unreality (a more sensitive detection system for contradiction). Because I think most people DO feel dissonance, but they just have more social circuitry telling them to ignore it. What is that social circuitry? And isn’t that the deviation from life’s baseline?
When faced with serious problems, statements like, “This isn’t my place to question,” “It’s probably fine,” “Everyone else seems convinced,” “It’s safer not to say anything,” and “It is what it is,” do more than annoy me. They fucking enrage me.
So maybe the divergence is recognition. One group feels the glitch and names it…the other feels it and smooths it over. Because their nervous systems are somehow tuned to avoid rupture instead of detecting and responding to it.
Maybe I feel reality differently. That certainly tracks. That would mean a problem of empathy across feedback thresholds. That mystery choice I’ve been looking for? The one I can comprehend as how people mistook fiction for reality? Maybe I’m not missing it at all. Maybe I’m simply seeing that, for me, there was no choice. There’s something that I would have felt that didn’t register with them.
So let’s look at our fork again.
Is it a different mix of people in the groups? We’ve ruled out innate cognitive superiority. Could there simply be a different mix of dispositions, thresholds, or nervous system types?
Probably.
Let’s say Group A has more people whose nervous systems respond strongly to contradiction, unreality, or unresolved pattern. And Group B has more people whose nervous systems prioritize social cohesion, comfort, and continuity.
No talk of virtue…just configuration.
Same species, same environment, different sensory weighting. It seems plausible that a small difference in feedback sensitivity across a few individuals could tip a group’s response to contradiction.
Or is it really external conditions? Because I think these matter…but not in the way most people think. It’s not about environment determining outcome. It’s about environment shaping when and how feedback arrives. A harsh environment returns frequent, sharp signals (You’re wrong. FIX IT.) A forgiving environment allows more drift before consequences appear.
So external conditions shape the urgency of model correction, and internal sensitivity shapes the likelihood of correction. Low sensitivity + gentle conditions? Drift compounds. Fast. High sensitivity + harsh conditions? Feedback (reality) stays close.
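Here’s a tiny deterministic sketch of those two dials (the update rule and all values are assumptions): each step adds one unit of model drift, and feedback claws back a fraction proportional to sensitivity × harshness.

```python
# Two dials, one outcome: internal sensitivity (how likely feedback is
# acted on) and environmental harshness (how promptly it arrives).
# The update rule and all values are illustrative assumptions.
def drift_after(steps, sensitivity, harshness, step_error=1.0):
    """Accumulated model drift: each step adds error, then feedback
    claws back a fraction proportional to sensitivity * harshness."""
    drift = 0.0
    for _ in range(steps):
        drift += step_error
        drift *= 1.0 - sensitivity * harshness
    return drift

for sens in (0.2, 0.8):
    for harsh in (0.2, 0.8):
        print(f"sensitivity {sens}, harshness {harsh}: drift {drift_after(50, sens, harsh):6.2f}")
```

Run it and the 2×2 falls out: low sensitivity in a gentle environment drifts more than thirty times further than high sensitivity in a harsh one.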
Are “low tolerance for unreality” and “need for stable patterns” the same thing? I don’t think so…but they feel close.
A low tolerance for unreality is detecting and suffering from contradictions between reality and model…it’s affective and stress-inducing. A need for stable patterns is seeking and requiring patterns that hold over time to feel safe…it’s predictive and structural.
But they’re structurally linked, aren’t they? I need stable patterns because unreality feels intolerable. And I reject unreality because it violates the patterns I need to hold. They both express an orientation…a high-fidelity feedback requirement.
SO…some groups contain individuals for whom predictive error is viscerally intolerable. Others contain fewer. Whether the group listens to those individuals determines whether the model corrects or compounds. The environment determines how quickly error becomes obvious. The culture determines how early error is acknowledged. And the nervous system determines how strongly error is felt.
-
No…autistic people don’t struggle with complexity.
We struggle with complex bullshit. Complexity that doesn’t stay in contact with reality. Complexity built to preserve delusion…systems of thought that multiply explanation instead of reduce error. It’s not the number of layers…it’s whether the layers track the thing they claim to represent.
I’m fine with complexity when it emerges from feedback, remains falsifiable, stays anchored in pattern, can be broken open and examined, and responds when something stops working.
I’m not fine with just-so stories, self-reinforcing abstractions, theories immune to contradiction, semantic inflation (changing definitions to preserve belief), or socially protected bullshit that silences doubt.
I’m just fine with structure…it’s insulation I have a problem with.
Bullshit = complexity that survives by outmaneuvering feedback.
And yet………in the early stages of understanding something, I do feel averse to complexity.
Like why the people around me seem fine when just about nothing in the world is fine. How did they get like this? Surely their disposition isn’t life’s baseline, or the earth wouldn’t have lasted as long as it has.
I don’t like lists of reasons. I don’t look for explanations as much as singularities. Something that collapses the list. Something that makes that fork I’ve been writing about…the one where some groups of people stayed connected to reality and others adopted fictions that ultimately led to genocide / ecological plunder / extinction…inevitable, traceable, and unambiguous (without resorting to mysticism, virtue, or accident).
I’m allergic to narrative sprawl (I know, I know) masquerading as theory. I don’t want an ecosystem of causes…I want a keystone fracture.
If the starting conditions are the same, why does one group protect an erroneous model of reality, and another let it break?
I can’t help but feel that the first real difference is what the group is optimizing for, and whether that goal is visible to them or not. I think one group is optimizing for predictive accuracy, and the other is unconsciously optimizing for social coherence. There. I said it.
I don’t claim they know they’re doing it. But every signal, every decision, every reaction is weighed (subconsciously) against one of those metrics. When the model breaks, that internal orientation determines the response. If the priority is accuracy? “The model must adapt.” If the priority is coherence? “The contradiction must be contained.”
So not values or beliefs, but a deep system preference for truth-tracking versus conflict-minimization. And based on everything I’ve encountered…that really feels true. It clicks.
And it begins long before it’s visible…it shows up in how children are corrected, how dissent is handled, how stories are told, whether doubt is sacred or dangerous, and whether speech is relational or investigative. One group sharpens awareness and the other flattens tension.
Because social coherence “works,” doesn’t it? It feels good. It stabilizes something.
So the first difference, the root divergence, the fork, is not belief, structure, or insight. It’s which pain the group is more willing to feel: the pain of being wrong, or the pain of disagreement. When error appears, will we change the story…or suppress the signal?
-
Is compounding error to blame?
Maybe.
Any group that seeks advantage needs a model of the world to interpret cause and effect. This is true post-Dunbar (when a group is made up of more than ~150 people). Once behavior depends on symbol, the group is no longer responding to the world directly, but to its model of the world (this is consistent with the predictive processing model of human behavior). So what matters now is model error (and what happens to it)…not truth.
Do you treat predictive error as signal, or noise? This is the fracture.
One group encounters contradiction, failure, discomfort, and says, “We misunderstood something.” They adjust their model.
Another group encounters the same and says, “This isn’t a real error.” Their model is preserved and signal is suppressed. Then the compounding begins.
Every time the world returns unexpected feedback and you refuse to update, you embed the error into the structure. You reframe the contradiction as a test, or anomaly, or enemy action (think Trump). You revise the interpretation of feedback, not the model itself. You build insulation layers to protect the model from reality.
Each move makes the model more coherent internally, but less aligned with the world. The simulation becomes smoother and the fit becomes worse. And because each act of suppression makes the model harder to question next time, the cost of correction increases exponentially.
What’s being “compounded,” exactly? Error, because each misfit is hidden rather than corrected. Confidence, because the model appears to keep “working” internally. Power, because the system selects for those who uphold the model. And fragility, because the longer the feedback is ignored, the harsher its return.
This is how collapse becomes inevitable, not from evil or chaos, but from a feedback loop about feedback itself.
Collapse begins the first time a group decides that a predictive error is not worth adjusting for. The cause is this treatment of error, and the decision to protect the model rather than let it break where it no longer fits.
A man dances. It rains. It happens again. And again.
He (and eventually the group) builds a model: “Dancing causes rain.”
So far, this is rational…based on a perceived pattern. This is just pattern sensitivity, not delusion. Everyone does this. Animals do it too. The brain is a pattern detector, not a truth detector. No problem yet.
Others begin to believe. The dancer is now “Rainbringer.” His status rises and the ritual becomes culturally encoded. It’s a model with structure. It’s a social artifact now, not just a belief. And still no collapse. This can all exist within feedback sensitivity if error remains possible to acknowledge.
He dances and it doesn’t rain. Or it rains with no dance. The group now faces a contradiction between model (dance = rain) and feedback (it didn’t work). This is the first point of model failure, and it opens two paths.
If the group treats the error as a signal, it says, “Hmm. Maybe the connection wasn’t causal. Maybe dancing helps, but doesn’t guarantee it. Maybe something else matters too…clouds, season, soil. Maybe we were wrong.” The model updates. Maybe the ritual stays as a tradition, but it loses its literal power claim. Now the worldview remains tethered to feedback.
If the group treats the error as noise, it says, “He mustn’t have danced correctly. Someone in the group was impure. The gods are angry about something else. Rain did come, it’s just coming later. Don’t question the Rainbringer.” The model is preserved. But now, additional structures must be created to explain away the contradiction. And those structures will have their own failures, requiring even more insulation. This is compounding error in action. The model survives at the cost of truth.
So the arc has a curvature. In the first path, the model continues to reflect the world, even if imperfectly. In the second path, the model begins to simulate reality, and each new contradiction deepens the simulation.
Eventually, rain becomes something that doesn’t just happen…it becomes something that has to be narrated. And the system becomes a feedback-sealed loop. Until the drought is too long, belief no longer sustains coherence, and collapse forces the signal through.
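You can put rough numbers on that fork with a few lines of toy Bayesian updating (the priors and likelihoods are invented…what matters is the divergent shape of the two responses, not any particular value).

```python
# The rain-dance fork as toy Bayesian updating. Priors and likelihoods
# are invented; only the divergent shape of the responses matters.
def update(prior, rained, p_rain_if_causal=0.9, p_rain_if_not=0.3):
    """One Bayesian update of P(dance causes rain) after a dance."""
    p_obs_causal = p_rain_if_causal if rained else 1 - p_rain_if_causal
    p_obs_not = p_rain_if_not if rained else 1 - p_rain_if_not
    evidence = prior * p_obs_causal + (1 - prior) * p_obs_not
    return prior * p_obs_causal / evidence

observations = [False, False, True, False, False]  # mostly failed dances

belief_a = belief_b = 0.8  # both groups start equally convinced
insulation_b = 0           # group B's stock of auxiliary explanations

for rained in observations:
    belief_a = update(belief_a, rained)  # group A: error is signal, model updates
    if not rained:
        insulation_b += 1                # group B: error is noise, explain it away

print(f"group A belief after evidence: {belief_a:.2f}")
print(f"group B belief: {belief_b:.2f}, propped up by {insulation_b} auxiliary hypotheses")
```

Group A’s confidence falls with the evidence. Group B’s never moves…it just accretes explanations.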
The divergence between sustainable worldview and collapsing worldview is not belief itself. It’s how the group responds when the pattern breaks.
But why does one group treat error as signal, and another as noise? What’s the difference between the two?
Is it in the quality of a group’s pattern detection? Maybe. But both groups saw a pattern where one didn’t exist. That’s normal…it’s how learning starts. So pattern detection alone doesn’t explain the difference. It might influence the likelihood of correction, but not the structural response to error. Everyone sees false patterns, but not everyone protects them.
Is it how long the pattern appears to work? Maybe. The longer a pattern appears to be true, the higher the social and symbolic cost of abandoning it. If the rain-dancer’s model “works” for 20 years before failing, the group’s going to have a hell of a time letting go of it. It’s now embedded in ritual, hierarchy, identity, morality, and possibly even infrastructure. So when error comes, it’s no longer a mere contradiction, but a threat to the entire structure. The longer false coherence holds, the more catastrophic its loss becomes. Still, this is a compounding factor, not the root cause.
Is it a group’s tolerance for uncertainty? This feels closer. Some groups may be more willing to live inside ambiguity…to say, “Maybe we don’t know.” Others require certainty, especially when power, identity, or survival are at stake. When uncertainty is seen as dangerous, contradiction is repressed. But even this is downstream of a deeper variable.
So what’s the root difference?
I’d say it has something to do with the group’s willingness to let its model break. In other words, a group’s relationship to truth. Some sort of functional truth orientation…a cultural posture that says: “Our model exists to serve reality, not the other way around. We are allowed to be wrong. The map is not the territory.”
Groups that survive over time have ritualized humility at the model level. They embed model-breakability into the structure and build a bit of slack around belief. Maybe collapse becomes inevitable when belief becomes non-negotiable. When a group treats its model as the world itself instead of something that’s subordinate to the world.
And none of that word salad comes even close to satisfying me. I still can’t locate the inherent difference in people that would explain why a group would choose fictions over reality…fictions that lead to destruction.
Even when we level the playing field…no genetic difference, a shared environment, same cognitive equipment, same feedback events…one group loosens its grip when the model breaks, and the other tightens it. It feels like a difference that came from nowhere, and my brain doesn’t tolerate that well. I want a mechanism.
I’m not willing to say, “Some people are just wiser.” Or, “Some cultures are born better.” And definitely not, “Some mystical essence preserved them.” It’s lazy and just names the difference instead of explaining it. And it’s not agriculture. Or symbolic thought. Or state-formation. Or a very precise list of environmental conditions at a very precise time. I’ve thought these through for months, and I just don’t see it.
Maybe the difference isn’t in the people, but in the first error and how it interacts with attention.
Let’s go back to the dancer.
Two groups experience the same failed rain-dance. The only difference is in one group, someone notices and the group listens. In the other group, the same doubt arises…but it’s silenced, ignored, or never spoken. The system begins to shape attention instead of truth. Maybe.
If this were true, we could say that the divergence doesn’t begin with different kinds of people. It begins with different positions within the social system…or different degrees of attentional slack. Small variations in who’s allowed to speak, who’s believed, how disagreement is treated, and how closely someone is still tracking the world (hunters, children, women, outsiders) can determine whether the group detects error when it first appears. Maybe it’s the structure that lets signal in (or doesn’t).
But I don’t buy it. I think it comes close (it does have something to do with WHO is listened to)…but the structural argument feels too top-heavy. Too contrived. It’s something about the people. It has to be.
And I keep coming back to that silly rain dance example.
“Oh, he moved his left leg differently last time. The dance is off this time. That must be why the rain isn’t coming.” Is this where it begins? With compounding error? A first act of model preservation over model revision?
It’s like an inoculation against contradiction. The dancer failed to bring rain, and instead of letting the model break, the group makes a seemingly reasonable micro-adjustment that preserves its frame. But it proves to be anything but reasonable. It’s the beginning of something else entirely.
Because it says, “The model is still valid. The error lies in execution…not in assumption.” I think that distinction is everything. Because once you decide the model must be true, every contradiction becomes a problem to explain away, not learn from. You start adjusting the dancer’s position, the timing, the offerings, the purity of the audience, the phase of the moon, the moral status of dissenters. Each change adds complexity without re-examining the core claim…each layer distances you further from reality and makes it harder to walk back.
The “left-leg hypothesis” might feel like a natural progression of curiosity…but I don’t think it is. Because it isn’t asking, “What’s true?” It’s asking, “How can we keep the model intact?” And that’s compounding error in its earliest, most innocent form. It starts as protective curiosity, evolves into explanatory gymnastics, and ends in systemic delusion. In constantly mowing 40,000,000 acres of grass for no sane reason.
It’s a wall that begins…error becomes a problem to solve inside the model, a threat to those who challenge it, and a signal no longer heard. And eventually you’re living in reference only to the model (the dance, the roles, the rituals, the scapegoats) while the sky goes dry. “He moved his leg wrong. And so began the drought.”
-
Is technology to blame?
We know that as worldview fidelity decreases, time to collapse shortens. But what bends the line? What actually introduces feedback distortion or delay?
Let’s look at technology, because it complicates things. It doesn’t break the above model, but it introduces time lags and feedback insulation.
At its core, technology is a buffer. It extends capacity, softens consequences, and postpones the return of feedback. Irrigation lets you farm longer before drought matters. Antibiotics let you survive behaviors that used to kill you. Fossil fuels let you scale production far beyond ecological yield. The pain that would have corrected your behavior is deferred.
So low-fidelity worldviews survive longer if backed by high-powered technology. Collapse is delayed, not avoided. The worldview says, “We’re right.” The tech says, “We’ll make it look that way…until we can’t.”
But tech doesn’t just delay feedback…it also creates false signals. GPS replaces intimate knowledge of land. Social media simulates community. Processed food simulates nutrition. Air conditioning simulates a habitable climate. This builds confidence in the system, even as it drifts further away from reality. “Look how well it’s working!” (Says the thermostat on a house with a collapsing foundation.) It enables deeper detachment from feedback, which enables more elaborate simulation.
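Here’s the buffer as a sketch (the compounding rate and buffer lengths are assumptions, and the mapping to irrigation or fossil fuels is loose): same behavior, same per-step damage, and the only variable is how long feedback is deferred.

```python
# Technology as buffer: same behavior, same per-step damage, but the
# feedback is deferred. Compounding rate and buffer lengths are assumptions.
def damage_when_feedback_lands(damage_per_step, buffer_steps):
    """Damage compounds until the deferred feedback finally arrives."""
    damage = 0.0
    for _ in range(buffer_steps):
        damage = damage * 1.05 + damage_per_step  # assumed 5% compounding per step
    return damage

for buffer in (5, 20, 80):  # small, medium, large buffers (arbitrary)
    print(f"feedback deferred {buffer:2d} steps -> damage on arrival: "
          f"{damage_when_feedback_lands(1.0, buffer):7.1f}")
```

The bill doesn’t just arrive late…it grows exponentially with the delay.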
But is technology neutral? Clearly its effects depend on the worldview using it.
In high-fidelity cultures, technology extends sensitivity, preserves balance, and enhances feedback clarity (e.g. indigenous fire-stick farming, soil renewal techniques, wind-based navigation).
In low-fidelity cultures, technology conceals damage, extracts faster, delays correction (e.g. industrial agriculture, geoengineering, financial modeling). Tech isn’t a villain…but in the hands of a distorted worldview, it’s something of a sorcerer’s apprentice.
Here’s the twist: tech amplifies either trajectory. It’s an amplifier, not a course corrector. It can scale either sustainability or simulation / collapse. It gives a low-fidelity culture (like the one we’re part of) more time and reach, but also makes the eventual collapse larger and more system-wide.
-
Is abstraction to blame?
Let’s make some assumptions. Let’s assume that, at the outset, there are no genetic factors significant enough to account for one entire group’s remaining connected to its environment and another choosing disconnection. Let’s assume that individuals (and groups) will seek advantage where they can find (or create) it. Let’s assume that Dunbar’s number is a hard limit (~150 people). Scale beyond that demands abstraction. Let’s assume “worldviews” emerge to maintain cohesion of groups beyond 150 people. Let’s assume worldviews exist on a spectrum of fidelity to the world…some more grounded, others more distorted. And let’s assume that collapse risk increases as worldview diverges from world…an inverse correlation between realism and resilience. Let’s do our best to let go of our “civilization vs. tribe” bias and see the whole thing as feedback fidelity across scale.
At ~150 individuals, a group’s relational coherence (previously maintained by direct sensory, ecological, and social feedback) fragments…prehistoric keyboard warriors appear. Shared stories start to replace shared experience. Symbols replace presence. And roles, laws, and systems emerge as prosthetics for lost immediacy. Now we have a fork: fidelity vs. simulation.
The group with the high-fidelity worldview uses myth, ritual, and language to model the world as closely as possible. Symbols are tethered to reality, authority is distributed (and accountable to ecology and relational norms), growth is still limited by feedback and encoded in story, and abstraction is used with care and periodically re-grounded (e.g. vision quests, initiation, seasonal rituals). These are stories that serve to remind the group of how the world works.
This group persists. Its worldview preserves adaptive behavior even at scale. They may never become “civilizations” in the classic sense, because they resist the abstraction that enables runaway scale.
The group with the low-fidelity worldview uses abstraction to model desire, not the world. Symbols become detached from feedback…power, wealth, status grow by internal logic. Authority is centralized and increasingly self-referential. Growth is pursued independent of ecological context. And simulation becomes self-sustaining…a loop that no longer checks against the world. These are stories that tell the group it’s right, even when the world says otherwise.
This group expands faster, but only by deferring feedback…the cost is collapse, delayed rather than avoided. The tighter the internal simulation, the longer it can suppress reality…until reality returns with interest.
And so this gives us a nice, simple predictive model: collapse is the repayment of feedback deferred by low-fidelity worldview. The greater the distortion, the greater the build-up, the harder the crash. You could almost graph it. Fidelity to reality on the X-axis and time to collapse on the Y-axis. And you’d see an inverse exponential curve.
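For what it’s worth, here’s one hedged guess at that curve’s functional form (the constants are arbitrary…this is the graph described above, not a fitted model).

```python
# A hedged guess at the curve's functional form: time-to-collapse rising
# exponentially with worldview fidelity. Constants are arbitrary.
import math

T_BASE = 5.0  # assumed lifespan of a fully simulated (fidelity 0) worldview
K = 4.0       # assumed steepness of the fidelity effect

def time_to_collapse(fidelity):
    """fidelity in [0, 1]: 0 = pure simulation, 1 = full contact with feedback."""
    return T_BASE * math.exp(K * fidelity)

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"fidelity {f:4.2f} -> time to collapse ~ {time_to_collapse(f):6.1f} (arbitrary units)")
```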
This model has falsifiable (testable) implications.
If accurate, you should see high-fidelity groups maintaining ecological balance over time, resisting large-scale empire formation, embedding taboos, rituals, and stories that enforce ecological or social limits, and being harder to conquer ideologically, but easier to conquer militarily. And we do see that, don’t we?
If accurate, you should see low-fidelity groups expanding rapidly and dominating others, delaying collapse through buffering, abstraction, and extraction, pathologizing feedback-sensitive individuals, and experiencing sudden systemic failure. And we see that as well, don’t we?
If accurate, collapse events will often mark the point where simulation becomes completely unmoored from reality, and the return of feedback becomes catastrophic rather than adaptive. And this is exactly what we see in the dramatic phenomenon we call “the collapse of a great civilization,” as well as collapse events we feel around us every day in our own spectacularly unmoored simulation.
What we arrive at isn’t just a description of how civilizations fall…it’s a redefinition of what scale itself demands. Scale isn’t the problem. The problem is simulation without feedback.
Collapse isn’t inevitable because of size. It’s inevitable when scale is managed through simulation that suppresses reality. So the real challenge isn’t to reject abstraction (that’s here to stay)…it’s to embed continuous feedback into abstract systems. Otherwise, they’re on a one-way street to delusion.
I can’t emphasize this enough: collapse isn’t moral or technological failure. It’s a delayed feedback event. It’s about worldview fidelity. Does a symbolic order track reality, or replace it?