Autistic people have challenges with cognitive flexibility. Autistic people are black-and-white thinkers. There’s no in-between with us…something is either good or bad. We’re rigid thinkers…stubborn. We oversimplify the world. We’re immune to nuance. We catastrophize. We’re all-or-nothing.
Different words for the same thing…you’ve heard the bullshit. In diagnostic manuals. On YouTube. In therapy.
You’ve also heard the other bullshit. Autistic people are inductive thinkers. We focus too much on the insignificant details and miss the big picture. We can’t see the forest for the trees.
Different words for the same bullshit…but the opposite bullshit…?
So, am I a paradox? Am I both a reductive and inductive thinker?
If there’s anything I’m allergic to, it’s paradox…contradiction. A feeling rises in me. My bullshit detector starts to ring out louder than usual. And, like every paradox I encounter, I know this one is wrong…the reason it’s wrong simply needs to be discovered. (Incidentally, this is the stage in my thinking at which I look most reductionist.)
And so I think. A lot. I have no choice. My brain won’t leave contradictions unresolved. And here’s what I’ve been thinking these past few days.
The contradiction (autistic people focus on the smallest details instead of the big picture, and autistic people have oversimplified models of the world and refuse to account for contradictory details) comes down to where you zoom in on my cognitive process.
Let’s look at what I’m accused of first: black-and-white thinking (aka dichotomous thinking, rigid thinking, etc.). It means I come to snap categorical judgments like right vs wrong, fair vs unfair, safe vs unsafe. I see it written in descriptions of autistic behavior everywhere…rigidity, over-simplification, an inability to tolerate ambiguity. In predictive processing terms, it’s framed as a strategy to minimize error fast. The autistic person overweights a clear prior (“this is wrong”) instead of juggling noisy or contradictory signals.
Confusingly, I’m complimented for being the polar opposite…for being an inductive thinker. In the same conversation, even. Apparently, I’m good at building generalizations from specific instances. I’m good at noticing patterns in particulars. In predictive processing terms, my attention to detail and my ability to detect anomalies (in patterns, like contradictions) are attributed to the fact that I don’t start with strong priors (think of them as expectations or assumptions). I let patterns “emerge” from the ground up instead of forcing a theory I have. In other words, compared to your “average” person, I give more weight to incoming data and less weight to stories (whether my own or a group’s). Bottom-up signals drive my learning. I have to learn the “hard way.”
The feeling of this paradox is…it’s like you’re catching me at different stages of my thinking process, and naming the stages as opposite things.
I’ll do my best to describe what my process feels like, from the inside, as me.
When I first try to make sense of something, I weight raw input heavily. I don’t iron out inconsistencies with conventional stories. That probably explains why I notice details that you seem to miss. But when contradiction or incoherence (i.e. bullshit) piles up, and disengaging from the situation isn’t an option, I need to force coherence. I need to escape the stress of unresolved error. For example, if I’m among a group of people who are trying to solve a problem irrationally, who won’t listen to reason, and I don’t have the option of leaving? I might snap to a categorical judgment. I have a very hard time persisting in insane environments.
You can think of me as having two modes, maybe. In my inductive mode, I’m open to raw feedback. I’m pattern-sensitive. In my dichotomous mode, I close myself off to pointless ambiguity. This second mode is protective (especially in incoherent environments, where I can get fucked up real fast). But that mode isn’t final. What you call my “black-and-white thinking” is a first-pass strategy to isolate a pattern. I grab hold of a signal (true/false) before I layer nuance back in. And in coherent environments, nuance does get layered back.
Let’s keep circling this.
In coherent environments (where feedback is timely, proportionate, local, and meaningful), my particular weighting of error signals looks inductive and nuanced to you. I follow details, I update my view of things flexibly, and I build very fine-grained patterns. Here, you admire me. You say things like, “See? You’re so smart! I couldn’t do that. Why can’t you apply this brain of yours to _______?”
What you don’t realize, maybe, is that the ______ you just mentioned? It’s an incoherent environment. In that environment, rules are contradictory, consequences are delayed arbitrarily, and abstractions are layered on abstractions for reasons of control, etc. And in that environment, the flood of irreconcilable errors becomes intolerable for me. My nervous system reaches for coherence at any cost. On my first pass in every situation, if I’m forced to “share my thoughts,” you’ll see me collapse complexity into a binary frame (“this is right,” “that is wrong”). That’s what you see. What you don’t see (if you never let me get to it) is that I follow that by testing it, and testing it, and testing it…adding details at each step and adjusting my frame as I go. But that first pass? That first pass looks like reductionism or “black-and-white thinking” to you.
I think if Andy Clark were here, he would say that I have weak priors and that I weight errors strongly. (He’d probably use words like “underweight” and “overweight,” which I hate…but forgive him for.) That in stable conditions, mine is an inductive, adaptive, and precise process. But that in unstable conditions (unstable as in a room full of flat-earthers, not unstable as in a tornado is coming or a bear is running toward me), my process leads to overwhelm.
My brain clamps onto the simplest model…and when it isn’t allowed to move on from there (the flat-earthers are trying to solve intercontinental flight, and the lawn-zombies are trying to figure out how to keep their biological deserts green), that looks like reductionism. Those are problems that are so decontextualized they have no resolution. And when resolution doesn’t seem possible, my system is forced into a kind of emergency closure because feedback has become utterly fucking useless.
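If you want the toy math behind Clark’s framing, here’s a minimal sketch. To be clear about what’s mine: this is my own illustration using the textbook Gaussian precision-weighted update, not anything Clark wrote, and every number in it is made up.

```python
# Toy precision-weighted belief update (standard Gaussian model).
# A "weak prior" means low prior precision: the belief tracks incoming data.

def update(prior_mean, prior_precision, data, data_precision):
    """One Bayesian update for a Gaussian belief with known precisions."""
    post_precision = prior_precision + data_precision
    post_mean = (prior_precision * prior_mean
                 + data_precision * data) / post_precision
    return post_mean, post_precision

# Coherent environment: signals agree, and a weak prior converges fast.
mean, prec = 0.0, 0.1                    # low precision = weak prior
for x in [4.0, 4.2, 3.9]:
    mean, prec = update(mean, prec, x, data_precision=1.0)
    print(f"coherent:   belief = {mean:.2f}")    # settles near 4

# Incoherent environment: the same weak prior, fed contradictory signals,
# makes the belief lurch with every new data point.
mean, prec = 0.0, 0.1
for x in [4.0, -6.0, 9.0]:
    mean, prec = update(mean, prec, x, data_precision=1.0)
    print(f"incoherent: belief = {mean:.2f}")    # keeps lurching
```

Run both loops and watch the second belief lurch around. That lurching is the flood of irreconcilable error I’m describing, and “emergency closure” is what a system does when the lurching stops being survivable. (A strong prior, a high prior_precision, would barely move in either loop…that’s the neurotypical run in this story.)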
Let’s look at the lawn example.
I’m fond of saying that North Americans mowing 40,000,000 acres of lawn is sheer stupidity. Lunacy, hands-down, from any perspective. When I voice that opinion, intelligent people hear it as an oversimplification. And if they know I’m autistic, they put it down to a difficulty with (or outright inability to comprehend) multi-variable causality. This confuses me greatly. (And I suppose I confuse them.)
This is probably a case of the double empathy problem. Let’s look at it from both sides.
I see a signal…mowing tens of millions of acres is, at best, a colossal waste of time, fuel, water, and soil. And I voice it simply: “Stupidity.” To me, that isn’t reductionist. I’m cutting through incoherence to register a glaring error. Something that simply doesn’t make sense. An argument not worth having in this lifetime.
But when my intelligent (and I’m not using that term facetiously) neurotypical listener hears my opinion, they hear an inability to process nuance. Lawns have cultural history. They have aesthetic value. There are economic incentives involved, municipal codes, homeowner psychology, and so forth. They interpret my clarity as an inability to juggle multi-variable causality, instead of as a refusal to rationalize nonsense.
Let’s anchor this gap in neuroscience.
Neurotypical people put a lot of weight into their priors, which is a fancy way of saying they assume the world is coherent. When they hear “Mowing 40,000,000 acres of lawn is complete fucking idiocy on every level,” they automatically begin to search for legitimizing narratives. In other words, they start to search for an explanation for why millions of lawns do make sense (i.e. are comprehensible).
I, on the other hand, put a lot of weight into error signals, which is a fancy way of saying I can’t ignore contradictions. And from my perspective (and from every epistemic perspective that doesn’t involve human fictions), it doesn’t matter how many variables you pile into the lawn argument…the outcome is wasteful and absolutely fucking absurd.
There’s a mismatch here. You see consensus as coherence, and complexity as explanatory. In cases like these, I see complexity as a distraction from the basic error. I collapse your absolute mess of contradictory justifications (shorter grass = fewer ticks (why grass at all?!?), long grass = an eyesore (what does that mean in reality-terms?), we need to think of neighbors/property value/invasive species (why, why, why, among all the things you could “think” of in your 80 summers on this planet, are you “thinking” of those particular bullshit abstractions?)) into a single judgment. I can process multi-variable causality just fine. But I refuse to let causality excuse incoherence (even typing that makes my head hurt).
I’m misread, socially. Constantly misread. Neurotypical people equate “nuance” with adding mitigating factors until the critique blurs. And when I resist that, they frame it as over-simplification. I’m the one trying to hold onto the full causal picture. Something can be complex and stupid at the same time. Complexity and stupidity are hardly mutually exclusive in civilization…in fact, they might be positively correlated.
I can call something stupid without denying complexity. So can you. Try it.