Fidelity to the Group vs. Fidelity to Reality (blue pill vs. red pill)

Can you select against reactivity (and for pro-sociality) without selecting against truthfulness? I don’t think you can.

In Wrangham’s phrasing, pro-sociality seems to be about “friendliness,” tolerance, cooperation. But on the ground, what pro-sociality really looks like is going along with the group, even when the group is wrong (especially when the group is wrong, maybe). Let’s be honest…coalitions don’t just punish aggression, they punish non-conformity. It’s autonomy vs consensus.

Loyalty to the truth and conformity are simply incompatible.

What is reactivity? The kind of reactivity that a group takes issue with? It’s registering error signals and acting on them. And what is loyalty to the truth if it isn’t that? Refusing to override error signals even when everyone else does. If selection favors compliance, then loyalty to truth (which inside a group looks like stubbornness, pedantry, dissent) gets punished by the coalition. Over evolutionary time, that makes loyalty to truth maladaptive inside control systems.

Like most things in evolution, it’s a trade-off. Sure, you can have pro-sociality without truth if the goal is harmony. But you can’t have both pro-sociality and true fidelity to reality. Truth is often disruptive. And so, to preserve group stability, coalitions consistently select against those who react too strongly to contradictions…even if they’re right. Think whistle-blowers…Socrates…Galileo.

In predictive processing terms, the mutual exclusivity of these two drivers becomes even clearer. Pro-sociality is the inflation of social priors and the down-weighting of individual prediction errors. Truth-loyalty is the up-weighting of prediction errors, social priors be damned. A system can’t simultaneously select for both…they’re opposing precision-weighting strategies. The emperor’s either wearing clothes or he isn’t.
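The opposition can be made concrete with a toy model (my illustration, not from the post): in predictive processing, a belief update is a precision-weighted average of a prior and incoming evidence. The two strategies differ only in which precision they inflate.

```python
# Toy sketch of precision-weighted belief updating. The scenario and
# numbers are invented for illustration.

def update(prior, evidence, prior_precision, evidence_precision):
    """Combine a prior belief with evidence, weighted by their precisions."""
    total = prior_precision + evidence_precision
    return (prior_precision * prior + evidence_precision * evidence) / total

# The group insists the emperor is clothed (belief = 1.0);
# the agent's own eyes report he is naked (evidence = 0.0).
social_prior, own_observation = 1.0, 0.0

# Pro-social strategy: inflate the social prior, down-weight personal error.
conformist = update(social_prior, own_observation,
                    prior_precision=10.0, evidence_precision=1.0)

# Truth-loyal strategy: up-weight the prediction error, social prior be damned.
dissenter = update(social_prior, own_observation,
                   prior_precision=1.0, evidence_precision=10.0)

print(round(conformist, 2))  # 0.91 — stays with the group's story
print(round(dissenter, 2))   # 0.09 — trusts the error signal
```

Same prior, same evidence; only the precision weighting differs, and the two agents land on opposite sides. That's the sense in which the strategies are mutually exclusive: a single weighting scheme can't do both at once.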
