
How Thinking Gets Crushed by Simplification


Introduction

When discussing complex issues, people tend to converge rapidly on highly simplified explanations. These explanations often rest on a single cause, constructing a seemingly complete causal chain in one sentence and thereby producing a sense of “understanding.”

This pattern is not limited to any specific type of topic. It appears across different layers of cognition, from large-scale social analysis to individual judgment, and even in everyday, real-time conversations.

Rather than attributing this phenomenon to differences in individual capability, it is more useful to treat it as a stable cognitive mechanism, and to examine how it forms and what consequences it produces.


1. The Instinct to Explain

Humans have a fundamental need to explain the world, and this need is not inherently problematic.

When facing uncertainty, whether through the myths of early societies or the scientific models of modern ones, people have always attempted to construct structures that make the world interpretable. The core function of explanation is to transform uncertainty into something cognitively manageable, thereby reducing mental strain.

The issue, therefore, is not whether we explain, but how we do so.


2. The Slide from Explanation to Simplification

When the complexity of a problem exceeds what an individual can comfortably process, the cognitive system begins to compress it.

This can be understood as a “minimum-cost explanation model”: under constraints of limited information, time, and cognitive resources, individuals tend to construct explanations that are low-cost but structurally complete.

This tendency can be interpreted through the dual-system framework proposed by Daniel Kahneman (1934–2024), the Israeli-American psychologist and Nobel laureate in economics known for his work on judgment and decision-making biases. He distinguished between two modes of thinking: a fast, intuitive, low-effort system and a slower, analytical, high-effort one.

In most situations, people default to the former. As a result, an explanation that is “good enough” is often accepted before a more accurate but cognitively expensive one.


3. Forms of Simplification

This mechanism manifests in remarkably consistent patterns across different domains.

At the macro level, complex social phenomena are often reduced to single variables, such as population size, institutional labels, or cultural traits. These explanations produce quick conclusions but fail to capture interactions among multiple variables.

At the individual level, outcomes are frequently compressed into single-cause attributions. Success is linked to one defining factor and failure to one critical flaw, while temporal processes and contextual variables are systematically ignored.

In everyday judgment, the process becomes even more immediate. People form quick impressions and stable labels about others or events within seconds, rarely revisiting or updating those initial judgments.

Although these examples differ in appearance, they share the same structural pattern: complexity is compressed in order to achieve certainty.


4. Why Simplification Is So Attractive

The prevalence of this mechanism is not accidental. It offers clear functional advantages.

First, it significantly reduces cognitive cost. Processing complex problems requires sustained integration and analysis, whereas simplified explanations can be produced almost instantly.

Second, it provides immediate certainty. In the face of ambiguity, a clear explanation fills cognitive gaps and reduces discomfort.

Third, it stabilizes cognitive structures. Once an explanation is accepted, it becomes the basis for future judgments, reducing the likelihood of internal conflict.

In this process, accuracy is often secondary. What matters more is whether the explanation is stable enough to hold.


5. How Simplification Reshapes Thinking

The problem is not simplification itself, but the long-term structural effects of relying on it.

First, information is lost. Compressing a multi-variable system into a single explanation inevitably removes critical elements, weakening the integrity of the overall structure.

Second, feedback loops are closed. When an explanation becomes internally coherent, new information struggles to enter the existing framework, gradually isolating the cognitive system.

Third, models become rigid. Over time, simplified explanations shift from being tools to becoming default patterns, limiting the ability to process complexity.

This phenomenon can be further understood through the “narrative fallacy,” a concept introduced by Nassim Nicholas Taleb (b. 1960), the Lebanese-American scholar known for his work on uncertainty and complex systems. He argues that humans compress complex, random, and chaotic realities into linear, coherent stories. These narratives create an illusion of understanding, leading people to believe they have grasped causality.

As this illusion becomes normalized, sensitivity to complexity diminishes.


6. What Complexity Actually Demands

In complex systems, problems rarely admit single, immediate answers.

The real challenge is not finding an explanation, but maintaining the ability to process evolving information under uncertainty, continuously adjusting existing cognitive structures. This requires time, multi-variable integration, and tolerance for ambiguity.

In what I once described as the “firewood theory,” experience accumulates gradually like collected wood, while understanding emerges only when those pieces are later integrated and activated. The same applies here: conclusions are not produced instantly, but formed through ongoing reorganization of information.

The minimum-cost explanation model bypasses this entire process.


Conclusion

When problems are structurally complex but explanations remain persistently simplified, a misalignment emerges within the cognitive process itself.

At that point, discussion becomes difficult, because participants are no longer operating at the same level of cognitive structure. Continuing to argue at that level does not introduce new information; it only increases inefficiency.

Recognizing this mechanism shifts the focus away from whether specific viewpoints are right or wrong, and toward examining the structure of thinking itself.


References

Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.

Taleb, N. N. (2007). The black swan: The impact of the highly improbable. New York, NY: Random House.
