Michael Polanyi: Tacit Knowledge, Personal Knowing, and What AI Still Cannot Tell
Introduction
In 1966, the Hungarian-British polymath Michael Polanyi (1891–1976) opened his book The Tacit Dimension with a deceptively simple sentence:
“We can know more than we can tell.”
Six decades later, this observation has become one of the most cited ideas in AI research, organizational theory, and cognitive science — and arguably the deepest unsolved challenge for building truly intelligent machines. Polanyi spent his career as a physical chemist before turning to philosophy, and his work offers a remarkably prescient framework for understanding why modern AI systems, despite their extraordinary capabilities, still struggle with the kind of knowing that humans take for granted.
This post introduces Polanyi’s four major ideas — tacit knowledge, personal knowledge, the fiduciary framework, and his critique of scientism — and explores how each one illuminates fundamental challenges in AI learning systems and robotics today.
1. Tacit Knowledge: “We Can Know More Than We Can Tell”
The Idea
Polanyi’s most famous contribution is the concept of tacit knowledge — knowledge that we possess and use but cannot fully articulate or formalize. You can recognize a friend’s face in a crowd of thousands, but you cannot explain how you do it. A skilled cyclist maintains balance through subtle adjustments, but if asked to describe the physics involved, they would be at a loss.
Polanyi identified a precise from-to structure in tacit knowing:
- Proximal term (subsidiary awareness): the clues, tools, or particulars we rely on but do not focus on directly.
- Distal term (focal awareness): the comprehensive entity or meaning we attend to.
We attend from subsidiary particulars to the focal whole. Crucially, shifting our attention to the subsidiaries themselves destroys the meaning: a typist who starts focusing on individual finger movements can no longer type fluently.
“While tacit knowledge can be possessed by itself, explicit knowledge must rely on being tacitly understood and applied. Hence all knowledge is either tacit or rooted in tacit knowledge. A wholly explicit knowledge is unthinkable.”
Implications for AI
This idea was formally named Polanyi’s Paradox by MIT economist David Autor in 2014: because we know more than we can tell, we cannot straightforwardly program computers to do everything we can do. For decades, this paradox explained why tasks requiring tacit knowledge — driving, medical diagnosis, natural conversation, grasping objects — resisted automation far longer than explicit, rule-based tasks.
Deep learning has made dramatic progress on some of these tasks, but in a deeply ironic way. As AI researcher Subbarao Kambhampati argued in his 2021 article “Polanyi’s Revenge”, neural networks have swung from the old paradigm of encoding explicit knowledge (classical expert systems) to learning tacit knowledge from data — and now the AI itself “knows more than it can tell.” These models develop capabilities they cannot explain, creating the modern crisis of interpretability, bias, and robustness.
In robotics, the challenge is even starker. A human craftsman’s hands “know” how much pressure to apply when shaping clay — this is embodied tacit knowledge built through years of practice. Modern robot learning through reinforcement learning and imitation learning attempts to develop analogous sensorimotor knowledge through trial-and-error interaction with the physical world. But the resulting policies remain brittle: they work in trained scenarios but fail when the context shifts even slightly, suggesting that something essential about tacit knowing — its contextual, integrated, flexible nature — has not been captured.
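This brittleness shows up even in a toy setting: a model fit only on a narrow range of "contexts" can look flawless in training yet degrade sharply just outside that range. The NumPy sketch below is purely illustrative (the "grip force" function, the size ranges, and the polynomial stand-in for a learned policy are all invented for this post, not taken from any robotics system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "policy": learn grip force as a function of object size,
# but only from object sizes seen during training (0.2 to 0.5).
train_x = rng.uniform(0.2, 0.5, 200)
train_y = np.sin(3 * train_x)          # stand-in for the true force law

# Fit a polynomial -- a crude analogue of an over-parameterized policy.
coeffs = np.polyfit(train_x, train_y, deg=6)
policy = np.poly1d(coeffs)

def error(xs):
    """Mean absolute gap between the learned policy and the true law."""
    return float(np.mean(np.abs(policy(xs) - np.sin(3 * xs))))

in_dist = error(rng.uniform(0.2, 0.5, 100))   # contexts seen in training
shifted = error(rng.uniform(0.6, 0.9, 100))   # slightly shifted contexts

print(f"in-distribution error: {in_dist:.6f}")
print(f"shifted-context error: {shifted:.6f}")  # much larger off-distribution
```

The point is not the polynomial itself but the pattern: near-zero error where the data lived, rapidly growing error a short distance away, with nothing in the model signaling that its competence has run out.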
2. Personal Knowledge: The Knower Cannot Be Eliminated
The Idea
In his magnum opus Personal Knowledge: Towards a Post-Critical Philosophy (1958), Polanyi argued that all knowledge is personal knowledge. There is no purely objective, detached, impersonal knowing. Every act of knowing involves the passionate participation of the knower:
“Into every act of knowing there enters a passionate contribution of the person knowing what is being known, and this coefficient is no mere imperfection but a vital component of his knowledge.”
The scientist does not stand apart from the universe but participates within it. Discovery requires intellectual passions — a sense of beauty, elegance, and significance that guides the scientist toward fruitful questions. Copernicus arrived at the heliocentric model not by following a mechanical method, but via what Polanyi described as “the greater intellectual satisfaction he derived from the celestial panorama as seen from the Sun instead of the Earth.”
Polanyi called this “indwelling”: the knower immerses themselves in the subject matter, understanding it from within rather than merely observing from a detached standpoint.
Implications for AI
Modern AI systems have no “personal” stake in their knowledge. A language model processes tokens; a vision model processes pixels. They have no intellectual passions, no sense of beauty or significance guiding their exploration. This is not merely a poetic observation — it has practical consequences.
Consider active learning and curiosity-driven exploration in reinforcement learning. Researchers have tried to give agents intrinsic motivation to explore — but these are engineered reward signals, not genuine intellectual passions emerging from a committed knower embedded in a world. The agent does not care about what it discovers.
Polanyi’s concept of indwelling also challenges the dominant paradigm in robot learning. Current approaches treat the robot as an external observer that builds a model of the world. But Polanyi would argue that genuine understanding requires the knower to be in the world, not merely modeling it. This resonates with the embodied cognition movement in cognitive science and robotics, which argues that intelligence is not just computation but is constituted by the agent’s bodily interaction with its environment.
The philosopher Hubert Dreyfus, deeply influenced by Polanyi, built his landmark critique of AI (What Computers Can’t Do, 1972) on exactly this insight: intelligence cannot be reduced to rule-following on symbols; it requires embodiment, context, and a kind of engaged know-how that resists formalization.
3. The Fiduciary Framework: Knowledge Rests on Trust
The Idea
“Fiduciary” comes from the Latin fiducia (trust). Polanyi’s fiduciary framework holds that all knowing rests on an act of faith — a personal commitment to beliefs that could conceivably be false.
We must trust our senses, our intellectual faculties, our inherited frameworks, and the community of knowers before any knowledge is possible. This is not a weakness to be overcome but the very foundation of knowing:
“We believe more than we can know, and know more than we can say.”
When a knower asserts something, they make a personal commitment with universal intent — they believe it to be true for everyone, not just for themselves. This is what distinguishes personal knowledge from mere subjective opinion. Knowledge grows in the “fertile soil of traditioned community and apprenticeship to masters” — what Polanyi called conviviality.
Implications for AI
The fiduciary framework maps surprisingly well onto problems in modern AI:
Trust in training data. Every machine learning system makes a fundamental fiduciary commitment: it trusts that its training data is representative of the reality it will encounter. When this trust is misplaced — when the data is biased, corrupted, or non-representative — the system fails. But unlike a human knower who can reflect on and question their commitments, most AI systems have no mechanism for interrogating the foundations of their own knowledge.
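One can at least bolt on a crude version of such interrogation: compare incoming inputs against training-set statistics and abstain when the implicit trust is unwarranted. The sketch below is the simplest possible "fiduciary check"; the threshold, function names, and data are illustrative choices, not a standard API:

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))  # the trusted data

mu, sigma = train.mean(axis=0), train.std(axis=0)

def trusted(x, z_max=4.0):
    """Return False if any feature of x lies far outside the training range,
    i.e. if the system's fiduciary commitment to its data no longer applies."""
    z = np.abs((x - mu) / sigma)
    return bool(np.all(z < z_max))

print(trusted(np.array([0.1, -0.5, 0.3])))  # in-distribution -> True
print(trusted(np.array([9.0, 0.0, 0.0])))   # far out of range -> False
```

Even this trivial check is more self-scrutiny than most deployed systems perform, which is the essay's point: the commitment is usually made once, silently, at training time.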
Community and tradition in learning. Polanyi emphasized that knowledge is transmitted through apprenticeship, imitation, and participation in a community of practice. This resonates with recent work in AI on learning from demonstration, human-in-the-loop training, and RLHF (Reinforcement Learning from Human Feedback). These methods implicitly acknowledge that knowledge cannot be fully formalized — it must be transmitted through example and correction, much as a master craftsman teaches an apprentice.
The alignment problem. Polanyi’s insistence that knowledge involves commitment with universal intent has echoes in the AI alignment problem. We want AI systems to make commitments (decisions, recommendations) that are aligned with human values — but how do you instill genuine commitment in a system that has no personal stake in truth?
4. Critique of Scientism: Against Reductionism
The Idea
Polanyi was, as one scholar put it, “a scientist against scientism.” He opposed several dominant assumptions:
- Objectivism: that genuine knowledge requires the complete elimination of the personal element.
- Positivism/Scientism: that science is the only real source of truth and provides a purely objective method.
- Reductionism: that all phenomena can be fully explained by reducing them to physics and chemistry.
His most original contribution here was the concept of dual control and boundary conditions, developed in his 1968 paper “Life’s Irreducible Structure” published in Science:
Every machine (and every living organism) is subject to two kinds of control. The lower level obeys the laws of physics and chemistry. But the upper level — the design, purpose, or organizational principle — harnesses those laws by imposing boundary conditions that physics leaves open. You cannot derive the meaning of a sentence from the physics of sound waves. You cannot derive the function of a machine from the chemistry of its materials. Reality forms a hierarchy:
physics < chemistry < biology < consciousness < culture
Each level relies on the principles below it but is irreducible to them. Higher-level principles exercise genuine “downward causation.”
Implications for AI
Polanyi’s anti-reductionism speaks directly to the architecture of modern AI systems and the challenge of building robots that genuinely understand the world:
The symbol grounding problem. Classical AI tried to build intelligence from symbols and rules (top-down). Deep learning tries to build it from raw sensory data (bottom-up). Polanyi would argue that neither approach alone can succeed, because meaning emerges at the boundary between levels — it is neither reducible to low-level patterns nor fully capturable in high-level rules. Modern neuro-symbolic AI and foundation models for robotics that combine learned representations with structured reasoning are, perhaps unknowingly, moving toward Polanyi’s vision.
Emergence in complex systems. When we train a large language model, emergent capabilities appear at scale that were not explicitly programmed. Polanyi’s framework of hierarchical levels with irreducible emergent properties provides a philosophical lens for understanding why these capabilities appear and why they resist reductionist explanation. The model’s “understanding” (if we can call it that) exists at a level that cannot be fully explained by examining individual weights or neurons.
Robot manipulation. A robot grasping an egg must integrate physics (forces, friction), perception (shape, texture), and purpose (don’t break it; place it gently). These operate at different levels of Polanyi’s hierarchy, and successful manipulation requires what he would call the higher level imposing boundary conditions on the lower. Current end-to-end learning approaches try to collapse this hierarchy into a single function approximator — which may explain why they succeed in narrow tasks but fail to generalize.
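Polanyi's dual-control picture can be caricatured in a few lines: a low-level "reflex" law, and a higher level that imposes boundary conditions the lower law leaves open. Everything here is hypothetical (the force law, the bounds, and the numbers are invented for illustration, not drawn from any controller):

```python
def low_level_force(slip_signal):
    """Physics-level reflex: squeeze harder when the object slips."""
    return 1.0 + 5.0 * slip_signal

def high_level_grasp(slip_signal, f_min=0.5, f_max=2.0):
    """Purpose-level control: clamp the reflex inside task-defined bounds.
    The bounds are not derivable from the low-level law; they encode the
    purpose 'hold the egg without crushing or dropping it'."""
    return min(max(low_level_force(slip_signal), f_min), f_max)

print(high_level_grasp(0.0))   # gentle hold: the reflex alone suffices
print(high_level_grasp(0.5))   # reflex wants 3.5; purpose caps it at 2.0
```

In an end-to-end learned policy there is no such explicit upper level; both the reflex and the purpose must be absorbed into one function, which is one reading of why the learned behavior transfers so poorly when the purpose changes.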
Conclusion: What Polanyi Teaches Us About the Future of AI
Michael Polanyi died in 1976, long before the deep learning revolution. Yet his ideas remain startlingly relevant:
| Polanyi’s Insight | Modern AI Challenge |
|---|---|
| Tacit knowledge cannot be fully articulated | Polanyi’s Paradox; the interpretability crisis |
| All knowledge is personal and committed | The alignment problem; lack of genuine understanding |
| Knowledge rests on trust and community | Data quality; learning from human feedback |
| Reality is hierarchically organized and irreducible | Symbol grounding; emergent capabilities; generalization |
Perhaps the deepest lesson from Polanyi is a kind of epistemic humility. The dream of fully explicit, fully formalized, fully objective knowledge — whether in science or in AI — is not merely difficult to achieve; it is, in principle, unachievable. All knowledge is rooted in tacit knowing, sustained by personal commitment, and embedded in a community of trust.
This does not mean we should stop building AI systems. It means we should build them with an awareness of what they cannot be — and design them to work with human knowers rather than to replace them. The most promising directions in AI today — human-in-the-loop learning, embodied robotics, neuro-symbolic reasoning, interpretable AI — all implicitly acknowledge Polanyi’s insights, even when they don’t cite his name.
As Polanyi might have said: the machine can learn more than it can tell us. The question is whether we can learn to trust what it knows — and whether it can learn to know what we trust.
Further Reading
- Polanyi, M. (1958). Personal Knowledge: Towards a Post-Critical Philosophy. University of Chicago Press.
- Polanyi, M. (1966). The Tacit Dimension. Doubleday.
- Polanyi, M. (1968). Life’s Irreducible Structure. Science, 160(3834), 1308–1312.
- Autor, D. (2014). Polanyi’s Paradox and the Shape of Employment Growth. NBER Working Paper 20485.
- Kambhampati, S. (2021). Polanyi’s Revenge and AI’s New Romance with Tacit Knowledge. Communications of the ACM, 64(2).
- Dreyfus, H. (1972). What Computers Can’t Do: A Critique of Artificial Reason. Harper & Row.
