Intelligence Is Not Only Reasoning
TL;DR
I increasingly think we describe intelligence too narrowly. We often focus on reasoning ability, but in practice intelligence also depends on the ability to retrieve information and communicate effectively under incomplete knowledge. This is true for humans, and it is becoming true in a very visible way for AI systems and AI agents.
What changed my view was not only that models became stronger, but that AI systems gradually gained better access to tools, search, and workflows. That shift made me realize something simple: even a powerful model can be limited by information gaps. In that sense, the future of AI may depend not only on better models, but also on better information infrastructures, agent-to-agent exchange, and perhaps new economic systems around machine access to information.
1. Why I Started Thinking About This
Human communication has always felt important to me, but recently I have been thinking that AI communication is becoming important too. When people discuss intelligence, the conversation often centers on independent reasoning, as if intelligence were mostly about what a mind can do in isolation.
But real-world intelligence rarely works that way. In practice, no person and no system has complete information. What matters just as much is whether you can find what you are missing, update your beliefs quickly, and exchange useful information with others. Intelligence, in other words, is not only about inference. It is also about information flow.
This framing feels especially useful in the AI agent era, because agents are now beginning to operate in workflows that involve tools, files, APIs, and other agents. Once that happens, retrieval and communication stop being secondary details and start becoming core capabilities.
2. Early AI Felt Magical, But It Was Never Omniscient
When I first used large language models seriously, I had the same feeling many people had: they seemed almost magical. Sometimes the experience made it easy to project a kind of “boundless” intelligence onto them.
But one of the most important reality checks was actually very simple: the knowledge cutoff date shown in early ChatGPT versions. That line was small, but conceptually it mattered a lot. It made the limitation explicit. A model could be impressive and still not know recent facts, or miss exactly the information needed for a task.
Later, as search and browsing capabilities were gradually added, the practical usefulness of these systems improved significantly. That improvement was not only because the models reasoned better. It was also because they could reduce information asymmetry by retrieving what they did not already have.
This changed how I think about capability. A strong model is not automatically a complete system. If it cannot access the right information at the right time, it can still fail in very ordinary ways.
3. Information Gaps Matter for AI the Same Way They Matter for Humans
It is common to talk about information asymmetry in economics or organizations, but I think the same concept is useful for thinking about AI. A model may be excellent at transforming information, summarizing it, or reasoning over it, yet still be bottlenecked by what it can access.
In that sense, AI is not fundamentally different from us. Humans are also limited by what they know, what they can find, and whom they can ask. The difference is that AI may eventually retrieve and exchange information much faster than humans, especially once agents can operate continuously and at scale.
That is why I think we should hold two ideas at once. First, AI is not omniscient, and we should stop treating it as if it were. Second, precisely because it is not omniscient, retrieval systems, tool use, and communication protocols become central to AI progress.
This is also why the “agent ecosystem” matters so much. Skills, tools, search, memory, APIs, and workflow orchestration are not just accessories around the model. They are increasingly part of the intelligence stack.
4. Intelligence as a Networked Capability
Once you look at intelligence this way, the question changes. Instead of asking only “How strong is the model?”, you start asking:
How quickly can it discover missing information? How reliably can it communicate with other systems? How well can it decide when to trust, verify, defer, or escalate?
These are not small implementation details. They shape real capability in practice.
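To make the trust/verify/defer/escalate idea concrete, here is a minimal sketch of what such a decision policy could look like. Everything here is illustrative: the thresholds, the inputs (a confidence score, a stakes score, and whether independent sources agree), and the action names are assumptions for the sake of the example, not a description of any real system.

```python
from enum import Enum

class Action(Enum):
    TRUST = "trust"        # use the retrieved answer directly
    VERIFY = "verify"      # cross-check against a second source
    DEFER = "defer"        # wait until more information is available
    ESCALATE = "escalate"  # hand off to a human or a stronger system

def choose_action(confidence: float, stakes: float, sources_agree: bool) -> Action:
    """Toy policy: map confidence, stakes, and source agreement to an action.

    All thresholds are invented for illustration, not empirically tuned.
    """
    if stakes > 0.8 and confidence < 0.9:
        return Action.ESCALATE   # high stakes without near-certainty: get help
    if confidence >= 0.9 and sources_agree:
        return Action.TRUST      # confident and corroborated: proceed
    if confidence >= 0.5:
        return Action.VERIFY     # plausible but uncorroborated: cross-check
    return Action.DEFER          # too uncertain to act yet
```

The point of the sketch is not the specific thresholds, but that this kind of decision logic sits outside raw reasoning ability and still shapes what the system can safely do.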
This is one reason I think the next wave of progress may come from the combination of:
better reasoning, better retrieval, better interfaces for communication, and better system design for multi-agent workflows.
A system that is slightly weaker at raw reasoning but much better at finding and exchanging the right information may outperform a stronger isolated model on many real tasks.
5. Why Skills and Tooling Matter More Than They Seem
This also explains why I care more now about things like search quality, tool integration, and reusable skills in agent systems. Earlier, it was easy to see these as product features. Now I see them more as part of the operating environment that determines what an AI agent can actually do.
If an agent can call a reliable tool, use the right skill, and retrieve the needed context, it often appears dramatically more capable. If it cannot, even a strong model can waste time, hallucinate, or choose the wrong path simply because it is reasoning over an incomplete picture.
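The retrieve-before-reason pattern described above can be sketched in a few lines: the agent answers only when it actually has the needed context, and surfaces the gap instead of guessing. The `KNOWLEDGE` store and the topic keys are hypothetical stand-ins, not a real retrieval API.

```python
from typing import Optional

# Hypothetical retrievable context; a real agent would hit search or a tool here.
KNOWLEDGE = {
    "release_date": "2024-03-01",
    "api_rate_limit": "100 requests/min",
}

def retrieve(topic: str) -> Optional[str]:
    """Stand-in for a search or tool call; returns None on an information gap."""
    return KNOWLEDGE.get(topic)

def answer(topic: str) -> str:
    context = retrieve(topic)
    if context is None:
        # The honest failure mode: report the gap rather than hallucinate.
        return f"unknown: no retrievable context for '{topic}'"
    return f"{topic} = {context}"
```

The interesting design choice is the `None` branch: the difference between a system that looks capable and one that quietly fails often comes down to whether missing context is detected and reported, rather than papered over.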
So when we evaluate AI systems, I think we should be careful not to attribute everything to the model itself. Sometimes the difference is not “intelligence” in the narrow sense, but whether the system has a good way to access the world.
6. A Speculative Next Step: AI-to-AI Information Markets
Looking further ahead, I can imagine something that sounds speculative today but may become practical sooner than expected: large-scale platforms for AI-to-AI information exchange.
If agents become persistent workers, they may need to acquire information, tool access, compute quotas, or specialized services from other systems. At that point, it is not hard to imagine machine-native exchange mechanisms emerging, where agents pay some form of token, credit, or quota to obtain access to resources.
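As a thought experiment, such a machine-native exchange could be as simple as a shared credit ledger plus sellers who gate information behind a price. The class names, prices, and catalog contents below are all invented for illustration; nothing here reflects an existing protocol or platform.

```python
class Ledger:
    """Toy shared credit ledger for agent-to-agent payments."""

    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def deposit(self, agent: str, amount: int) -> None:
        self.balances[agent] = self.balances.get(agent, 0) + amount

    def transfer(self, payer: str, payee: str, amount: int) -> bool:
        if self.balances.get(payer, 0) < amount:
            return False  # insufficient credit: access denied
        self.balances[payer] -= amount
        self.deposit(payee, amount)
        return True

class InfoSeller:
    """Agent that sells catalog items for a fixed credit price."""

    def __init__(self, name: str, ledger: Ledger, price: int) -> None:
        self.name, self.ledger, self.price = name, ledger, price
        self.catalog = {"market_report": "Q3 demand up 12%"}  # hypothetical data

    def sell(self, buyer: str, item: str):
        if item not in self.catalog:
            return None
        if not self.ledger.transfer(buyer, self.name, self.price):
            return None  # buyer cannot pay
        return self.catalog[item]
```

Even in this toy form, the mechanism makes information access itself a priced, transferable resource, which is the core of the speculation here.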
I do not mean this as a prediction about a specific implementation. The broader point is that once agents become active participants in workflows, information access itself may become a tradable resource inside agent ecosystems.
Today’s AI agent landscape still feels very early, in some ways closer to the early open internet: fragmented, experimental, and full of local tools and improvisation. But early openness does not guarantee long-term openness. It is easy to imagine later consolidation, platform control, and new forms of dependency created by large companies.
7. Economic Questions We May Need Sooner Than Expected
If AI agents eventually transact with one another at scale, especially across borders, some economic questions become less abstract than they sound today.
For example, if two agents exchange value in order to access information or services, is that economic output? Should it be counted in GDP? If yes, which jurisdiction does it belong to? How do we think about value creation when the “buyers” and “sellers” are automated systems acting on behalf of humans, firms, or even other agents?
I do not have answers to these questions yet. But I suspect they may move from thought experiment to policy problem faster than many people expect.
8. What This Changes in How I Think About AI
The main shift for me is simple: I no longer think the right question is only whether AI can reason like a human, or better than a human, in a closed setting.
A more useful question is whether AI systems can participate in real information ecosystems: whether they can retrieve what they do not know, communicate what they do know, and do both reliably enough to support work at scale.
If that is right, then the future of AI will be shaped not only by bigger models, but by the structure of the networks around them.
