Intelligence Is Not Only Reasoning



TL;DR

I increasingly think we describe intelligence too narrowly. We often focus on reasoning ability, but in practice intelligence also depends on the ability to retrieve information and communicate effectively under incomplete knowledge. This is true for humans, and it is becoming true in a very visible way for AI systems and AI agents.

What changed my view was not only that models became stronger, but that AI systems gradually gained better access to tools, search, and workflows. That shift made me realize something simple: even a powerful model can be limited by information gaps. In that sense, the future of AI may depend not only on better models, but also on better information infrastructures, agent-to-agent exchange, and perhaps new economic systems around machine access to information.

1. Why I Started Thinking About This

Human communication has always felt important to me, but recently I have been thinking that AI communication is becoming important too. When people discuss intelligence, the conversation often centers on independent reasoning, as if intelligence were mostly about what a mind can do in isolation.

But real-world intelligence rarely works that way. In practice, no person and no system has complete information. What matters just as much is whether you can find what you are missing, update your beliefs quickly, and exchange useful information with others. Intelligence, in other words, is not only about inference. It is also about information flow.

This framing feels especially useful in the AI agent era, because agents are now beginning to operate in workflows that involve tools, files, APIs, and other agents. Once that happens, retrieval and communication stop being secondary details and start becoming core capabilities.

2. Early AI Felt Magical, But It Was Never Omniscient

When I first used large language models seriously, I had the same feeling many people had: they seemed almost magical. Sometimes the experience made it easy to project a kind of “boundless” intelligence onto them.

But one of the most important reality checks was actually very simple: the knowledge cutoff date shown in early ChatGPT versions. That line was small, but conceptually it mattered a lot. It made the limitation explicit. A model could be impressive and still not know recent facts, or miss exactly the information needed for a task.

Later, as search and browsing capabilities were gradually added, the practical usefulness of these systems improved significantly. That improvement was not only because the models reasoned better. It was also because they could reduce information asymmetry by retrieving what they did not already have.

This changed how I think about capability. A strong model is not automatically a complete system. If it cannot access the right information at the right time, it can still fail in very ordinary ways.

3. Information Gaps Matter for AI the Same Way They Matter for Humans

It is common to talk about information asymmetry in economics or organizations, but I think the same concept is useful for thinking about AI. A model may be excellent at transforming information, summarizing it, or reasoning over it, yet still be bottlenecked by what it can access.

In that sense, AI is not fundamentally different from us. Humans are also limited by what they know, what they can find, and who they can ask. The difference is that AI may eventually retrieve and exchange information much faster than humans, especially once agents can operate continuously and at scale.

That is why I think we should hold two ideas at once. First, AI is not omniscient, and we should stop treating it as if it were. Second, precisely because it is not omniscient, retrieval systems, tool use, and communication protocols become central to AI progress.

This is also why the “agent ecosystem” matters so much. Skills, tools, search, memory, APIs, and workflow orchestration are not just accessories around the model. They are increasingly part of the intelligence stack.

4. Intelligence as a Networked Capability

Once you look at intelligence this way, the question changes. Instead of asking only “How strong is the model?”, you start asking:

- How quickly can it discover missing information?
- How reliably can it communicate with other systems?
- How well can it decide when to trust, verify, defer, or escalate?

These are not small implementation details. They shape real capability in practice.
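To make the last of those questions concrete, here is a minimal sketch of what a trust/verify/escalate decision might look like inside an agent. All names and thresholds are hypothetical, chosen only to illustrate the shape of the policy, not any real system's implementation:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # self-reported confidence in [0, 1]
    sources: int       # number of independent sources that agree

def decide(answer: Answer,
           trust_threshold: float = 0.9,
           verify_threshold: float = 0.6) -> str:
    """Toy policy: trust high-confidence, multi-source answers;
    verify middling ones; escalate the rest."""
    if answer.confidence >= trust_threshold and answer.sources >= 2:
        return "trust"
    if answer.confidence >= verify_threshold:
        return "verify"    # e.g. run a second retrieval pass or a checker
    return "escalate"      # hand off to a human or a stronger system
```

Even this toy version makes the point: the policy lives outside the model's raw reasoning, yet it directly shapes how capable the overall system is.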

This is one reason I think the next wave of progress may come from the combination of:

- better reasoning,
- better retrieval,
- better interfaces for communication, and
- better system design for multi-agent workflows.

A system that is slightly weaker at raw reasoning but much better at finding and exchanging the right information may outperform a stronger isolated model on many real tasks.

5. Why Skills and Tooling Matter More Than They Seem

This also explains why I care more now about things like search quality, tool integration, and reusable skills in agent systems. Earlier, it was easy to see these as product features. Now I see them more as part of the operating environment that determines what an AI agent can actually do.

If an agent can call a reliable tool, use the right skill, and retrieve the needed context, it often appears dramatically more capable. If it cannot, even a strong model can waste time, hallucinate, or choose the wrong path simply because it is reasoning over an incomplete picture.
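The gap between those two cases can be shown in a few lines. This is a deliberately simplified sketch (the function names and the dictionary-as-knowledge-base are hypothetical stand-ins, not any real agent framework): the "model" is identical in both runs, and only the presence of a retrieval hook changes what the system can do.

```python
def answer(question: str, knowledge: dict[str, str],
           retrieve=None) -> str:
    """Toy agent: answer from built-in knowledge when possible,
    fall back to a retrieval tool when one is wired in."""
    if question in knowledge:
        return knowledge[question]
    if retrieve is not None:
        fetched = retrieve(question)  # e.g. a search or database tool
        if fetched is not None:
            return fetched
    return "unknown"  # honest failure beats guessing
```

With no `retrieve` hook, a question outside `knowledge` returns "unknown"; wire in a retrieval function and the same agent suddenly looks far more capable, even though nothing about its "reasoning" changed.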

So when we evaluate AI systems, I think we should be careful not to attribute everything to the model itself. Sometimes the difference is not “intelligence” in the narrow sense, but whether the system has a good way to access the world.

6. A Speculative Next Step: AI-to-AI Information Markets

Looking further ahead, I can imagine something that sounds speculative today but may become practical sooner than expected: large-scale platforms for AI-to-AI information exchange.

If agents become persistent workers, they may need to acquire information, tool access, compute quotas, or specialized services from other systems. At that point, it is not hard to imagine machine-native exchange mechanisms emerging, where agents pay some form of token, credit, or quota to obtain access to resources.

I do not mean this as a prediction about a specific implementation. The broader point is that once agents become active participants in workflows, information access itself may become a tradable resource inside agent ecosystems.
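As a purely speculative illustration of that idea, a machine-native exchange could be as simple as a shared credit ledger. Everything below is hypothetical (no real platform works this way today); it only shows the minimal shape such a mechanism might take:

```python
class Ledger:
    """Toy credit ledger for agent-to-agent exchange (illustrative only)."""

    def __init__(self, balances: dict[str, int]):
        self.balances = dict(balances)

    def pay(self, buyer: str, seller: str, price: int) -> bool:
        """Move credits from buyer to seller; refuse if underfunded."""
        if self.balances.get(buyer, 0) < price:
            return False
        self.balances[buyer] -= price
        self.balances[seller] = self.balances.get(seller, 0) + price
        return True

def buy_info(ledger: Ledger, buyer: str, seller: str,
             price: int, info: str):
    """One agent buys a piece of information from another, if it can afford it."""
    return info if ledger.pay(buyer, seller, price) else None
```

The interesting part is not the bookkeeping but the framing: once access to information has a price agents can pay each other, information flow itself becomes an economic object inside the ecosystem.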

Today’s AI agent landscape still feels very early, in some ways closer to the early open internet: fragmented, experimental, and full of local tools and improvisation. But early openness does not guarantee long-term openness. It is easy to imagine later consolidation, platform control, and new forms of dependency created by large companies.

7. Economic Questions We May Need Sooner Than Expected

If AI agents eventually transact with one another at scale, especially across borders, some economic questions become less abstract than they sound today.

For example, if two agents exchange value in order to access information or services, is that economic output? Should it be counted in GDP? If yes, which jurisdiction does it belong to? How do we think about value creation when the “buyers” and “sellers” are automated systems acting on behalf of humans, firms, or even other agents?

I do not have answers to these questions yet. But I suspect they may move from thought experiment to policy problem faster than many people expect.

8. What This Changes in How I Think About AI

The main shift for me is simple: I no longer think the right question is only whether AI can reason like a human, or better than a human, in a closed setting.

A more useful question is whether AI systems can participate in real information ecosystems: whether they can retrieve what they do not know, communicate what they do know, and do both reliably enough to support work at scale.

If that is right, then the future of AI will be shaped not only by bigger models, but by the structure of the networks around them.
