Bootstrapping is the same pattern across compilers, AI, and startups: borrow external structure to cold-start, then recursively replace dependencies until the system sustains itself.
Polanyi’s framework of tacit knowledge and personal knowing offers a lens for understanding what AI still cannot do — and why embodied, context-dependent skill remains hard to formalize.
A survey of projects that compress human personas into AI agent skills — from departed colleagues to public figures — and what this trend reveals about AI, memory, and identity.
Autonomous agents are now cheap and networked enough to spread faster than any institution can contain — the question is what order emerges after control becomes partial.
As AI tools get more capable, the bottleneck shifts from tool friction to human cognitive bandwidth — compressing intent and steering effectively becomes the key skill.
The method learns two models: a world model trained on off-policy sequences via supervised learning, and an actor-critic that learns behaviors from trajectories imagined by the world model.
Data collection and learning updates are decoupled, so training never waits on the environment: a learner thread continuously updates the world model and actor-critic while, in parallel, an actor thread computes actions for environment interaction.
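The decoupled actor/learner pattern described above can be sketched as two threads sharing a replay buffer. This is an illustrative assumption of the architecture, not the paper's actual implementation: the names (`ReplayBuffer`, `actor_loop`, `learner_loop`) and the toy environment and "training" stand-ins are hypothetical.

```python
# Minimal sketch of a decoupled actor/learner loop: the actor fills a
# shared buffer from environment interaction while the learner trains
# continuously, never blocking on the environment.
import threading
import random
import time

class ReplayBuffer:
    """Thread-safe store for off-policy transitions collected by the actor."""
    def __init__(self):
        self._data = []
        self._lock = threading.Lock()

    def add(self, transition):
        with self._lock:
            self._data.append(transition)

    def sample(self, n=8):
        with self._lock:
            if not self._data:
                return []
            return random.choices(self._data, k=min(n, len(self._data)))

def actor_loop(buffer, stop, counts):
    """Actor thread: steps a toy 'environment' and logs transitions."""
    obs = 0.0
    while not stop.is_set():
        action = random.random()      # stand-in for policy(obs)
        next_obs = obs + action       # stand-in for env.step(action)
        buffer.add((obs, action, next_obs))
        obs = next_obs
        counts["actor"] += 1

def learner_loop(buffer, stop, counts):
    """Learner thread: trains on sampled sequences at its own pace."""
    while not stop.is_set():
        batch = buffer.sample()
        if batch:                     # stand-in for world-model + actor-critic updates
            counts["learner"] += 1

buffer = ReplayBuffer()
stop = threading.Event()
counts = {"actor": 0, "learner": 0}
threads = [threading.Thread(target=actor_loop, args=(buffer, stop, counts)),
           threading.Thread(target=learner_loop, args=(buffer, stop, counts))]
for t in threads:
    t.start()
time.sleep(0.2)                       # let both loops run concurrently
stop.set()
for t in threads:
    t.join()
```

The key design point is that neither loop ever waits for the other: the learner samples whatever the buffer currently holds, and the actor keeps collecting regardless of how many gradient steps have completed.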