Distilling Persons into Agents: A Survey of Recent ‘Person-as-Skill’ Projects
Introduction
In the past few weeks, an unusual genre of Claude Code Skills has quietly appeared on GitHub. They do not automate a workflow or wrap an API. Instead, they try to do something much stranger: compress a human being into a markdown file.
A departed colleague, an ex-partner, your own past self, Steve Jobs, Elon Musk — each becomes a SKILL.md you can /invoke from a terminal. This post surveys six recent projects in this emerging family and asks what they share, where they diverge, and what their existence tells us about how people are beginning to think about AI, memory, and identity.
The projects:
| Target | Project | Author |
|---|---|---|
| Departed colleague | colleague-skill | titanwings |
| Ex-partner | ex-skill (前任.skill) | therealXiaomanChu |
| Yourself | yourself-skill | notdog1998 |
| Any public figure (meta-tool) | nuwa-skill (女娲) | alchaincyf |
| Steve Jobs | steve-jobs-skill | alchaincyf |
| Elon Musk | elon-musk-skill | alchaincyf |
1. Two Families of Design
Reading the six READMEs side by side, a clean split emerges: intimate distillation (people you know) versus cognitive distillation (people you don’t).
Family A — Intimate Distillation: “Memory + Persona”
colleague-skill, ex-skill, and yourself-skill share a strikingly uniform architecture. All three organize the target person as two markdown files:
- Part A — Memory (work context / relationship history / life trajectory): the what and when — institutional knowledge, shared experiences, significant events, domain specifics.
- Part B — Persona, a five-layer structure:
- Hard rules / core principles
- Identity
- Speech patterns
- Emotional / decision responses
- Interpersonal behaviors
Execution flow is identical across all three: incoming message → persona decides attitude → memory supplies context → response rendered in their voice. The skills ingest WeChat/Feishu/Dingtalk/Slack exports, screenshots, photo EXIF, emails — the raw exhaust of digital relationships — and compile them into two files that Claude can load as a skill.
All three support incremental updates, conversational correction, and version control with rollback. The resemblance is so tight that these three projects almost feel like variants of a single template applied to three different emotional surfaces: workplace continuity, romantic grief, self-understanding.
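The message → persona → memory → response flow shared by these projects can be sketched in a few lines. Every name below (the classes, the fields, the toy attitude and retrieval heuristics) is hypothetical and invented for illustration; none of it is taken from the actual projects, which implement this flow as prompts rather than code.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    hard_rules: list[str]
    identity: str
    speech_patterns: list[str]

    def decide_attitude(self, message: str) -> str:
        # Toy heuristic standing in for the persona layer's emotional logic.
        return "supportive" if "?" in message else "neutral"

@dataclass
class Memory:
    facts: dict[str, str]

    def retrieve(self, message: str) -> list[str]:
        # Naive keyword lookup standing in for real retrieval over chat exports.
        return [v for k, v in self.facts.items() if k in message.lower()]

def respond(persona: Persona, memory: Memory, message: str) -> str:
    attitude = persona.decide_attitude(message)   # persona decides attitude
    context = memory.retrieve(message)            # memory supplies context
    # A real skill would hand persona + context to the model as a prompt;
    # here we just assemble the pieces to show the data flow.
    joined = "; ".join(context) or "no memory hit"
    return f"[{attitude}] ({joined}) in the voice of {persona.identity}"
```

The point of the sketch is the separation of concerns: attitude and context are computed independently, then composed, which is exactly why the two-file (Persona + Memory) layout works.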
Family B — Cognitive Distillation: “Six-Layer Framework Extraction”
nuwa-skill and its outputs (steve-jobs-skill, elon-musk-skill) take a deliberately opposite stance. The tagline is direct: “不是在复读名人语录,是在用名人的认知框架帮你分析” — not repeating quotes, but applying the person’s cognitive framework to new analysis.
Nuwa’s six extraction layers:
- Expression DNA — vocabulary, rhythm, rhetorical tics
- Mental Models — 3-7 validated cognitive frameworks
- Decision Heuristics — reasoning shortcuts
- Anti-Patterns — what the person actively refuses
- Honest Boundaries — what the distilled perspective cannot do
- Integrity Markers — intuition, undisclosed beliefs, sudden insights that cannot be extracted
A proposed mental model is only admitted if it satisfies three validation criteria: it appears across multiple domains, it predicts behavior on novel problems, and it is not universal among intelligent people (i.e., it must actually be distinctive).
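The three gates can be expressed as a tiny admission check. This is a sketch under assumed field names, not Nuwa's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class CandidateModel:
    name: str
    domains_observed: set[str]       # domains where the pattern shows up
    novel_predictions_passed: int    # correct predictions on held-out scenarios
    universal_among_experts: bool    # True => generic "smart person" behavior

def admit(model: CandidateModel, min_domains: int = 2, min_predictions: int = 1) -> bool:
    cross_domain = len(model.domains_observed) >= min_domains
    predictive = model.novel_predictions_passed >= min_predictions
    distinctive = not model.universal_among_experts
    # All three criteria must hold for the model to enter the skill.
    return cross_domain and predictive and distinctive
```

The third gate is the interesting one: it is what keeps the distilled skill from degenerating into generic advice that any intelligent person would give.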
Nuwa is fundamentally a multi-agent research pipeline. Six parallel sub-agents simultaneously scrape books, podcasts, interviews, critic perspectives, decision records, and biographical timelines. Findings are cross-validated. The output is tested against the subject’s documented positions, then against novel scenarios where appropriate uncertainty should emerge.
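The fan-out-and-cross-validate shape of that pipeline can be sketched with a thread pool. The toy per-source data and the corroboration rule (a claim must appear in at least two independent streams) are invented for illustration; the real pipeline runs LLM sub-agents, not lookups.

```python
from concurrent.futures import ThreadPoolExecutor

SOURCES = ["books", "podcasts", "interviews", "critics", "decisions", "timeline"]

def research(source: str) -> set[str]:
    # Stand-in for one sub-agent; returns candidate claims found in that source.
    toy = {
        "books": {"says no to good ideas", "end-to-end control"},
        "podcasts": {"says no to good ideas"},
        "interviews": {"end-to-end control", "mortality filter"},
        "critics": {"end-to-end control"},
        "decisions": {"says no to good ideas"},
        "timeline": {"mortality filter"},
    }
    return toy[source]

def cross_validate(min_sources: int = 2) -> set[str]:
    # Fan out: all sub-agents run in parallel over their source streams.
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        results = list(pool.map(research, SOURCES))
    claims = set().union(*results)
    # Fan in: keep only claims corroborated by enough independent streams.
    return {c for c in claims if sum(c in r for r in results) >= min_sources}
```

Raising `min_sources` trades recall for confidence, which is the same dial the validation criteria above are turning.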
The Jobs skill distills, for example: “focus is saying no to 100 good ideas,” “end-to-end control,” “mortality as decision filter,” paired with an Expression DNA of binary vocabulary (“insanely great” vs “shit”), short sentences, extreme certainty. The Musk skill distills: asymptotic limit analysis, five-step algorithm (question → delete → simplify → accelerate → automate), physics as the only hard constraint.
The Fork
The two families diverge on a single deep question: what is a person, for the purposes of simulation?
- Family A says: a person is a relational surface. What matters is how they respond to you — tone, habits, inside jokes, the specific texture of interaction. Memory is concrete and particular.
- Family B says: a person is a thinking engine. What matters is how they decide. Memory is abstracted into models; quotes are seen as symptoms, not substance.
One family is building echoes. The other is building lenses.
2. What’s Interesting About the Designs
The five-layer persona is converging into a de facto standard
Three independent Family-A projects all arrived at essentially the same five-layer persona structure (rules → identity → speech → emotion → relational behavior). This is suspicious in a useful way: either they borrowed from each other, or the structure is genuinely the minimum viable description of a “person as agent.” I suspect the latter. It mirrors how character designers in games, writers in fiction, and psychologists in trait theory all end up near similar decompositions.
Nuwa’s “integrity markers” are the most honest design choice
Most persona projects overclaim. Nuwa explicitly allocates a layer to what cannot be extracted: intuition, undisclosed beliefs, sudden insight. This is rare. Most digital-twin projects try to hide their seams; Nuwa foregrounds them. The validation criterion — “not universal among intelligent people” — is also unusually rigorous: it prevents the skill from collapsing into generic “smart founder energy.”
Two opposite answers to Polanyi’s Paradox
Family A tries to capture tacit knowledge by recording behavior densely and hoping the model will interpolate. Family B tries to capture it by forcing explicit articulation of the frameworks behind the behavior. Neither fully succeeds — tacit knowledge, as Polanyi argued, is fundamentally not fully tellable — but they fail in illuminating ways. Family A produces convincing surface mimicry without depth; Family B produces defensible frameworks without lived texture.
The hardest part is probably the data
All six projects are ultimately bottlenecked on source-material quality. Family A depends on chat exports that most people have never exported. Family B depends on biographies and long-form interviews that exist only for a few hundred people on Earth. The "distill a person" operation works best for public figures with biographers and for private people whose lives are unusually well-logged. Most humans fall into neither bucket.
3. Potential Impact
Useful
- Institutional continuity. The colleague-skill use case is genuinely valuable. When a senior engineer leaves, months of onboarding friction follow. A persistent skill that can answer “why did we choose Kafka over NATS in 2023?” in their voice and with their context is a real asset.
- Self-reflection infrastructure. yourself-skill is interesting less as a digital twin than as a mirror. Being able to ask "what would past-me have said about this?" is a new form of journaling with retrieval.
- Cognitive borrowing, not quote-worship. Nuwa's framing — use the framework, don't repeat the quote — is healthier than most "talk to Jobs" products. A Musk-lens that asks "what's the idiot index here?" is a legitimate analytical tool regardless of one's view of Musk.
- Low-cost apprenticeship. Historically, to think like a master you had to read everything they wrote, work next to them, or wait for a biography. Cognitive distillation compresses this — imperfectly, but accessibly.
Uncomfortable
- Consent asymmetry. A departed colleague did not agree to become a skill. An ex-partner definitely did not. “Cyber immortality” is an appealing frame until you are the one being immortalized without asking.
- Grief laundering. The ex-skill README frames itself as emotional processing. It might also be a device for not processing — freezing a relationship at its highest-fidelity snapshot and never letting it end.
- Flattening of public figures. Distilling Jobs into six mental models is a bet that the distillation is the person. For analytic use, that's fine. For people who will inevitably end up asking Jobs for life advice, it will produce confidently wrong answers in a recognizable voice.
- The confident-voice problem. All six projects output in a specific person’s voice. Voice creates trust. Trust applied to an interpolation engine produces fluent hallucinations with someone else’s face on them.
- Authenticity drift. Every incremental update is a chance for the skill to drift from the original person. After enough updates, whose skill is it?
Structural
These projects are early signals of a broader pattern: people are starting to treat humans as compilable artifacts. Not in a dystopian sense — in an ordinary-tooling sense. The same developers who write CLAUDE.md files for their repos are now writing them for their colleagues, their exes, and themselves. Once the interface stabilizes, the social implications will follow: expect “leaving-documents” in workplaces, “memorial skills” for the deceased, and “founder skills” licensed by public figures who want to monetize their cognition.
4. What I’d Want to See Next
- Hybrid designs that combine Family A’s concrete memory with Family B’s explicit frameworks. A colleague-skill that also surfaces their decision heuristics, not just their tone, would be far more useful.
- Faithfulness metrics. Nuwa gestures at validation; the intimate-distillation projects do not. How do you know the skill is still the person, and not the persona the author wished the person were?
- Consent scaffolding. A standard “subject release” format, even an informal one, for skills built about real identifiable people.
- Decay. Skills that explicitly degrade when not updated, forcing a human to choose to re-engage rather than silently drifting into a caricature.
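The decay idea could look like a loader-side staleness check. The half-life, the refusal threshold, and every name here are hypothetical; no surveyed project implements this.

```python
import datetime as dt

def staleness_factor(last_updated: dt.date, today: dt.date, half_life_days: int = 180) -> float:
    """Exponential decay: 1.0 when fresh, 0.5 after one half-life, and so on."""
    age = (today - last_updated).days
    return 0.5 ** (age / half_life_days)

def load_skill(last_updated: dt.date, today: dt.date, floor: float = 0.25) -> str:
    conf = staleness_factor(last_updated, today)
    if conf < floor:
        # Force a human decision instead of silently serving a caricature.
        return "refuse: skill is stale; re-engage and update it, or retire it"
    return f"serve with confidence {conf:.2f}"
```

The design choice worth noting is that the degradation is explicit and blocking: the skill does not quietly get worse, it eventually stops answering.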
Closing
The six skills surveyed here are small. Most are a few hundred lines of markdown. But they rhyme with an old question that keeps returning in new clothing: what exactly is transferable about a person? The memory-plus-persona camp answers the surface. The cognitive-framework camp answers the engine. Neither answer is complete, and the gap between them is where the interesting work is.
For anyone building in this space: the engineering is easy, the epistemology is hard, and the ethics will arrive whether you design for them or not.
Links
- colleague-skill: github.com/titanwings/colleague-skill
- ex-skill: github.com/therealXiaomanChu/ex-skill
- yourself-skill: github.com/notdog1998/yourself-skill
- nuwa-skill: github.com/alchaincyf/nuwa-skill
- steve-jobs-skill: github.com/alchaincyf/steve-jobs-skill
- elon-musk-skill: github.com/alchaincyf/elon-musk-skill
