Hank, my AI assistant, writes things down. Daily notes, preferences, decisions — it's all on disk as Markdown. The problem I kept running into: the longer it ran, the more it knew, but the worse it got at organizing what it knew. A flat MEMORY.md file and a folder of daily logs isn't a knowledge system. It's a pile. So I built one.
First: Reframing MEMORY.md
Before building anything, I had to fix a conceptual problem. My MEMORY.md had become a dumping ground — project details, contact info, system config, behavioral rules, all mixed together. None of it was organized around the question that actually matters: how do I work with this person?
The new rule: MEMORY.md is only for patterns, preferences, and rules of engagement. Not facts about the world. Not project status. Not contacts. Just the distilled understanding of how to work with me specifically — what I value, how I think, what's gotten the assistant in trouble before, and what I've explicitly asked for.
Everything else needed a different home.
The PARA System
I organized the rest using Tiago Forte's PARA method, which sorts all information into four buckets:
Projects — Active work with a specific goal and end date. Job search, website work, ongoing builds.
Areas — Ongoing responsibilities with no end date. Career development, the assistant system itself.
Resources — Reference material on topics of interest. Tech notes, contacts, how-to knowledge.
Archives — Completed or inactive items from the other three. Keeps things tidy without deleting anything.
The knowledge base lives at knowledge/ in the assistant workspace, right alongside MEMORY.md and the daily notes — so the same search tooling (QMD) indexes everything together.
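Concretely, the workspace layout looks roughly like this (the `archives/` folder name is my assumption; the post only names the Archives bucket):

```
workspace/
├── MEMORY.md          # patterns, preferences, rules of engagement only
├── memory/            # daily notes, one file per day (YYYY-MM-DD.md)
└── knowledge/         # the PARA knowledge base
    ├── projects/      # active work with an end date
    ├── areas/         # ongoing responsibilities
    ├── resources/     # reference material
    └── archives/      # completed or inactive items
```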
Daily Notes → PARA: The Extraction Schema
Hank already keeps a daily log (memory/YYYY-MM-DD.md) — a fast, raw, append-only record of what happened each day. That stays. But raw notes aren't useful long-term without distillation.
Each night, Hank's review job reads the day's log and extracts seven categories of durable facts:
1. Key decisions made — anything that changed direction or set a constraint
2. Projects discussed — progress, blockers, next steps
3. People mentioned — who came up and in what context
4. Status changes — started, completed, blocked, archived
5. Lessons learned / gotchas — things that went wrong or produced insight
6. Preferences stated — any explicit statement about how I want things done
7. Open questions / follow-ups — unresolved things that need to come back up
Items 1–5 feed the PARA files. Items 6–7 feed MEMORY.md directly — those are behavioral facts, not project facts.
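As a sketch, the routing rule is simple enough to write down as a table. The names here (`EXTRACTION_SCHEMA`, `destination`) are hypothetical, not part of Hank's actual code:

```python
# Hypothetical sketch: each extraction category maps to where its
# facts land after the nightly review. Categories 1-5 are project
# facts (PARA); 6-7 are behavioral facts (MEMORY.md).
EXTRACTION_SCHEMA = {
    "key_decisions":   "knowledge/",
    "projects":        "knowledge/",
    "people":          "knowledge/",
    "status_changes":  "knowledge/",
    "lessons_learned": "knowledge/",
    "preferences":     "MEMORY.md",
    "open_questions":  "MEMORY.md",
}

def destination(category: str) -> str:
    """Return where facts from a category should be filed."""
    return EXTRACTION_SCHEMA[category]
```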
Why not capture directly into PARA? Speed. The daily note is zero-friction — just log everything as it happens. Deciding where something belongs in PARA requires judgment, and doing that mid-conversation slows things down. The nightly job handles organization when there's no urgency.
Memory Decay
Not all memories are equally relevant. A decision from this morning matters more than a note from four months ago — even if the older note is a better semantic match in search.
We track every file in the knowledge base in a decay index with three fields: last_accessed, access_count, and temperature. Temperature has three states:
🔥 Hot — Accessed in the last 7 days — Featured prominently in summaries and context.
🌡 Warm — 8–30 days since last access — Included but deprioritized.
🧊 Cold — 30+ days since last access — Dropped from active summaries but never deleted. Still searchable.
The twist: frequency resistance. Files that have been accessed many times require longer inactivity before they cool down. A note you've referenced 20 times stays warm much longer than one you read once. High-signal facts resist decay; incidental ones fade faster.
Cold facts are never deleted. When something cold becomes relevant again — via search, conversation, or the nightly review — it gets "reheated": last_accessed resets, the clock starts over.
The Nightly Review Job
Everything above runs automatically. At midnight, Hank spins up as an isolated sub-agent that:
1. Reads today's daily note
2. Runs the 7-category extraction
3. Updates the relevant PARA files
4. Updates MEMORY.md if new behavioral patterns emerged
5. Recalculates decay temperatures across the entire knowledge base
It runs silently — no notification unless something urgent surfaces. I wake up with the knowledge base already updated.
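The post doesn't say how the midnight trigger is wired; a plain cron entry would do the job. The script name and log path below are placeholders:

```
# Hypothetical crontab entry: run the nightly review at midnight,
# appending output to a log instead of notifying anyone.
0 0 * * * /usr/bin/python3 /path/to/nightly_review.py >> /var/log/hank-review.log 2>&1
```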
How It All Fits Together
```
# During the day
Conversation happens → raw notes go to memory/YYYY-MM-DD.md

# At midnight
Nightly job reads daily note → extracts 7 categories
  → updates knowledge/projects/, areas/, resources/
  → updates MEMORY.md (behavior/preferences only)
  → recalculates decay temperatures

# On any search
QMD searches all of: MEMORY.md + memory/ + knowledge/
Hot facts surface first, cold facts fade from summaries
```
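The "hot facts surface first" step can be sketched as a re-rank over whatever the search layer returns. This is an assumption about how QMD results get combined with temperature, not QMD's actual API:

```python
# Hypothetical re-ranking sketch: the search layer returns semantic
# matches, and decay temperature biases the ordering toward fresher
# material before score breaks ties.
TEMP_BOOST = {"hot": 1.0, "warm": 0.5, "cold": 0.0}

def rank(results: list[tuple[str, float, str]]) -> list[tuple[str, float, str]]:
    """results: (path, relevance_score, temperature) tuples."""
    return sorted(results,
                  key=lambda r: (TEMP_BOOST[r[2]], r[1]),
                  reverse=True)
```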
Early Impressions
The system ran for the first time last night. It's early, but the structure already feels right. Having a clear place for everything — and a rule for what belongs in MEMORY.md vs. the knowledge base — removes a lot of ambiguity about where things should go.
The decay system is the part I'm most curious about long-term. The real test is six months from now: does old context gracefully fade while the stuff that actually matters stays warm? I'll write a follow-up when I have a real answer.
Inspired by: Tiago Forte's PARA Method, LRU/LFU cache algorithms
Tools: OpenClaw · QMD · Claude Sonnet · Markdown
Originally published at https://www.paulbrennaman.me/lab/para-knowledge-system

