Inspired by Andrej Karpathy's Filesystem Memory Concept

Persistent Memory
for AI Agents

memres extends Andrej Karpathy's filesystem-as-memory insight into a production-grade knowledge management system — typed entities, dual-layer graph+wiki architecture, automated ingestion pipelines, and continuous quality enforcement.

# Ask your agent anything — it remembers
$ hermes-cli ask "What is Vijay's laptop IP?"
→ Vijay's laptop (vijay-laptop) is at 192.168.0.6
  Source: Telegram DM · Mar 28, 2026
# No hallucinations. Full provenance chain.

The Idea That Started It All

In late 2024, Andrej Karpathy described a deceptively simple approach: treat the filesystem as an LLM's knowledge base. Markdown files organized by topic, git for versioning, context injection at inference time. Zero infrastructure. The simplicity was the point.

Figure 1: Karpathy's Original Filesystem Model
┌─────────────────────────────────────────────────────────────┐
│                    LLM Query                                │
│         "What do you know about Project X?"                 │
└──────────────────────────┬──────────────────────────────────┘
                           │ injects relevant *.md files
                           ▼
┌─────────────────────────────────────────────────────────────┐
│                     Context Window                           │
│   ┌──────────────────┐  ┌──────────────────┐  ┌─────────┐ │
│   │ memory/          │  │ memory/          │  │ memory/ │ │
│   │ project-x.md    │  │ team/alice.md   │  │ sys-y.md│ │
│   │                  │  │                  │  │         │ │
│   │ # Project X      │  │ # Alice          │  │ # Sys Y │ │
│   │ Built with Python│  │ Works on Python  │  │ Hosted  │ │
│   └──────────────────┘  └──────────────────┘  └─────────┘ │
└──────────────────────────┬──────────────────────────────────┘
                           │ git blame / git log
                           ▼
┌─────────────────────────────────────────────────────────────┐
│              Filesystem (git repository)                     │
│  memory/                                                    │
│    project-x.md  ← "Project X uses Python, deployed to VPS" │
│    team/alice.md ← "Alice is the lead engineer"            │
│    system-y.md  ← "System Y runs on 10.0.0.50"             │
│  .git/  ← every commit = timestamped memory entry          │
└─────────────────────────────────────────────────────────────┘
Karpathy's model — flat Markdown files + git versioning. Simple, human-readable, zero infrastructure. The foundation memres builds on.
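The model in Figure 1 can be sketched in a few lines of Python. The keyword-overlap scoring here is purely illustrative — a real setup might use embeddings or let the LLM pick files — but it shows how little machinery contextual injection needs:

```python
from pathlib import Path

def build_context(memory_dir: str, query: str, max_files: int = 3) -> str:
    """Naive contextual injection: load the Markdown files that share
    the most words with the query. Scoring is illustrative only."""
    terms = set(query.lower().split())
    scored = []
    for md in sorted(Path(memory_dir).rglob("*.md")):
        text = md.read_text(encoding="utf-8")
        hits = sum(1 for t in terms if t in text.lower())
        if hits:
            scored.append((hits, md.name, text))
    scored.sort(key=lambda s: s[0], reverse=True)
    # Concatenate the top-scoring files into one context block
    return "\n\n".join(f"## {name}\n{text}" for _, name, text in scored[:max_files])
```

The returned string is what gets prepended to the prompt; git supplies the "when did the model learn this" half via `git log` on the matching files.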

What Karpathy Got Right

  • Zero infrastructure — no database, no schema, just files and git
  • Human-readable — anyone can open a .md file and read what the LLM knows
  • Versioned by default — git history is the changelog
  • Contextual injection — only relevant files loaded into context

Where It Hits Walls in Production

  • No typed entities — "who works on Project X?" requires grep through prose
  • No consistency enforcement — two files can contradict each other silently
  • No ingestion workflow — no mechanism for Telegram messages, cron outputs, or session notes
  • No quality checks — stale facts persist until manually corrected

memres vs Karpathy's Model

memres keeps the core insight — filesystem as memory, git as version control — and adds what production knowledge management actually needs.

| Capability       | Karpathy Model                          | memres                                    |
|------------------|-----------------------------------------|-------------------------------------------|
| Storage          | Flat .md files                          | Graph (JSON) + Wiki (Markdown)            |
| Entity types     | None — everything is prose              | person, project, system, process, topic   |
| Relationships    | Implicit — grep through text            | Explicit — relationship fields in JSON    |
| Ingestion        | Manual editing only                     | Inbox → process → archive pipeline        |
| Source tracking  | Git blame (timestamp only)              | Named sources + immutable raw archive     |
| Consistency      | None — silent contradictions possible   | Graph ↔ frontmatter reconciliation        |
| Quality checks   | None                                    | Stale, orphan, broken-link detection      |
| Critical facts   | None                                    | MEMORY_MANIFEST with reconcile guard      |
| Dependencies     | None (git + filesystem)                 | Python stdlib only (no pip install)       |
| Telegram capture | None                                    | Hermes hook → inbox → wiki-ingest         |

Figure 2: memres Dual-Layer Architecture
┌──────────────────────────────────────────────────────────────────────────┐
│                         Inbox Sources                                     │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────────────────┐   │
│  │  Telegram    │   │  Cron Job    │   │  Terminal Session          │   │
│  │  "Update X"  │   │  health check│   │  "Note: deploy Y"         │   │
│  └──────┬───────┘   └──────┬───────┘   └──────────┬─────────────────┘   │
│         └────────────────────┼──────────────────────┘                    │
│                              ▼                                             │
│                   ┌────────────────────────┐                              │
│                   │        inbox/           │                              │
│                   │  telegram/  cron/  manual/                             │
│                   └───────────┬────────────┘                              │
│                               │ wiki-ingest --source telegram              │
│                               ▼                                            │
│                   ┌────────────────────────┐                              │
│                   │     wiki-ingest        │                              │
│                   │ 1. Parse frontmatter   │                              │
│                   │ 2. Resolve [[links]]    │                              │
│                   │ 3. Update graph JSON    │                              │
│                   │ 4. Sync frontmatter     │                              │
│                   │ 5. Archive to raw/      │                              │
│                   └───────────┬────────────┘                              │
│                   ┌───────────┴────────────┐                              │
│                   ▼                         ▼                              │
│        ┌─────────────────────┐     ┌─────────────────────┐                │
│        │       graph/        │◄───►│        wiki/        │                │
│        │  [JSON]             │SYNC │  [Markdown+YAML]   │                │
│        │  people/alice.json │     │  people/alice.md   │                │
│        │  Fast attribute    │     │  Human-readable    │                │
│        │  lookup for LLM    │     │  wiki with links   │                │
│        └─────────────────────┘     └─────────────────────┘                │
│                   │                          │                             │
│                   │  wiki-ingest --lint      │                             │
│                   │  wiki-reconcile          │                             │
│                   └──────────┬────────────────┘                             │
│                              ▼                                              │
│        ┌─────────────────────────────────────────────────────┐            │
│        │              MEMORY_MANIFEST                         │            │
│        │  Ground truth for critical operational facts.       │            │
│        │  reconcile: manifest ↔ wiki, report + fix          │            │
│        └─────────────────────────────────────────────────────┘            │
└──────────────────────────────────────────────────────────────────────────┘

Six Capabilities the Filesystem Model Doesn't Have

Dual-Layer Architecture

A machine-readable knowledge graph (JSON) drives LLM context retrieval. A human-readable wiki (Markdown+YAML) is the interface for human review and editing. Both stay synchronized.

Typed Entities

person, project, system, process, topic — each with required and optional attributes. Relationships are first-class fields, not prose. Query "who works on Project X?" without grep.
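As a sketch of what typed entities buy you (the JSON shape below is assumed for illustration, not the actual memres schema), "who works on Project X?" becomes a field lookup instead of a grep:

```python
# Hypothetical graph entries; the real memres schema may differ.
graph = {
    "people/alice": {
        "type": "person",
        "name": "Alice",
        "works_on": ["projects/project-x"],
    },
    "people/bob": {
        "type": "person",
        "name": "Bob",
        "works_on": ["projects/project-y"],
    },
    "projects/project-x": {"type": "project", "name": "Project X"},
}

def who_works_on(graph: dict, project_id: str) -> list[str]:
    """Follow explicit relationship fields instead of grepping prose."""
    return [
        e["name"]
        for e in graph.values()
        if e["type"] == "person" and project_id in e.get("works_on", [])
    ]

print(who_works_on(graph, "projects/project-x"))  # → ['Alice']
```

Because `works_on` is a first-class field, the query is exact: no false positives from a sentence that merely mentions Project X.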

Ingestion Pipeline

Telegram messages, cron outputs, terminal sessions — all flow through inbox → process → archive. Every fact is attributed to its source. Raw sources are never deleted, only archived.
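The flow can be sketched as below. The directory names match the ones used in this document (inbox/, raw/); the trivial "processing" step stands in for the real entity extraction:

```python
import shutil
from pathlib import Path

def process_inbox(root: str, source: str) -> list[str]:
    """Sketch of inbox → process → archive. Each note would be parsed
    for entities here; the raw file is then moved to raw/ untouched.
    Sources are archived, never deleted."""
    inbox = Path(root) / "inbox" / source
    archive = Path(root) / "raw" / source
    archive.mkdir(parents=True, exist_ok=True)
    processed = []
    for note in sorted(inbox.glob("*.md")):
        note.read_text(encoding="utf-8")  # real pipeline: extract facts here
        shutil.move(str(note), str(archive / note.name))  # archive, don't delete
        processed.append(note.name)
    return processed
```

Moving rather than deleting is the key invariant: every fact stays auditable back to the exact message that introduced it.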

Quality Enforcement

MEMORY_MANIFEST is a hardcoded list of critical operational facts. The reconcile job runs on every lint cycle — if the manifest and wiki diverge, it reports and fixes the discrepancy automatically.
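A minimal sketch of the reconcile guard, using the (entity, attribute, value) triples shown in this document — the in-memory `wiki` dict stands in for parsed frontmatter:

```python
# Manifest format assumed from this document: (entity, attribute, value) triples.
MEMORY_MANIFEST = [
    ("server-a", "ip", "10.0.0.50"),
    ("vijay", "timezone", "Asia/Kolkata"),
]

def reconcile(wiki: dict) -> list[tuple]:
    """Report manifest entries the wiki contradicts or lacks, then fix
    the wiki in place so both agree. The manifest always wins."""
    drift = []
    for entity, attr, expected in MEMORY_MANIFEST:
        actual = wiki.get(entity, {}).get(attr)
        if actual != expected:
            drift.append((entity, attr, actual, expected))
            wiki.setdefault(entity, {})[attr] = expected  # fix: manifest wins
    return drift
```

Running this on every lint cycle means a critical fact can be wrong for at most one cycle before it is reported and repaired.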

Source Provenance

Every fact in the wiki has a sources entry. The raw source is preserved in raw/ and never modified. To audit any fact: find it in the wiki → check its sources field → read the original raw file.
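That audit chain can be followed mechanically. The frontmatter layout below (a `sources:` YAML list of raw/ paths) is an assumption for illustration, not the documented memres format:

```python
import re
from pathlib import Path

def audit_fact(root: str, wiki_page: str) -> list[str]:
    """Follow a wiki page's frontmatter sources back to the immutable
    raw files they came from. The 'sources:' list format is assumed."""
    text = (Path(root) / wiki_page).read_text(encoding="utf-8")
    # Matches frontmatter lines like:   - raw/telegram/2026-03-28.md
    raw_paths = re.findall(r"^\s*-\s*(raw/\S+)", text, flags=re.M)
    return [(Path(root) / p).read_text(encoding="utf-8") for p in raw_paths]
```

Because raw/ is never modified, what this returns is exactly what the agent originally saw.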

Bidirectional Sync

Graph drives the wiki. Wiki edits update the graph. If a pipeline crashes mid-write, reconcile detects the inconsistency and fixes it. Graph and wiki never drift.
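Drift detection between the two layers reduces to a field-by-field comparison. A sketch (the real reconcile would also repair the losing side, as described above):

```python
def detect_drift(graph_entity: dict, frontmatter: dict) -> dict:
    """Return attributes where the graph JSON and the wiki frontmatter
    for one entity disagree, as {attr: (graph_value, wiki_value)}."""
    keys = set(graph_entity) | set(frontmatter)
    return {
        k: (graph_entity.get(k), frontmatter.get(k))
        for k in keys
        if graph_entity.get(k) != frontmatter.get(k)
    }
```

An empty result for every entity is the invariant a crashed mid-write pipeline run would violate, and what reconcile restores.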

How It Works

01

Capture

A fact arrives — via Telegram DM, cron job output, or terminal note. It lands in the inbox, tagged by source.

02

Process

wiki-ingest extracts entity links ([[alice]]) and key-value pairs. It resolves relationships, attributes the source, and prepares the graph update.
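The extraction step can be sketched with two regexes. These are illustrative, not the actual wiki-ingest grammar:

```python
import re

LINK_RE = re.compile(r"\[\[([^\]]+)\]\]")          # [[entity]] links
KV_RE = re.compile(r"^(\w[\w-]*):\s*(.+)$", re.M)  # top-level key: value lines

def extract(note: str) -> tuple[list[str], dict[str, str]]:
    """Pull [[entity]] links and key-value pairs out of an inbox note."""
    links = LINK_RE.findall(note)
    pairs = dict(KV_RE.findall(note))
    return links, pairs
```

For example, a Telegram note like `ip: 192.168.0.6` followed by `[[vijay]] reported the new address for [[vijay-laptop]]` yields two entity links and one attribute update.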

03

Store

The graph JSON and wiki Markdown are updated in sync. The raw source is archived (not deleted). Both layers agree — always.

04

Enforce

MEMORY_MANIFEST holds ground truth. reconcile compares it against the wiki. Stale pages, orphan pages, broken links — all caught and fixed automatically.
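Broken-link and orphan detection are simple set operations over the wiki. A sketch (stale-page detection, not shown, would additionally compare last-modified timestamps against a threshold):

```python
import re
from pathlib import Path

def lint_wiki(wiki_root: str) -> dict[str, list[str]]:
    """Detect broken [[links]] (target page missing) and orphan pages
    (no other page links to them)."""
    pages = {p.stem: p.read_text(encoding="utf-8")
             for p in Path(wiki_root).rglob("*.md")}
    linked, broken = set(), []
    for name, text in pages.items():
        for target in re.findall(r"\[\[([^\]]+)\]\]", text):
            linked.add(target)
            if target not in pages:
                broken.append(f"{name} -> {target}")
    orphans = [n for n in pages if n not in linked]
    return {"broken": sorted(broken), "orphans": sorted(orphans)}
```

Both lists feed the meta pages mentioned under System Design, so problems surface in the wiki itself rather than in a log nobody reads.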

System Design

Two parallel layers, one sync protocol, one ground-truth manifest.

Inbox Sources
  Telegram · Cron · Manual · Webhook
        ↓
inbox/
  telegram/ · cron/ · manual/ · pending/
        ↓ wiki-ingest
Pipeline
  parse · link · update graph · sync wiki · archive
        ↓
graph/ [JSON]                      ◄── SYNC ──►  wiki/ [Markdown+YAML]
  people/alice.json                                people/alice.md
  projects/proj-x.json                             projects/proj-x.md
  systems/server-a.json                            systems/server-a.md
  _types.json · _index.json                        meta/stale-pages · orphan-pages
  Fast · programmatic · LLM query                  Human-readable · editable · auditable
        ↓ lint + reconcile
MEMORY_MANIFEST [Ground Truth]
  ("vijay", "timezone", "Asia/Kolkata") · ("server-a", "ip", "10.0.0.50") · ...

How We Can Help

Implementation Consulting

Get memres running in your environment. We handle architecture, onboarding workflow, and team handoff.

  • Full system architecture setup
  • Entity schema design
  • Onboarding workflow definition
  • Team training and runbook

Custom Development

Build custom ingestion sources, pipeline stages, or entity types specific to your stack.

  • Custom inbox source adapters
  • Pipeline stage development
  • Custom entity type schemas
  • LLM context retrieval tuning

Enterprise Hosting

Fully managed memres deployment on your infrastructure or ours. SOC2-ready, with SLA guarantees.

  • Managed VPS or private cloud
  • Automated backup and recovery
  • Monitoring and alerting
  • 99.9% uptime SLA

Team Training

Two-day intensive workshop for your team to understand, operate, and extend memres confidently.

  • Architecture deep-dive
  • Hands-on entity management
  • Pipeline debugging and extension
  • Knowledge management best practices

Open Source. MIT Licensed.

memres core is open source under the MIT license. Built on Karpathy's insight, extended for production use. No attribution required — but appreciated.

Get in Touch

Email

For project inquiries, implementation discussions, and everything else.

info@processbricks.com

Telegram

Quick questions and real-time discussion. DM or group — your choice.

@processbricks

Website

ProcessBricks — AI infrastructure for teams that ship.

processbricks.com