How Engram Works
Your personal AI infrastructure is five portable layers of plain files. When the model underneath changes, your Engram stays the same.
Think of it like a harness
A harness doesn't generate power. It directs it, safely and reliably. You can swap what's on the other end without rebuilding your setup. Engram does the same for AI — stable infrastructure that makes everything else work.
Models change
GPT-5, Opus 4.6, Gemini 3 — new releases every month
Your Engram stays
Skills, memory, identity — portable markdown and YAML files
Nothing breaks
Model updates are firmware upgrades, not system replacements
Five layers. All portable.
Every layer is plain files — markdown, YAML, TypeScript. You can read them, edit them, version them, and move them to any AI system.
Layer 1
Context
What the AI knows about you and your work
Persistent configuration that loads automatically every session. Like CSS specificity — global settings are overridden by project settings, which are overridden by task settings. Your AI always starts with full context.
Global Config
CLAUDE.md — skills, stack prefs, security rules
Settings
settings.json — env vars, permissions, hooks
Project Config
.claude/CLAUDE.md — per-project conventions
Personal Context
context.md — who you are, your goals, your projects
Example
# context.md
Name: Alex Chen
Role: Freelance designer
Stack: Figma, React, Tailwind
Current project: Redesigning client portal
Goal: Ship MVP by March
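The cascade described above can be sketched as a plain object merge where later layers win. This is a minimal illustration; the keys are invented for the sketch, not Engram's real schema:

```typescript
// Illustrative config cascade: task overrides project, project overrides global.
// The keys below are invented for this sketch, not Engram's actual schema.
const globalConfig = { style: "concise", stack: "React" };
const projectConfig = { style: "detailed" };        // per-project override
const taskConfig = { stack: "React + Tailwind" };   // per-task override

// Spread order sets precedence, the same way CSS specificity does.
const effectiveConfig = { ...globalConfig, ...projectConfig, ...taskConfig };
// effectiveConfig.style === "detailed", effectiveConfig.stack === "React + Tailwind"
```

The same precedence applies to the files above: CLAUDE.md sets the global baseline, .claude/CLAUDE.md narrows it per project, and the current task narrows it further.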
Layer 2
Hooks
How you observe and modify the AI's behavior
Event-driven automation that makes AI behavior observable, auditable, and modifiable. Every action the AI takes fires a lifecycle event that your hooks can intercept — like middleware for your AI.
SessionStart
Load context, show greeting when conversation begins
PreToolUse
Security validation before any tool executes
PostToolUse
Capture events, extract learnings after actions
Stop
Summarize session, save memory when conversation ends
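The lifecycle above can be pictured as a small event dispatcher that runs every handler registered for an event. This is a sketch with an invented registry API, not Engram's actual hook mechanism:

```typescript
// Illustrative hook dispatcher: runs every handler registered for an event.
// The on/fire registry API is invented for this sketch, not Engram's real one.
type HookHandler = (payload: Record<string, unknown>) => void;

const hooks: Record<string, HookHandler[]> = {};

function on(event: string, handler: HookHandler): void {
  (hooks[event] ??= []).push(handler);
}

function fire(event: string, payload: Record<string, unknown>): void {
  for (const handler of hooks[event] ?? []) handler(payload);
}

// Register a SessionStart hook, then fire the event.
on("SessionStart", (p) => console.log(`Loading context for ${p.user}`));
fire("SessionStart", { user: "Alex" });
```

A PreToolUse handler registered this way is where a security gate like the SecurityValidator example would plug in.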
Example
// SecurityValidator hook
// Fires before every tool use
if (tool === "Bash" && command.includes("rm -rf")) {
return { decision: "block", reason: "Destructive command" };
}
Layer 3
Skills
What the AI can do
Portable, self-contained units of domain expertise. Each skill is a markdown spec with workflows and optional tools. Skills self-activate based on natural language triggers — no memorizing commands.
SKILL.md
Frontmatter + routing table + usage examples
Workflows/
Step-by-step execution procedures in markdown
Tools/
Optional CLI utilities in TypeScript
Triggers
Natural language — "research X" activates Research skill
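Trigger-based activation can be pictured as a routing function that maps a message to the first matching skill. A minimal sketch; the skill names and regex matching are assumptions, not Engram's actual router:

```typescript
// Illustrative skill router: first matching trigger wins.
// Skill names and patterns are invented for this sketch.
const skillTriggers: { skill: string; pattern: RegExp }[] = [
  { skill: "Research", pattern: /\bresearch\b/i },
  { skill: "Summarize", pattern: /\bsummari[sz]e\b/i },
];

function routeSkill(message: string): string | null {
  const match = skillTriggers.find((t) => t.pattern.test(message));
  return match ? match.skill : null;
}

// routeSkill("Can you research local-first sync?") returns "Research"
// routeSkill("hello") returns null
```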
Example
Research/
├── SKILL.md # "USE WHEN user asks to research a topic"
├── Workflows/
│ ├── DeepDive.md # Multi-source research workflow
│ └── QuickScan.md # Fast surface-level scan
└── Tools/
    └── source-validator.ts
Layer 4
Memory
What the AI remembers across sessions
Cross-session persistence that makes your AI smarter over time. Project memories, learnings from past mistakes, and session journals — all stored in plain files you can read, edit, or move.
Project Memory
Per-project learnings and patterns discovered
Session Journals
What happened each session, decisions made
Learnings
Mistakes caught, patterns confirmed, insights extracted
Auto-Update
Memory files update themselves as you work
Example
# MEMORY.md

## Key Learnings
- pdfjs-dist v5 breaks SSR — use dynamic imports
- This project uses Tailwind v4 layers — avoid global CSS resets that conflict with utilities
- API routes with POST are auto-detected as dynamic
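The auto-update behavior can be sketched as appending each new learning to the memory file as a markdown bullet. The helper below is invented for illustration and writes to a temp directory:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Illustrative memory updater: appends a learning as a markdown bullet.
// recordLearning is an invented helper, not part of Engram's spec.
function recordLearning(memoryFile: string, learning: string): void {
  fs.appendFileSync(memoryFile, `- ${learning}\n`);
}

const memoryFile = path.join(os.tmpdir(), "MEMORY.md");
fs.writeFileSync(memoryFile, "# MEMORY.md\n\n## Key Learnings\n");
recordLearning(memoryFile, "API routes with POST are auto-detected as dynamic");
```

Because the file is plain markdown, the same learnings stay readable and editable by hand.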
Layer 5
Identity
Who the AI is, how it behaves
Consistent AI behavior defined in plain files. Your AI's personality, values, communication style, and boundaries — all declarative, all portable. Change your AI's personality by editing a YAML file.
Constitution
Core values and operating principles
Personality
Humor, directness, curiosity — tunable knobs
Voice
How your AI speaks — professional, casual, technical
Boundaries
What your AI will and won't do
Example
# personality calibration
personality:
  humor: 60        # dry -> witty
  directness: 80   # diplomatic -> blunt
  curiosity: 90    # focused -> exploratory
  precision: 95    # approximate -> exact
Any AI Model
Claude, GPT, Gemini, local models — swap freely
What Engram is not
A new AI model
An infrastructure layer that any model plugs into
A prompt library
A skill system with workflows, tools, and routing
A chatbot wrapper
A full lifecycle with hooks, memory, and identity
An agent framework for developers
Infrastructure for everyone — markdown and YAML, not code
How it compares
Engram sits in the gap between raw config files and heavyweight developer frameworks.
| Feature | Engram | Raw Config | Dev Frameworks | Platform Features |
|---|---|---|---|---|
| Model-agnostic | ✓ | — | — | — |
| Non-technical users | ✓ | — | — | ✓ |
| Portable skills | ✓ | — | — | — |
| Lifecycle hooks | ✓ | — | ✓ | — |
| Persistent memory | ✓ | — | — | — |
| Identity system | ✓ | — | — | — |
| Open source | ✓ | ✓ | ✓ | — |
| No vendor lock-in | ✓ | ✓ | ✓ | — |
Ready to build your Engram?
Join the waitlist for early access, or start building with the open-source spec today.