inomem v1.0

Your AI forgets.
Inomem remembers.

Memory that persists across sessions and learns automatically. One command to install. Zero config. Runs silently.

$ bash bin/install.sh --openrouter-key sk-... --logging
✓ One-line install ✓ No daemon, no Docker ✓ ~$0.15/month to run
Anthropic Claude Code
OpenAI GPT · Soon
Google Gemini · Soon

Not a log file. Not a dump. Not a prompt hack.

An AI librarian that curates your knowledge.

Contradiction detection

Migrated from SQLite to MySQL? The librarian catches the contradiction and updates the old entry. No stale facts polluting your context.

Staleness pruning

Learnings that no longer apply get removed automatically. Your memory file stays lean — 80 lines max, every one of them current and useful.

Deduplication

Said "use bun not npm" three times? The librarian keeps one entry, not three. Your corrections compound — they don't pile up.

Your corrections are the highest-value signal

When you say "no, use Zepto Mail not SendGrid" — that correction is auto-detected, tagged, and prioritized by the librarian. Next session, Claude already knows. You never repeat yourself again.

Other tools dump raw logs into a file and call it "memory".
Inomem is the only tool with an AI librarian that reads, thinks, and curates — so your CLAUDE.md is always clean, current, and useful.

How it compares

Memory tools for Claude Code, compared honestly.


| Feature | inomem ($29.90 one-time) | CC Memory v3 (free / complex) | Spark Intelligence (134k lines of Python) | Manual CLAUDE.md (free / tedious) |
| --- | --- | --- | --- | --- |
| Install complexity | 1 command | Many steps | Very complex | Manual |
| Automatic capture | | | | |
| AI librarian (curate, not dump) | ✓ Full | Partial | Partial | |
| Contradiction detection | | | | |
| Correction prioritization | | | | |
| MCP real-time recall | | | | |
| Secret scrubbing | | | | |
| No daemon / server | | | | |
| Context footprint | ~80 lines | Large | 1,232+ skills | Manual |
| Running cost | ~$0.15/mo | ~$0.50/mo | High | $0 |
| Push notifications | | | | |
| Full source code | | | | |

The problem with AI coding agents

You're paying for the most capable AI on earth. But every session, it starts from zero.

Total amnesia, every time

Claude Code starts fresh each session. Your project's architecture, your preferences, the gotchas you discovered together — all gone.

You keep paying to re-teach

"We use Zepto Mail." "Use @context in Blade." "Prefer bun over npm." The same instructions, the same tokens, every single session.

The same mistakes, on repeat

It hit the same Cloudflare caching bug last week. And the week before. It can't learn from its mistakes because it can't remember making them.

The full pipeline

Five stages. Fully automatic. You just code — Inomem handles the rest.

1. Capture — hooks + correction tagging

Hooks fire on tool errors, prompts, and session end. Successful edits and reads are filtered out — only errors, corrections, and conclusions get logged. Corrections are auto-detected via regex and tagged as high-value learnings.

{"type":"tool_use","tool":"Bash","status":"error","error_snippet":"Permission denied"}
{"type":"correction","prompt":"no, use @context not {{'@'}}context in blade"}
{"type":"conclusion","hook":"Stop","summary":"implemented waitlist flow"}
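The correction tagging mentioned above can be approximated with a pattern like this (a sketch; the actual regex in capture.sh may differ):

```shell
#!/usr/bin/env bash
# Sketch of correction auto-detection. The pattern below is illustrative;
# the shipped capture.sh regex may cover more pushback phrases.
is_correction() {
  printf '%s' "$1" | grep -qiE '^(no[,. ]|don'\''t |stop |actually[, ]|use .+ (not|instead of) )'
}

if is_correction "no, use Zepto Mail not SendGrid"; then
  echo "tagged as correction"   # high-value learning, prioritized by the librarian
fi
```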
2. Scrub — secrets stripped before logging

API keys, tokens, and credentials are redacted before anything hits disk. Your learnings file stays clean — safe to commit, safe to share.

before: sk-or-v1-abc123def456ghi789jkl012mno345
after:  [REDACTED_KEY]

before: ghp_a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
after:  [REDACTED_KEY]
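A redaction pass like the one above boils down to a few sed substitutions (illustrative patterns only; the shipped scrubber covers more formats, including JWTs, passwords, and .env values):

```shell
#!/usr/bin/env bash
# Sketch of secret scrubbing. Two example patterns: OpenRouter keys and
# GitHub personal access tokens. Real coverage is broader.
scrub() {
  sed -E \
    -e 's/sk-or-v1-[A-Za-z0-9]+/[REDACTED_KEY]/g' \
    -e 's/ghp_[A-Za-z0-9]{20,}/[REDACTED_KEY]/g'
}

echo 'OPENROUTER_KEY=sk-or-v1-abc123def456ghi789jkl012mno345' | scrub
# prints: OPENROUTER_KEY=[REDACTED_KEY]
```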
3. Distill — the AI librarian

This is the core. A cheap LLM (~$0.001/call) doesn't just log — it reads, thinks, and curates. It extracts the why and how from raw events, detects contradictions with existing knowledge, deduplicates, and prunes what's stale. Your corrections get the highest priority. Each project gets its own isolated memory.

## Gotchas
- Use @context in Blade JSON-LD [HIGH]
- Waitlist tokens expire after 72h
- Uses SQLite ← removed, migrated to MySQL
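The distill call itself is one chat request: current memory plus new events go to the cheap model, which returns the updated file. A sketch of building that request for OpenRouter (model ID, prompt wording, and variable names are illustrative, not the shipped ones; requires jq):

```shell
#!/usr/bin/env bash
# Sketch of the distill request payload. The real distill.sh prompt and
# model choice may differ; this shows the shape of the call.
MEMORY='## Gotchas
- Uses SQLite'
EVENTS='{"type":"correction","prompt":"no, we migrated to MySQL"}'

payload=$(jq -n --arg memory "$MEMORY" --arg events "$EVENTS" '{
  model: "google/gemini-flash-1.5",
  messages: [
    {role: "system", content: "You are a memory librarian. Merge the events into the memory file: deduplicate, resolve contradictions, prune stale entries, keep user corrections at top priority. Return the updated file, 80 lines max."},
    {role: "user", content: ("MEMORY:\n\($memory)\n\nEVENTS:\n\($events)")}
  ]
}')

# The actual call (needs a real key):
# curl -s https://openrouter.ai/api/v1/chat/completions \
#   -H "Authorization: Bearer $OPENROUTER_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
echo "$payload" | jq -r '.model'   # prints: google/gemini-flash-1.5
```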
4. Notify — Pushover to your phone

After distillation, you get a push notification with exactly what changed. New learnings added, stale ones removed, contradictions resolved — all in one glance.

📱 Pushover notification:
inomem • kraite.com
+2 new, ~1 updated, -1 removed
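The notification step amounts to formatting the change summary and posting it to Pushover's messages endpoint (the endpoint is real; the summary helper and config variable names here are illustrative, sourced from config.env in the real tool):

```shell
#!/usr/bin/env bash
# Sketch of the Pushover notification. summary() mirrors the format shown
# above; token/user would come from config.env.
summary() {
  printf '+%d new, ~%d updated, -%d removed' "$1" "$2" "$3"
}

msg=$(summary 2 1 1)
echo "$msg"   # prints: +2 new, ~1 updated, -1 removed

# curl -s --form-string "token=$PUSHOVER_TOKEN" \
#   --form-string "user=$PUSHOVER_USER" \
#   --form-string "title=inomem • kraite.com" \
#   --form-string "message=$msg" \
#   https://api.pushover.net/1/messages.json
```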
5. Recall — auto-loaded + on-demand

CLAUDE.md is auto-loaded at session start — Claude already knows your project. Mid-session, the MCP recall_memory tool searches deeper memory in real-time. No restart needed.

Claude: recall_memory("email config")
Inomem: "Kraite uses Zepto Mail. Key in .env as ZEPTOMAIL_MAIL_KEY"
Claude: *proceeds correctly without asking you*
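Under the hood, mid-session recall amounts to a keyword search over the project's memory file, which the MCP server exposes as the recall_memory tool (file name and contents below are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of recall: a case-insensitive search over curated memory.
mem="$(mktemp)"
cat > "$mem" <<'EOF'
## Email
- Kraite uses Zepto Mail. Key in .env as ZEPTOMAIL_MAIL_KEY
## Gotchas
- Use @context in Blade JSON-LD
EOF

recall_memory() {
  grep -i -- "$1" "$mem"
}

recall_memory "zepto"   # prints the Zepto Mail line
```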

The learning loop

Every session makes Claude smarter. Your corrections compound. The librarian curates. You just code.

You work → Capture → Scrub → Distill → Notify → Recall → you work again

What's included

capture.sh — Silent event capture

Three hooks: PostToolUse, UserPromptSubmit, Stop. Only errors and corrections logged — noise filtered at source.
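For reference, hooks like these are registered in Claude Code's settings file; a sketch of what the installer might wire up (script paths are illustrative):

```json
{
  "hooks": {
    "PostToolUse": [
      { "hooks": [{ "type": "command", "command": "~/.inomem/capture.sh post-tool-use" }] }
    ],
    "UserPromptSubmit": [
      { "hooks": [{ "type": "command", "command": "~/.inomem/capture.sh prompt" }] }
    ],
    "Stop": [
      { "hooks": [{ "type": "command", "command": "~/.inomem/capture.sh stop" }] }
    ]
  }
}
```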

distill.sh — AI librarian

Curates knowledge, not logs. Detects contradictions, deduplicates entries, prunes what's stale. Corrections get top priority.

mcp-server.js — Real-time recall

MCP tool for mid-session memory search. Claude asks, Inomem answers. No restart needed.

/inomem — 9 slash commands

Status, distill, forget, show, logs, doctor, credential management. Full control from inside Claude Code.

config.env — Pushover notifications

Get notified on your phone when the librarian updates memory. Configurable, easy to disable.

Secret scrubbing — Privacy first

API keys, JWTs, passwords, .env values are scrubbed before any event is logged or sent to the LLM.

~200 lines of code total
$0.001 per distill cycle
1 command to install
0 sessions to re-teach

One price. Full source. Forever yours.

No subscription. No SaaS. No vendor lock-in. You own the code.

INOMEM
$29.90
One-time payment
  • Full source code (hooks, distill, MCP server)
  • One-line install — works in minutes
  • Works on any project, any stack
  • No daemon, no Docker, no subscription
  • MCP real-time recall server
  • Pushover notifications (optional)
  • Runs on ~$0.15/month (OpenRouter)

Instant download. No account required.

FAQ

How is Inomem different from other "memory" tools?

Most tools dump raw logs into a file and call it "memory". Inomem has an AI librarian that actually reads, curates, and maintains your knowledge — detecting contradictions, deduplicating entries, pruning what's stale, and prioritizing your corrections. All in ~200 lines of bash + JS. No 134,000-line Python frameworks, no soul kernels, no emotion states. Just curated knowledge that gets smarter every session.

Does it only work with Claude Code?

Currently designed for Claude Code's hook system. The MCP server works with any agent that supports MCP. The capture/distill pipeline can be adapted to any agent with event hooks.

Which model does the librarian use, and what does it cost?

By default, Gemini Flash via OpenRouter. Costs ~$0.001 per distillation run. You can configure any model in config.env.

What data leaves my machine?

Only tool errors, your corrections, and session summaries are sent to the librarian LLM. Successful tool calls are never logged. Full file contents are never sent. The MCP server is 100% local. Passwords and API keys are automatically scrubbed before any event is logged.

What if the librarian removes something it shouldn't?

Three safeguards: (1) Backups before every update. (2) /inomem forget removes specific entries instantly. (3) Pushover notifications let you review changes in real-time.