v0.1 · Source Available

AI memory that stays on your machine.

VeritasMemoria is a structured, auditable memory layer for AI-assisted workflows. Every record is cryptographically signed and hash-chained. No cloud. No accounts. No telemetry.

Download for Windows
View Source
All data stored locally
HMAC-signed records
Works fully offline
Any LLM or none
Core Properties

Built different by design

Most memory systems trade transparency for convenience. VM gives up neither.

Local-First

All data stays on the machine running it. Nothing is sent to any server — not even to check for updates. Your memories are yours.

Tamper-Evident

Records are HMAC-signed. The audit log is hash-chained and append-only. You can verify that nothing has been altered — ever.
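The idea can be sketched with the standard library alone. This is a minimal illustration of HMAC signing plus hash chaining, not VM's actual schema or API; the function names and the entry layout are assumptions.

```python
import hashlib
import hmac

# Illustrative only: VM's real record format is not documented here.
SECRET = b"local-secret-key"  # hypothetical per-install secret

def append_entry(chain, payload: str):
    """Sign the payload and chain it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    entry_hash = hashlib.sha256((prev_hash + payload + sig).encode()).hexdigest()
    chain.append({"payload": payload, "sig": sig, "prev": prev_hash, "hash": entry_hash})

def verify_chain(chain) -> bool:
    """Recompute every signature and link; any edit anywhere fails."""
    prev = "0" * 64
    for e in chain:
        sig = hmac.new(SECRET, e["payload"].encode(), hashlib.sha256).hexdigest()
        h = hashlib.sha256((e["prev"] + e["payload"] + sig).encode()).hexdigest()
        if e["prev"] != prev or not hmac.compare_digest(e["sig"], sig) or e["hash"] != h:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "memory committed: project notes")
append_entry(log, "memory retired: stale draft")
assert verify_chain(log)
log[0]["payload"] = "tampered"  # altering any record breaks verification
assert not verify_chain(log)
```

Because each entry's hash folds in the previous entry's hash, deleting or reordering records is as detectable as editing them.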

Human-in-the-Loop

The system cannot commit or retire memories without passing configurable oversight gates. You stay in control of what gets kept.
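A gate pipeline like this can be pictured as a chain of checks that every candidate memory must pass before commit. The gate names, thresholds, and `commit_memory` function below are hypothetical, chosen only to illustrate the shape of configurable oversight.

```python
# Hypothetical sketch -- not VM's actual gate configuration or API.
def commit_memory(text, gates):
    """Run each (name, check) gate in order; block on the first failure."""
    for name, check in gates:
        if not check(text):
            return f"blocked by {name}"
    return "committed"

gates = [
    ("min_length", lambda t: len(t) >= 10),
    ("human_approval", lambda t: True),  # in a real system this would prompt the operator
]

print(commit_memory("short", gates))                      # blocked by min_length
print(commit_memory("meeting notes from Tuesday", gates)) # committed
```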

LLM-Agnostic

Works with OpenAI, Anthropic, Ollama, or any OpenAI-compatible endpoint. Runs fully offline in copy-paste mode with no LLM required.

Semantic Retrieval

Hybrid search combining dense vector embeddings and keyword indexing. Finds the right memory even when you don't remember exactly how you phrased it.
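Hybrid scoring can be sketched as a weighted blend of a dense similarity score and a keyword-overlap score. The toy bag-of-words "embedding" and the `alpha` weight below are stand-ins; VM's actual embedding model and ranking are not specified here.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a dense embedding: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_search(query, docs, alpha=0.5):
    # Blend dense similarity with exact keyword overlap.
    qv = embed(query)
    scored = [(alpha * cosine(qv, embed(d)) + (1 - alpha) * keyword_score(query, d), d)
              for d in docs]
    return [d for _, d in sorted(scored, reverse=True)]

docs = ["notes on the postgres migration",
        "grocery list for saturday",
        "postgres backup schedule"]
print(hybrid_search("postgres migration notes", docs)[0])
# -> notes on the postgres migration
```

The keyword term rewards exact matches; the dense term still surfaces documents when the phrasing differs.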

Transparent Storage

SQLite by default — a single file you can inspect, back up, or move. No opaque binary formats. Your data is always accessible without VM.
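Any SQLite client can read the store directly. The file name and table schema below are illustrative assumptions (VM's actual path and schema are not documented on this page); the point is that Python's stdlib `sqlite3` module, or the `sqlite3` CLI, works with no VM code involved.

```python
import sqlite3

con = sqlite3.connect("veritas.db")  # hypothetical file name
con.execute("CREATE TABLE IF NOT EXISTS memories (id INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO memories (body) VALUES ('example record')")
con.commit()

# List tables and dump rows -- no VM required.
tables = [r[0] for r in
          con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['memories']
print(list(con.execute("SELECT body FROM memories")))
con.close()
```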

How It Works

Simple architecture, serious guarantees

Step 1

You write a memory

Paste text, upload a document, or let an agent push context. VM assigns it to working memory.

Step 2

VM gates and signs it

Coherence and oversight checks run. On approval the record is HMAC-signed and appended to the hash chain.

Step 3

Retrieve it later

Semantic and keyword search surface the right context instantly. Pass it to any LLM or read it yourself.

Get Started

Running in under a minute

Pre-built binary for Windows — no Python, no dependencies, no installer.

Windows Binary
Recommended · No Python required
  1. Download the .zip from the Releases page
  2. Extract to a folder you can write to
  3. Double-click VeritasMemoria.exe
  4. Open http://localhost:8000 in your browser
From Source
macOS · Linux · Windows · Python 3.10+
# clone and enter
$ git clone https://github.com/your-org/veritas-memoria
$ cd veritas-memoria
 
# install dependencies
$ pip install -r requirements.txt
 
# run
$ python run.py
INFO: Uvicorn running on http://127.0.0.1:8000

Minimum: Windows 10 64-bit, 2 GB RAM, 500 MB disk. macOS 12+ and Ubuntu 20.04+ supported via source install. GPU not required.

LLM Support

Works with what you already use

Switch providers in a single line of config. Or skip the LLM entirely and run in copy-paste mode.

OpenAI · Anthropic / Claude · Ollama (local) · Any OpenAI-compatible API · Fully offline (copy-paste mode)
.env
# OpenAI
VM_LLM_PROVIDER=openai
VM_LLM_MODEL=gpt-4o
 
# Anthropic
VM_LLM_PROVIDER=anthropic
VM_LLM_MODEL=claude-sonnet-4-6
 
# Ollama (local, fully offline)
VM_LLM_PROVIDER=openai
VM_LLM_BASE_URL=http://localhost:11434/v1
 
# No LLM at all
VM_LLM_PROVIDER=copy_paste
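How a switch like this could resolve to an endpoint can be sketched as below. The variable names match the .env snippet; the resolution logic, defaults, and `resolve_endpoint` function are assumptions for illustration, not VM's implementation.

```python
def resolve_endpoint(env):
    """Map VM_LLM_* settings to a base URL (illustrative logic)."""
    provider = env.get("VM_LLM_PROVIDER", "copy_paste")
    if provider == "copy_paste":
        return None  # fully offline: the user ferries prompts and responses by hand
    if provider == "anthropic":
        return env.get("VM_LLM_BASE_URL", "https://api.anthropic.com")
    # "openai" covers OpenAI itself and any OpenAI-compatible server (e.g. Ollama)
    return env.get("VM_LLM_BASE_URL", "https://api.openai.com/v1")

print(resolve_endpoint({"VM_LLM_PROVIDER": "openai",
                        "VM_LLM_BASE_URL": "http://localhost:11434/v1"}))
# -> http://localhost:11434/v1
print(resolve_endpoint({"VM_LLM_PROVIDER": "copy_paste"}))
# -> None
```

This is why the Ollama example above sets `VM_LLM_PROVIDER=openai`: Ollama exposes an OpenAI-compatible API, so only the base URL changes.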

Your data. Your machine. Your call.

VeritasMemoria ships blank and learns only what you explicitly give it. No defaults phoning home. No model training on your memories.

Get VeritasMemoria