Engram
Core Concepts

The Privacy Model

Engram inverts the traditional AI paradigm. Instead of sending your sensitive data to a model in the cloud, the model runs directly on your machine. Inference, memory storage, and document processing stay within your Docker environment.
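Because inference runs against a local Ollama instance rather than a remote API, a chat turn is just an HTTP call to localhost. A minimal sketch in Python, assuming Ollama's default port 11434 and its /api/generate endpoint (the build_request helper is ours, for illustration):

```python
import json
import urllib.request

# Ollama's default local endpoint -- traffic never leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3.1:latest") -> urllib.request.Request:
    """Construct a non-streaming generate request against the local model."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Usage (requires a running Ollama instance):
# with urllib.request.urlopen(build_request("Summarize my notes.")) as resp:
#     print(json.loads(resp.read())["response"])
```

The same request shape sent to a cloud provider would carry your prompt across the network; here the loopback interface is the whole round trip.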

Air-Gap Capable

Once the model weights (llama3.1:latest, nomic-embed-text) are pulled via Ollama, the core chat and RAG pipeline requires no internet connection. Third-party integrations (Google Calendar, Gmail, Linear, Jira) are opt-in and require their respective OAuth or API credentials.
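After the one-time pull, you can confirm the weights are cached locally (and therefore usable offline) with `ollama list`. A small sketch, assuming the tabular output that command prints (model NAME in the first column, possibly with an explicit `:latest` tag); both helper functions are ours:

```python
import subprocess

# Base names of the models the core pipeline depends on.
REQUIRED = {"llama3.1", "nomic-embed-text"}

def parse_model_names(listing: str) -> set:
    """Extract the NAME column from `ollama list` output, skipping the header row."""
    lines = listing.strip().splitlines()
    return {line.split()[0] for line in lines[1:] if line.split()}

def missing_models() -> set:
    """Return required models not yet pulled (tag suffixes ignored)."""
    out = subprocess.run(["ollama", "list"], capture_output=True, text=True).stdout
    cached = {name.split(":")[0] for name in parse_model_names(out)}
    return REQUIRED - cached

# If missing_models() is empty, the chat and RAG pipeline can run air-gapped.
```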

Local Vector Storage

Memories and document embeddings are stored in a local Qdrant instance running in Docker. Personal memories use the second_brain collection (AES-128 encrypted). Crawled documentation uses the doc_knowledge collection. Both live on your local disk in the Qdrant Docker volume, never in a managed cloud database.
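Under the hood, a memory lookup is nearest-neighbor search over embedding vectors, which Qdrant performs against the on-disk collections. A toy illustration of the principle in plain Python (not Qdrant's actual API; the 3-dimensional vectors stand in for the 768-dimensional nomic-embed-text embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, store):
    """Return the stored memory whose embedding is most similar to the query."""
    return max(store, key=lambda item: cosine(query, item["vector"]))

# Tiny stand-in for the second_brain collection.
second_brain = [
    {"text": "Dentist appointment on Friday", "vector": [0.9, 0.1, 0.0]},
    {"text": "Qdrant runs in a Docker volume", "vector": [0.1, 0.9, 0.2]},
]
```

In Engram the same lookup runs inside the local Qdrant container, so neither the query embedding nor the stored memories cross the network.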


Data Flow Architecture

[Diagram: side-by-side comparison. Traditional AI ("data leaks possible"): your data travels over HTTPS/TLS to a public cloud. Engram ("private by architecture"): your data moves over a local bus to the Engram Core and never leaves the machine.]