Engram

Installation

Engram runs entirely on your machine via Docker. The setup script handles dependencies, secret generation, and directory structure — then a single start command brings up the full stack.

1. Prerequisites

Install these before continuing:

  • Docker Desktop — runs all backend services
  • Python 3.11+ — used by the setup script and host-side sensors
  • Ollama running at localhost:11434 — pull the two required models:
>ollama pull llama3.1:latest && ollama pull nomic-embed-text:latest

8 GB RAM minimum: Qdrant (2 GB), API and APScheduler (2 GB), Dashboard (512 MB), plus the loaded Ollama model (varies). 16 GB recommended for full-quality Llama 3.1 inference.
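Before running setup, you can verify the Ollama prerequisite from Python by querying the daemon's `GET /api/tags` endpoint, which lists the models already pulled. This is an optional sketch, not part of the official setup; the endpoint and default port are Ollama's, the helper name is ours:

```python
import json
import urllib.request

REQUIRED = {"llama3.1:latest", "nomic-embed-text:latest"}

def missing_models(tags: dict) -> set:
    """Given the JSON body of Ollama's GET /api/tags response,
    return the required models that have not been pulled yet."""
    present = {m.get("name", "") for m in tags.get("models", [])}
    return REQUIRED - present

if __name__ == "__main__":
    # Query the local Ollama daemon (assumes the default port 11434).
    with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
        tags = json.load(resp)
    for name in sorted(missing_models(tags)):
        print(f"missing: ollama pull {name}")
```

If the script prints nothing, both models are present and you can proceed.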

2. Clone & Set Up

Clone the repo and run the one-time setup script. It creates a Python venv, installs dependencies, and auto-generates the encryption key and audit secret into .env.

>git clone https://github.com/engram-os/engram-os.git && cd engram-os
>chmod +x scripts/setup.sh && ./scripts/setup.sh
What the script does:

  • Dependencies: creates the venv and installs all Python packages from requirements.txt.
  • Secrets: generates ENGRAM_ENCRYPTION_KEY and AUDIT_HMAC_SECRET locally using OS secure random; nothing is transmitted externally.
  • Models: auto-pulls llama3.1:latest and nomic-embed-text:latest from Ollama if not already downloaded.
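The secret-generation step can be sketched with the stdlib `secrets` module, which draws from the OS CSPRNG. The exact key formats below are assumptions for illustration; the real setup.sh may encode its values differently:

```python
import secrets

def generate_env_secrets() -> dict:
    """Sketch of local secret generation: both values come from the OS
    secure random source via the stdlib `secrets` module, so nothing
    leaves the machine. Formats are illustrative, not setup.sh's actual ones."""
    return {
        "ENGRAM_ENCRYPTION_KEY": secrets.token_urlsafe(32),  # 32 random bytes, URL-safe base64
        "AUDIT_HMAC_SECRET": secrets.token_hex(32),          # 64 hex chars = 256 bits
    }

def render_env(values: dict) -> str:
    """Render KEY=value lines suitable for appending to .env."""
    return "".join(f"{key}={value}\n" for key, value in values.items())
```

Because the values never touch the network, regenerating them is as simple as rerunning the script and replacing the two lines in .env.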

3. Launch

Start the full stack — Docker services, host-side sensors, and the dashboard.

>./scripts/start.sh
Services after startup: localhost:8000 (API) · localhost:8501 (Dashboard). APScheduler agents run in-process on a 15-minute heartbeat, so no separate worker is needed.
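The in-process heartbeat pattern can be illustrated with a stdlib-only sketch. The real stack uses APScheduler inside the API process; this minimal class (our own, hypothetical) just shows why no separate worker is needed:

```python
import threading

class Heartbeat:
    """Stdlib sketch of an in-process periodic runner, illustrating the
    15-minute heartbeat pattern the APScheduler agents follow.
    This is not Engram's actual agent code."""

    def __init__(self, interval_s: float, job):
        self.interval_s = interval_s
        self.job = job
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Event.wait doubles as an interruptible sleep: it returns True
        # (and the loop exits) as soon as stop() sets the event.
        while not self._stop.wait(self.interval_s):
            self.job()

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

# In production the interval would be 15 * 60 seconds:
# hb = Heartbeat(15 * 60, run_agents); hb.start()
```

Because the runner lives in the same process as the API, it shares its database connections and configuration, which is the trade-off that lets the stack skip a dedicated worker container.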