Knowledge Base
Turn Engram into a domain expert. It crawls external documentation sites, vectorizes the content into a local ChromaDB instance, and answers complex questions with citation-backed accuracy.
- Intelligently traverses documentation pages, respecting domain boundaries and separating code blocks from prose.
- Uses ChromaDB to store high-dimensional embeddings on your disk. No data ever leaves your machine.
- Retrieval-Augmented Generation fetches the exact documentation snippet needed to ground the AI's answer, as sketched below.
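A minimal sketch of that retrieval step, assuming the snippets already sit in a local ChromaDB collection named `docs` under `engram_db` and carry a `source` URL in their metadata (collection name and prompt wording are illustrative, not Engram's internals):

```python
import chromadb

# Open the on-disk vector store; nothing leaves the machine.
client = chromadb.PersistentClient(path="engram_db")
collection = client.get_or_create_collection("docs")

def retrieve_context(question: str, k: int = 3) -> str:
    """Return the k most relevant snippets, each tagged with its source URL."""
    results = collection.query(query_texts=[question], n_results=k)
    snippets = results["documents"][0]
    sources = [meta["source"] for meta in results["metadatas"][0]]
    return "\n\n".join(f"[{src}]\n{text}" for src, text in zip(sources, snippets))

question = "How do I define a route in Flask?"
prompt = (
    "Answer using only the documentation below and cite the bracketed URLs.\n\n"
    f"{retrieve_context(question)}\n\nQuestion: {question}"
)
```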
Interface Options
Engram provides two ways to interact with your knowledge base: a visual dashboard for ease of use, and a CLI tool for automation.
For example, a question about Flask routing can return prose that cites the `@app.route()` decorator alongside the exact code block it was pulled from:

```python
# Example
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World"
```
Technical Setup
1. Install Dependencies
The crawler requires specific Python libraries for scraping and vector storage. Ensure these are in your backend container.
- beautifulsoup4
- requests
- chromadb
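With those libraries in place, the crawl-and-embed step can look roughly like the sketch below. The start URL, collection name, and chunking are illustrative assumptions, not Engram's actual code:

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse
import chromadb

START_URL = "https://flask.palletsprojects.com/"       # illustrative target
client = chromadb.PersistentClient(path="engram_db")   # on-disk vector store
collection = client.get_or_create_collection("docs")

def ingest(url: str) -> list[str]:
    """Fetch one page, store its prose and code separately, return in-domain links."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Separate code blocks from prose before embedding.
    code_blocks = [pre.get_text() for pre in soup.find_all("pre")]
    for pre in soup.find_all("pre"):
        pre.decompose()
    prose = soup.get_text(separator=" ", strip=True)

    chunks = [prose] + code_blocks
    collection.add(
        ids=[f"{url}#{i}" for i in range(len(chunks))],
        documents=chunks,
        metadatas=[{"source": url} for _ in chunks],
    )

    # Respect domain boundaries: only follow links on the same host.
    domain = urlparse(START_URL).netloc
    links = [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]
    return [link for link in links if urlparse(link).netloc == domain]
```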
2. Git Exclusion (Critical)

The vector database (`engram_db`) is large and contains generated data. You must exclude it from version control:
```
# .gitignore
engram_db/
chroma.sqlite3
```