Docker Compose is the simplest deployment option: all services run on a single node. For high availability, auto-scaling, or multi-region deployments, see AWS Deployment with Terraform.

Prerequisites

  • Linux/macOS/Windows with WSL
  • Docker 24.0+ with Compose v2 installed
Resource   Recommended
CPU        4 cores
RAM        8 GB
Storage    50 GB SSD
Memory and storage might need to be adjusted depending on the total data indexed.
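To confirm the Docker 24.0+ requirement is met before proceeding, a quick check can help. This is a minimal sketch assuming `docker` is on PATH and `sort -V` is available; the `version_ge` helper is illustrative, not part of Omni:

```shell
# version_ge succeeds when $1 >= $2 for dotted version strings
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Query the Docker engine version ("0" if docker is unavailable)
docker_version="$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0)"
if version_ge "$docker_version" "24.0"; then
  echo "Docker $docker_version OK"
else
  echo "Docker 24.0+ required (found: $docker_version)" >&2
fi
```

The same helper works for any dotted version comparison, e.g. checking `docker compose version` output.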

Step 1: Download Docker Compose Configuration

# Create project directory
mkdir -p omni/docker && cd omni

# Download required files
curl -fsSL -o docker/docker-compose.yml https://raw.githubusercontent.com/getomnico/omni/master/docker/docker-compose.yml
curl -fsSL -o Caddyfile https://raw.githubusercontent.com/getomnico/omni/master/Caddyfile
curl -fsSL -o .env.example https://raw.githubusercontent.com/getomnico/omni/master/.env.example

Step 2: Setup Environment

cp .env.example .env

Edit .env and update the following variables:
# Database (generate secure password, e.g. openssl rand -base64 32)
DATABASE_PASSWORD=<your_db_password>

# Security (generate secure keys, e.g., openssl rand -hex 16)
ENCRYPTION_KEY=<your_encryption_key>
ENCRYPTION_SALT=<your_encryption_salt>

# Application
OMNI_DOMAIN=<your_domain_name>
APP_URL=https://<your_domain_name>

# Enabled Connectors
# Specify connector names as a comma-separated string
# Pick and choose connectors you wish to run: google, slack, atlassian, etc. (See the .env file for valid values)
ENABLED_CONNECTORS=google,web
See Configuration Reference for all options.
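If helpful, the three secrets above can be generated in one pass and printed for copy/paste into .env. A minimal sketch, assuming `openssl` is installed:

```shell
# Generate the secrets from Step 2 (openssl assumed available)
DATABASE_PASSWORD="$(openssl rand -base64 32)"
ENCRYPTION_KEY="$(openssl rand -hex 16)"
ENCRYPTION_SALT="$(openssl rand -hex 16)"

# Print in .env format for copy/paste
printf 'DATABASE_PASSWORD=%s\n' "$DATABASE_PASSWORD"
printf 'ENCRYPTION_KEY=%s\n' "$ENCRYPTION_KEY"
printf 'ENCRYPTION_SALT=%s\n' "$ENCRYPTION_SALT"
```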

Step 3: Start Services

For convenience, define an alias:
alias omni-compose="docker compose -f docker/docker-compose.yml --env-file .env"
Start Omni:
omni-compose up -d
Monitor startup:
omni-compose logs -f
omni-compose ps # all services should show "healthy"
Access Omni at https://<your_domain_name>; the first user to register becomes the admin. Once all services are healthy, follow the Initial Setup guide to configure LLM providers, embeddings, and connectors.
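The startup monitoring above can be scripted with a small retry loop; this is an illustrative sketch (the `wait_for` helper is not part of Omni):

```shell
# Run a command until it succeeds, up to $1 attempts,
# pausing briefly between tries; fails after the last attempt.
wait_for() {
  attempts=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$attempts" ] && return 1
    sleep 2
  done
}

# e.g. block until the web UI answers locally:
#   wait_for 30 curl -fsS http://localhost:3000 >/dev/null
```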

GPU Acceleration (for local models)

If you wish to use local inference (for either language or embedding models), you will almost certainly need GPU acceleration. To enable it, download the GPU override:
curl -fsSL -o docker/docker-compose.gpu.yml https://raw.githubusercontent.com/getomnico/omni/master/docker/docker-compose.gpu.yml
Then start with GPU support:
omni-compose -f docker/docker-compose.gpu.yml --profile local-embeddings --profile vllm up -d
Then configure vLLM as your LLM provider and “local” as your embedding provider in the Web UI. The compose stack runs separate containers for local LLM and embedding inference, so you can choose each one independently of the other - e.g., you could use a cloud LLM with a local embedding model.
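For orientation, a GPU override of this kind typically grants containers access to NVIDIA devices via a Compose device reservation (which requires the NVIDIA Container Toolkit on the host). The snippet below is only an illustration of that shape; the service name `vllm` is an assumption, and the downloaded docker-compose.gpu.yml is authoritative:

```yaml
# Illustrative only - use the downloaded override file, not this sketch
services:
  vllm:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```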

Stopping Services

To stop all services:
omni-compose down
To stop and remove all data (including the database):
omni-compose down -v
The -v flag will permanently delete all indexed data and settings.
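Before running `down -v`, you may want a database dump. A hypothetical helper, assuming the service is named `postgres` and the database superuser is `postgres` (adjust both to match your .env):

```shell
# Dump all databases to a dated SQL file before destroying volumes.
# Service name and -U user are assumptions; check your compose file and .env.
backup_db() {
  docker compose -f docker/docker-compose.yml exec -T postgres \
    pg_dumpall -U postgres > "omni-backup-$(date +%Y%m%d).sql"
}

# Usage: backup_db && omni-compose down -v
```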

Troubleshooting

Check Docker logs for errors:
omni-compose logs
Common causes of startup failures:
  • Port 3000 already in use
  • Missing environment variables
Verify the web service is running:
omni-compose ps omni-web
Check if the port is accessible:
curl http://localhost:3000
Ensure PostgreSQL is fully started:
omni-compose logs postgres
Wait for the message: “database system is ready to accept connections”
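That wait can be automated by polling the logs for the readiness message; a sketch (the `wait_for_pg` helper is hypothetical, and the service name `postgres` is assumed):

```shell
# Poll the postgres service logs until it reports readiness
wait_for_pg() {
  until docker compose -f docker/docker-compose.yml logs postgres 2>/dev/null \
      | grep -q "ready to accept connections"; do
    sleep 2
  done
}
```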
Check your LLM provider configuration in the admin panel:
  • Confirm that the API keys are set
  • Review AI service logs:
omni-compose logs omni-ai

Next Steps

  • Initial Setup: configure LLMs, embeddings, and connectors
  • Connect Data Sources: Google, Slack, Confluence, etc.
  • Configuration Reference: all environment variables
  • User Management: add users and permissions