This guide will help you deploy Omni on your local machine using Docker Compose.
For production deployments, see our comprehensive Deployment Guide.

Step 1: Download Docker Compose Configuration

mkdir -p omni/docker && cd omni
curl -fsSL -o docker/docker-compose.yml https://raw.githubusercontent.com/getomnico/omni/master/docker/docker-compose.yml
curl -fsSL -o Caddyfile https://raw.githubusercontent.com/getomnico/omni/master/Caddyfile
curl -fsSL -o .env.example https://raw.githubusercontent.com/getomnico/omni/master/.env.example
cp .env.example .env
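Before starting the stack, you may want to confirm the files actually landed. The helper function below is our own sketch, not part of Omni:

```shell
# Sanity check: confirm each downloaded file exists and is non-empty
check_files() {
  for f in "$@"; do
    if [ -s "$f" ]; then echo "ok: $f"; else echo "missing: $f"; fi
  done
}
check_files docker/docker-compose.yml Caddyfile .env
```

If anything reports missing, re-run the corresponding curl command before continuing.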

Step 2: Start Omni Services

Launch all services using Docker Compose. We specify --profile web to enable only the Web Connector:
docker compose -f docker/docker-compose.yml --env-file .env --profile web up -d
The first startup will take a few minutes as Docker pulls images and initializes the database.
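Rather than repeatedly refreshing the browser while services come up, you can poll the web UI from a shell. The port (3000) is the one Omni's web UI uses in this guide; the polling helper itself is just a sketch, not part of Omni:

```shell
# Poll a URL until it responds, waiting 5 seconds between attempts
wait_for_url() {
  url=$1; tries=${2:-24}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS -o /dev/null "$url"; then
      echo "up: $url"; return 0
    fi
    i=$((i + 1)); sleep 5
  done
  echo "timed out: $url"; return 1
}
# usage: wait_for_url http://localhost:3000
```

With the default of 24 tries this waits up to about two minutes, which should comfortably cover the first startup.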

Step 3: Access the Web UI

Open your browser and navigate to:
http://localhost:3000
You should see the Omni login page. Create your first admin account:
  1. Click “Create Account”
  2. Enter an email and password
  3. The first user is automatically granted admin privileges

Step 4: Connect Your First Data Source

In our deployment, we’ve only enabled the Web Connector, so we’ll set it up to get some data into Omni.
  1. Navigate to Settings → Integrations
  2. Under Available Integrations, find the Web connector and click Connect
  3. Enter the root URL of the website you want to index (e.g., https://docs.example.com)
  4. Click Connect and wait for initial sync to complete
Start with a low Max Pages value (e.g., 100) to test the crawler before doing a full crawl.
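To pick a sensible Max Pages value, you can get a rough page count from the site's sitemap, if it publishes one at the conventional /sitemap.xml location. The helper below is a sketch of our own; a sitemap index or a missing sitemap would need different handling:

```shell
# Count the lines on stdin that contain a <loc> entry -- in a simple
# sitemap, each <loc> holds one URL the site advertises for crawling
count_locs() {
  grep -c '<loc>'
}
# Example (docs.example.com is the placeholder URL from the step above):
# curl -fsS https://docs.example.com/sitemap.xml | count_locs
```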

Step 5: Configure LLM and Embedding models

While Omni is busy indexing, let’s configure LLM providers for AI chat.
  1. Navigate to Settings → LLM Providers.
  2. Click Connect next to the provider you want to use. You can connect more than one provider; users will be able to switch between all available models.
  3. Add your API key and hit Connect.
  4. (Optional) Navigate to Embedding Providers and set up a provider of your choice; this enables semantic search in Omni. Without an embedding provider configured, search still works but is limited to keyword matching.
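Depending on the version, provider API keys may also be settable in the .env file you copied in Step 1. The variable name below is purely illustrative — check .env.example for the keys your deployment actually reads:

```shell
# Hypothetical .env entry -- consult .env.example for the real variable names
OPENAI_API_KEY=sk-your-key-here
```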
Congratulations! You now have a working Omni deployment running on your local machine.

Next Steps

Search Your Data

Learn how to use Omni’s search interface

AI Assistant

Ask questions and get AI-powered answers

Add More Connectors

Connect additional data sources

Deploy to Production

Move to a production environment
Having trouble? See the Troubleshooting section in the Docker Compose deployment guide.