
This guide will help you deploy Omni on your local machine using Docker Compose.
For production deployments, see our comprehensive Deployment Guide.

Step 1: Download Omni

# Download the latest release
mkdir omni && cd omni
curl -fsSL "https://github.com/getomnico/omni/releases/latest/download/omni-docker-compose.tar.gz" \
  -o omni-docker-compose.tar.gz
tar xzf omni-docker-compose.tar.gz
rm omni-docker-compose.tar.gz

# Set up the environment
cp .env.example .env
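Before starting the stack, it can help to review which settings are actually active in the file you just copied. A minimal sketch (the helper name `show_env_settings` is ours, not part of Omni):

```shell
# Sketch: list the active (non-comment, non-blank) lines in an env file,
# handy for reviewing .env before starting the stack.
show_env_settings() {
  grep -vE '^(#|$)' "$1"
}

# Example: show_env_settings .env
```

Anything printed here overrides the defaults baked into the Compose file, so it is worth a quick scan before `docker compose up`.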

Step 2: Start Omni Services

Launch all services using Docker Compose. The default configuration enables only the Web Connector:
docker compose -f docker/docker-compose.yml --env-file .env up -d
The first startup will take a few minutes as Docker pulls images and initializes the database.
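If you want a script to wait for that first startup to finish, one approach is to poll the UI until it responds. A sketch, assuming the default port 3000 from this guide (the function name `wait_for_url` is ours):

```shell
# Sketch: poll a URL until it responds, so scripts can wait for startup.
wait_for_url() {
  url="$1"; retries="${2:-30}"; delay="${3:-2}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if curl -fsS -o /dev/null --max-time 2 "$url"; then
      echo "up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Example: wait_for_url "http://localhost:3000"
```

While waiting, `docker compose -f docker/docker-compose.yml ps` shows container status, and `docker compose -f docker/docker-compose.yml logs -f` streams startup logs.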

Step 3: Access the Web UI

Open your browser and navigate to:
http://localhost:3000
You should see the Omni login page. Create your first admin account:
  1. Click “Create Account”
  2. Enter an email and password
  3. The first user is automatically granted admin privileges

Step 4: Connect Your First Data Source

In our deployment, we’ve only enabled the Web Connector, so we’ll set it up to get some data into Omni.
  1. Navigate to Settings → Integrations
  2. Under Available Integrations, find the Web connector and click Connect
  3. Enter the root URL of the website you want to index (e.g., https://docs.example.com)
  4. Click Connect and wait for initial sync to complete
Start with a low Max Pages value (e.g., 100) to test the crawler before doing a full crawl.
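To pick a sensible Max Pages value, you can get a rough page count from the site's sitemap, if it publishes one. A sketch (the helper name `count_sitemap_urls` is ours; it simply counts `<loc>` entries on stdin):

```shell
# Sketch: count <loc> entries in a sitemap fed on stdin, as a rough
# estimate of how many pages a full crawl would cover.
count_sitemap_urls() {
  grep -c "<loc>" || true
}

# Example: curl -fsS "https://docs.example.com/sitemap.xml" | count_sitemap_urls
```

This is only an estimate: sitemap index files nest further sitemaps, and crawlers may discover pages a sitemap omits.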

Step 5: Configure LLM and Embedding models

While Omni is busy indexing, let’s configure LLM providers for AI chat.
  1. Navigate to Settings → LLM Providers.
  2. Click Connect next to the provider you want to use. You can connect more than one provider; users will be able to switch between all available models.
  3. Add your API key and hit Connect.
  4. (Optional) Navigate to Embedding Providers and set up a provider of your choice; this enables semantic search in Omni. Without an embedding provider configured, search still works but is limited to keyword matching.
Congratulations! You now have a working Omni deployment running on your local machine.

Next Steps

Search Your Data

Learn how to use Omni’s search interface

AI Assistant

Ask questions and get AI-powered answers

Add More Connectors

Connect additional data sources

Deploy to Production

Move to a production environment
Having trouble? See the Troubleshooting section in the Docker Compose deployment guide.