For production deployments, see our comprehensive Deployment Guide.
Step 1: Download Docker Compose Configuration
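One common way to fetch the Compose file is with curl. The download URL below is a placeholder for illustration; substitute the actual URL from the Omni repository or release page:

```shell
# Create a working directory for the deployment
mkdir -p omni && cd omni

# Download the Compose configuration
# (placeholder URL -- replace with the real one from the Omni repository)
curl -fsSL https://example.com/omni/docker-compose.yml -o docker-compose.yml
```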
Step 2: Start Omni Services
Launch all services using Docker Compose. We specify `--profile web` to enable only the Web Connector:
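Assuming the Compose file from Step 1 is in the current directory, the launch command looks like this:

```shell
# Start all services with only the Web Connector profile enabled;
# -d (detached) keeps them running in the background
docker compose --profile web up -d
```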
The first startup will take a few minutes as Docker pulls images and initializes the database.
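To watch startup progress, you can follow the service logs or list container status. These are standard Docker Compose commands, not anything specific to Omni:

```shell
# Follow logs from all services in the web profile (Ctrl+C to stop following)
docker compose --profile web logs -f

# Or check that every container reports a running/healthy state
docker compose --profile web ps
```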
Step 3: Access the Web UI
Open your browser and navigate to the Web UI:
- Click “Create Account”
- Enter an email and password
- The first user is automatically granted admin privileges
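Before opening the browser, you can check that the UI is reachable from the command line. The port below is an assumption based on a typical default mapping in the Compose file; adjust it if your configuration differs:

```shell
# Prints the HTTP status code (200 once the web UI is up).
# Port 3000 is an assumed default -- check your Compose file's port mapping.
curl -fsS -o /dev/null -w "%{http_code}\n" http://localhost:3000
```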
Step 4: Connect Your First Data Source
In our deployment, we’ve only enabled the Web Connector, so we’ll set it up to get some data into Omni.
- Navigate to Settings → Integrations
- Under Available Integrations, find the Web connector and click Connect
- Enter the root URL of the website you want to index (e.g., https://docs.example.com)
- Click Connect and wait for the initial sync to complete
Step 5: Configure LLM and Embedding models
While Omni is busy indexing, let’s configure LLM providers for AI chat.
- Navigate to Settings → LLM Providers.
- Click Connect next to the provider you want to use. You can connect more than one; users will be able to switch between all available models.
- Add your API key and hit Connect.
- (Optional) Navigate to Embedding Providers and set up a provider of your choice; this enables semantic search in Omni. Without a provider configured, search will still work, but it will be limited to keyword matching.
Congratulations! You now have a working deployment of Omni running on your local machine.
Next Steps
- Search Your Data: learn how to use Omni’s search interface
- AI Assistant: ask questions and get AI-powered answers
- Add More Connectors: connect additional data sources
- Deploy to Production: move to a production environment