Docker Compose is the simplest deployment option: all services run on a single node. For high availability, auto-scaling, or multi-region deployments, see AWS Deployment with Terraform.
Prerequisites
- Linux/macOS/Windows with WSL
- Docker 24.0+ with Compose v2 installed
| Resource | Recommended |
|---|---|
| CPU | 4 cores |
| RAM | 8 GB |
| Storage | 50 GB SSD |
Step 1: Download Docker Compose Configuration
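The original download command was lost in extraction. A typical fetch might look like the following; the URL is a placeholder, so substitute the one published in the Omni documentation:

```shell
# Create a working directory and download the Compose file
# (the URL below is a placeholder; use the one from the Omni docs)
mkdir -p omni && cd omni
curl -fsSLO https://<omni_release_url>/docker-compose.yml
```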
Step 2: Setup Environment
Copy the example environment file to .env and update the following variables:
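A minimal sketch of this step, assuming the download ships an example file named `.env.example`; the variable names shown are illustrative only, not Omni's actual configuration keys:

```shell
# Copy the example environment file and edit it
cp .env.example .env

# Illustrative variable names only; consult the Configuration
# Reference for the real ones:
#   WEB_DOMAIN=<your_domain_name>
#   POSTGRES_PASSWORD=<strong_random_password>
```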
Step 3: Start Services
For convenience, you can define a shell alias for the Compose invocation. Once the services are running, open https://<your_domain_name> in a browser; the first user to sign up becomes the admin.
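The start command and alias above can be sketched with standard Compose syntax:

```shell
# Start all services in the background
docker compose up -d

# Optional convenience alias for subsequent commands
alias omni='docker compose'
omni ps   # e.g. check service status
```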
Once all services are healthy, follow the Initial Setup guide to configure LLM providers, embeddings, and connectors.
Self-hosted Local Inference
To run LLMs and/or embedding models locally alongside Omni, use the docker-compose.local-inference.yml overlay. It starts:
- A llama.cpp container serving an OpenAI-compatible LLM endpoint
- A HuggingFace TEI container serving an embedding endpoint
In the admin panel, point the OpenAI-compatible LLM provider at http://llama-cpp:${LOCAL_INFERENCE_MODEL_PORT} and (optionally) the Local embedding provider at http://embeddings:${LOCAL_EMBEDDINGS_PORT}/v1.
The two containers are independent, so you can mix a cloud LLM with a local embedding model, or vice versa. GPU acceleration can be enabled by editing the overlay to pass through the appropriate device.
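Applying the overlay is standard Compose file layering; the base file name `docker-compose.yml` is an assumption from Step 1:

```shell
# Layer the local-inference overlay on top of the base stack
docker compose \
  -f docker-compose.yml \
  -f docker-compose.local-inference.yml \
  up -d
```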
Stopping Services
To stop all services, run `docker compose down`.
Troubleshooting
Services won't start
Check Docker logs for errors. Common causes:
- Port 3000 already in use
- Missing environment variables
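The log check above uses standard Compose commands; the service name in the second example is an assumption:

```shell
# Tail recent logs across all services
docker compose logs --tail=100

# Or follow a single service (name is an assumption)
docker compose logs -f web
```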
Cannot access the web UI
Verify that the web service is running and that its port is accessible.
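Both checks can be sketched as follows; the service name `web` is an assumption, and port 3000 is taken from the common causes listed above:

```shell
# Is the web container up? (service name is an assumption)
docker compose ps web

# Is the port reachable? (port 3000 per the note above)
curl -I http://localhost:3000
```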
Database connection errors
Ensure PostgreSQL has fully started. Wait for the log message: "database system is ready to accept connections"
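A typical way to watch for that message, assuming the database service is named `postgres` in the Compose file:

```shell
# Follow PostgreSQL startup logs (service name is an assumption)
docker compose logs -f postgres
```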
AI responses not working
Check your LLM provider configuration in the admin panel:
- Confirm that the API keys are set correctly
- Review AI service logs:
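The log review above might look like this; the service name `ai` is purely an assumption, so substitute the actual name from the Compose file:

```shell
# Inspect the AI service logs (service name is an assumption)
docker compose logs -f ai
```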
Next Steps
- Initial Setup: Configure LLMs, embeddings, and connectors
- Connect Data Sources: Google, Slack, Confluence, etc.
- Configuration Reference: All environment variables
- User Management: Add users and permissions