Omni supports two deployment strategies: Docker Compose and AWS Terraform.

Deployment Options

Docker Compose

Best for development, small teams, and single-server production. Get running in 10-15 minutes.
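As a rough sketch of what the Compose deployment wires together, here is a minimal compose file covering the components listed below. The service layout matches the components table, but the image names, port, and volume name are assumptions; the actual file ships with Omni.

```yaml
# Hypothetical docker-compose.yml sketch -- image names and ports are assumptions.
services:
  omni-web:
    image: omni/omni-web:latest   # SvelteKit frontend and API
    ports:
      - "3000:3000"
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:16            # real deployment needs pg_search and pgvector
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7                # caching and sessions
volumes:
  pgdata:
```

The remaining services (omni-searcher, omni-indexer, omni-ai, omni-connector-manager) follow the same pattern: one container each, depending on PostgreSQL and Redis.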

AWS Terraform

Best for high availability, auto-scaling, and multi-region deployments. Production-ready in 30-45 minutes.
After deploying, follow the Initial Setup guide to configure LLM providers, embeddings, and connectors.
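For the Terraform path, the tunable knobs typically live in a variables file. The sketch below is hypothetical: the variable names and defaults are assumptions, not Omni's actual module interface.

```hcl
# Hypothetical production.tfvars -- variable names are assumptions.
region             = "us-east-1"
environment        = "production"
enable_autoscaling = true
min_capacity       = 2   # baseline instances for high availability
max_capacity       = 6   # ceiling for auto-scaling
```

Applying it would follow the standard Terraform workflow: `terraform init`, then `terraform apply -var-file=production.tfvars`.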

Omni Components

All deployments include the same core components:
Component               Purpose
omni-web                SvelteKit frontend and API
omni-searcher           Search query processing
omni-indexer            Document processing
omni-ai                 LLM and embedding orchestration
omni-connector-manager  Orchestrates connector services
PostgreSQL              Database with pg_search and pgvector
Redis                   Caching and sessions
Each connector runs in its own Docker container. Connectors are lightweight services whose only job is to act as an interface between Omni and a third-party API. For reference, the built-in Google connector uses roughly 5 MB of memory at idle and roughly 50 MB during indexing.
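Because the memory figures above are small and predictable, it is reasonable to cap connector containers. The snippet below is a hypothetical Compose override; the service name is an assumption.

```yaml
# Hypothetical override -- the connector service name is an assumption.
services:
  omni-connector-google:
    deploy:
      resources:
        limits:
          memory: 128M   # generous headroom over the ~50 MB indexing peak
```

A hard limit like this keeps a misbehaving connector from starving the core services on a single-server Compose deployment.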