This page provides a comprehensive reference for all environment variables and configuration options available in Omni.
All environment variables can be set in your .env file for Docker Compose deployments or in terraform.tfvars for AWS deployments.

Database Configuration

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `DATABASE_HOST` | Yes | `postgres` | Database hostname or IP address |
| `DATABASE_PORT` | Yes | `5432` | Database port |
| `DATABASE_USERNAME` | Yes | `omni` | Database username |
| `DATABASE_PASSWORD` | Yes | - | Database password (use a strong password) |
| `DATABASE_NAME` | Yes | `omni` | Database name |
| `DATABASE_SSL` | No | `false` | Enable SSL for the database connection |
| `DB_MAX_CONNECTIONS` | No | `10` | Connection pool size per service |
| `DB_ACQUIRE_TIMEOUT_SECONDS` | No | `3` | Connection acquisition timeout |
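A minimal database block for a Docker Compose `.env` file might look like this (the password value is a placeholder, not a real credential):

```shell
# Database connection ("postgres" is the Compose service name)
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_USERNAME=omni
DATABASE_PASSWORD=change-me-to-a-strong-password
DATABASE_NAME=omni
DATABASE_SSL=false
```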

Redis Configuration

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `REDIS_URL` | Yes | `redis://redis:6379` | Redis connection URL (format: `redis://host:port`) |
For Redis with password:
REDIS_URL=redis://:password@redis:6379

Application Configuration

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `APP_URL` | Yes | `http://localhost:3000` | Public-facing application URL (include protocol) |
| `OMNI_DOMAIN` | No | `localhost` | Domain name for the application |
| `OMNI_VERSION` | No | `latest` | Docker image version tag for all Omni services (e.g., `0.2.1`) |
| `SESSION_SECRET` | Yes | - | Secret key for session encryption (32+ characters) |
| `SESSION_COOKIE_NAME` | No | `auth-session` | Name of the session cookie |
| `SESSION_DURATION_DAYS` | No | `7` | Session expiry in days |
| `ACME_EMAIL` | No | - | Email for Let's Encrypt notifications (for automatic HTTPS) |
Never use the same SESSION_SECRET across different environments. Generate unique secrets for dev, staging, and production.

Security & Encryption

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `ENCRYPTION_KEY` | Yes | - | Encryption key for sensitive credentials (32+ characters) |
| `ENCRYPTION_SALT` | Yes | - | Salt for key derivation (16+ characters) |
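One way to generate values of the required lengths is with `openssl rand` (assuming OpenSSL is installed; any cryptographically secure random generator works):

```shell
# 32 random bytes = 64 hex characters, satisfying the 32+ character requirements
openssl rand -hex 32   # use for SESSION_SECRET
openssl rand -hex 32   # use for ENCRYPTION_KEY
# 16 random bytes = 32 hex characters, satisfying the 16+ character salt requirement
openssl rand -hex 16   # use for ENCRYPTION_SALT
```

Run the command once per value and per environment so no secret is ever shared between deployments.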

Service Ports

Core Services

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `WEB_PORT` | No | `3000` | SvelteKit web application port |
| `SEARCHER_PORT` | No | `3001` | Search service port |
| `INDEXER_PORT` | No | `3002` | Indexer service port |
| `AI_SERVICE_PORT` | No | `3003` | AI service port |
| `CONNECTOR_MANAGER_PORT` | No | `3004` | Connector manager port |

Connector Services

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `GOOGLE_CONNECTOR_PORT` | No | `4001` | Google connector port |
| `SLACK_CONNECTOR_PORT` | No | `4002` | Slack connector port |
| `ATLASSIAN_CONNECTOR_PORT` | No | `4003` | Atlassian connector port |
| `WEB_CONNECTOR_PORT` | No | `4004` | Web connector port |
| `GITHUB_CONNECTOR_PORT` | No | `4005` | GitHub connector port |
| `FIREFLIES_CONNECTOR_PORT` | No | `4009` | Fireflies connector port |

Optional Services

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `VLLM_PORT` | No | `8000` | vLLM inference server port (for local LLMs) |
| `LOCAL_EMBEDDINGS_PORT` | No | `8001` | Local embedding model port (for local embeddings via TEI) |
In Docker Compose, services communicate via service names (e.g., http://searcher:3001). Ports only need to be exposed to the host for debugging.

Inter-Service URLs

These URLs are used for internal communication between services. In Docker Compose, they use the service name and port variable interpolation.

Core Service URLs

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `SEARCHER_URL` | Yes | `http://searcher:${SEARCHER_PORT}` | Search service URL |
| `INDEXER_URL` | Yes | `http://indexer:${INDEXER_PORT}` | Indexer service URL |
| `AI_SERVICE_URL` | Yes | `http://ai:${AI_SERVICE_PORT}` | AI service URL |
| `CONNECTOR_MANAGER_URL` | Yes | `http://connector-manager:${CONNECTOR_MANAGER_PORT}` | Connector manager URL |
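For example, with Docker Compose variable interpolation, overriding a single port variable keeps the corresponding URL in sync:

```shell
# .env — the URL picks up the overridden port automatically
SEARCHER_PORT=3101
SEARCHER_URL=http://searcher:${SEARCHER_PORT}
# Resolves to http://searcher:3101 inside the Compose network
```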

Connector URLs

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `GOOGLE_CONNECTOR_URL` | Yes | `http://google-connector:${GOOGLE_CONNECTOR_PORT}` | Google connector URL |
| `SLACK_CONNECTOR_URL` | Yes | `http://slack-connector:${SLACK_CONNECTOR_PORT}` | Slack connector URL |
| `ATLASSIAN_CONNECTOR_URL` | Yes | `http://atlassian-connector:${ATLASSIAN_CONNECTOR_PORT}` | Atlassian connector URL |
| `WEB_CONNECTOR_URL` | Yes | `http://web-connector:${WEB_CONNECTOR_PORT}` | Web connector URL |
| `GITHUB_CONNECTOR_URL` | Yes | `http://github-connector:${GITHUB_CONNECTOR_PORT}` | GitHub connector URL |
| `FIREFLIES_CONNECTOR_URL` | Yes | `http://fireflies-connector:${FIREFLIES_CONNECTOR_PORT}` | Fireflies connector URL |

Optional Service URLs

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `VLLM_URL` | No | `http://vllm:${VLLM_PORT}/v1` | vLLM inference server URL (used by the vLLM Docker service) |
| `LOCAL_EMBEDDINGS_URL` | Conditional | `http://embeddings:${LOCAL_EMBEDDINGS_PORT}/v1` | Local embeddings service URL (required if using the local embedding provider) |

LLM Provider Configuration

LLM providers and models are configured through the Admin Panel (Settings > LLM Providers). Multiple providers can be active simultaneously. Users can select which model to use on a per-chat basis.

Supported Providers

Omni supports four LLM provider types:
| Provider | Required Config | Description |
|----------|-----------------|-------------|
| Anthropic | API Key | Direct access to Claude models via Anthropic's API |
| OpenAI | API Key | Access to GPT models via OpenAI's API |
| AWS Bedrock | AWS Region (+ optional credentials) | Claude and other models via AWS Bedrock |
| vLLM | API URL | Self-hosted models via a vLLM inference server |

Predefined Models

When you add a provider, the following models are automatically available:
  • Anthropic: Claude Opus 4.6, Claude Sonnet 4.5, Claude Haiku 4.5
  • OpenAI: GPT-5.2, GPT-5 Mini, GPT-4.1
  • AWS Bedrock: Claude Opus 4.6, Claude Sonnet 4.5, Claude Haiku 4.5, Amazon Nova Pro
  • vLLM: No predefined models — specify any HuggingFace model ID loaded in your vLLM server.

AWS Bedrock Environment Variables

For Bedrock providers, the following environment variables are used as fallbacks when not configured in the admin panel:
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `AWS_REGION` | No | - | AWS region for Bedrock (e.g., `us-east-1`). Used as fallback if not set in provider config |
| `AWS_ACCESS_KEY_ID` | Conditional | - | AWS access key (if not using IAM role) |
| `AWS_SECRET_ACCESS_KEY` | Conditional | - | AWS secret key (if not using IAM role) |
When running on EC2 or ECS with an appropriate IAM role, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not needed — the SDK uses instance credentials automatically.
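A typical fallback configuration for local development (where no IAM role is available) might look like this; the credential values are placeholders:

```shell
AWS_REGION=us-east-1
# Only needed when NOT running under an IAM role:
AWS_ACCESS_KEY_ID=AKIA-EXAMPLE-PLACEHOLDER
AWS_SECRET_ACCESS_KEY=example-secret-placeholder
```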

vLLM Docker Service

If you run vLLM as a Docker service alongside Omni, these environment variables configure the vLLM container itself:
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `VLLM_MODEL` | Yes | - | HuggingFace model ID to load (e.g., `meta-llama/Llama-3.1-8B-Instruct`) |
| `VLLM_API_KEY` | No | - | API key for the vLLM server (if authentication is enabled) |
| `VLLM_MAX_MODEL_LEN` | No | - | Maximum context length in tokens (model default if unset) |
| `VLLM_PORT` | No | `8000` | vLLM inference server port |
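As a sketch, serving the example model from the table above with a capped context window could look like this (the context length is illustrative, not a recommendation):

```shell
VLLM_MODEL=meta-llama/Llama-3.1-8B-Instruct
VLLM_MAX_MODEL_LEN=8192
VLLM_PORT=8000
# Optional, only if your vLLM server enforces authentication:
# VLLM_API_KEY=example-placeholder
```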

[Work In Progress] Batch Embedding Inference (AWS Bedrock)

For large-scale embedding generation using AWS Bedrock batch inference:
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `ENABLE_EMBEDDING_BATCH_INFERENCE` | No | `false` | Enable batch processing for embeddings |
| `EMBEDDING_BATCH_S3_BUCKET` | Conditional | - | S3 bucket for batch files (required if batch enabled) |
| `EMBEDDING_BATCH_BEDROCK_ROLE_ARN` | Conditional | - | IAM role ARN for Bedrock (required if batch enabled) |
| `EMBEDDING_BATCH_MIN_DOCUMENTS` | No | `100` | Minimum documents to trigger batch job |
| `EMBEDDING_BATCH_MAX_DOCUMENTS` | No | `50000` | Maximum documents per batch |
| `EMBEDDING_BATCH_ACCUMULATION_TIMEOUT_SECONDS` | No | `300` | Wait time before starting batch (5 min) |
| `EMBEDDING_BATCH_ACCUMULATION_POLL_INTERVAL` | No | `10` | Interval to check queue (10 sec) |
| `EMBEDDING_BATCH_MONITOR_POLL_INTERVAL` | No | `300` | Interval to check batch status (5 min) |
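Enabling batch inference therefore requires the bucket and role ARN alongside the flag; the bucket name, account ID, and role name below are illustrative placeholders:

```shell
ENABLE_EMBEDDING_BATCH_INFERENCE=true
EMBEDDING_BATCH_S3_BUCKET=omni-embedding-batches-example
EMBEDDING_BATCH_BEDROCK_ROLE_ARN=arn:aws:iam::123456789012:role/example-bedrock-batch-role
# Optional tuning (defaults shown in the table above):
EMBEDDING_BATCH_MIN_DOCUMENTS=100
EMBEDDING_BATCH_ACCUMULATION_TIMEOUT_SECONDS=300
```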

Feature Flags

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `AI_ANSWER_ENABLED` | No | `true` | Enable or disable AI-generated answers in search results |

AI Service Configuration

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `AI_WORKERS` | No | `2` | Number of uvicorn worker processes |
| `MODEL_PATH` | No | `/models` | Directory for model storage |

Conversation Compaction

Controls automatic compaction of long conversations to stay within model context limits.
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `ENABLE_CONVERSATION_COMPACTION` | No | `true` | Enable or disable conversation compaction |
| `MAX_CONVERSATION_INPUT_TOKENS` | No | `150000` | Maximum input tokens before compaction triggers |
| `COMPACTION_RECENT_MESSAGES_COUNT` | No | `20` | Number of recent messages to preserve during compaction |
| `COMPACTION_SUMMARY_MAX_TOKENS` | No | `2000` | Maximum tokens for the compaction summary |
| `COMPACTION_CACHE_TTL_SECONDS` | No | `86400` | Cache TTL for compaction results (default: 24 hours) |
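For a model with a smaller context window, you might lower the trigger threshold and preserve fewer recent messages; the values below are illustrative, not recommendations:

```shell
ENABLE_CONVERSATION_COMPACTION=true
MAX_CONVERSATION_INPUT_TOKENS=60000   # trigger earlier for smaller-context models
COMPACTION_RECENT_MESSAGES_COUNT=10
COMPACTION_SUMMARY_MAX_TOKENS=1500
```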

Searcher Configuration

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `RAG_CONTEXT_WINDOW` | No | `2` | Number of surrounding chunks to fetch in RAG search |
| `SEMANTIC_SEARCH_TIMEOUT_MS` | No | `1000` | Timeout for semantic (vector) search in milliseconds |

Connector Manager

The connector-manager service orchestrates all connector operations including scheduling syncs, health checks, and connector lifecycle management.
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `MAX_CONCURRENT_SYNCS` | No | `10` | Maximum concurrent syncs across all sources |
| `MAX_CONCURRENT_SYNCS_PER_TYPE` | No | `3` | Maximum concurrent syncs per connector type |
| `SCHEDULER_POLL_INTERVAL_SECONDS` | No | `60` | How often the scheduler checks for due syncs |
| `STALE_SYNC_TIMEOUT_MINUTES` | No | `10` | Timeout to mark a sync as stale/failed |
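To throttle syncs on a resource-constrained host, you might lower both concurrency limits (illustrative values, not recommendations):

```shell
MAX_CONCURRENT_SYNCS=4
MAX_CONCURRENT_SYNCS_PER_TYPE=1
SCHEDULER_POLL_INTERVAL_SECONDS=60
```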

Storage Configuration

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `STORAGE_BACKEND` | Yes | `postgres` | Storage backend: `postgres` or `s3` |
| `S3_BUCKET` | Conditional | - | S3 bucket name (required if `STORAGE_BACKEND=s3`) |
| `S3_REGION` | Conditional | - | S3 region (required if `STORAGE_BACKEND=s3`) |
PostgreSQL storage (default):
STORAGE_BACKEND=postgres
# Content stored directly in the database — simplest setup
S3 storage:
STORAGE_BACKEND=s3
S3_BUCKET=omni-content-prod
S3_REGION=us-east-1
# Uses IAM role in AWS, or set AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY

Connector-Specific Configuration

Google Workspace Connector

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `GOOGLE_SYNC_INTERVAL_SECONDS` | No | `86400` | Interval between Google sync runs (default: 24 hours) |
| `GOOGLE_WEBHOOK_URL` | No | - | Public URL for Google Drive change notifications (e.g., `https://yourdomain.com/google-webhook`) |
| `WEBHOOK_RENEWAL_CHECK_INTERVAL_SECONDS` | No | `3600` | How often to check and renew Google Drive webhooks (default: 1 hour) |
| `GOOGLE_MAX_AGE_DAYS` | No | `712` | Maximum age of documents to index (documents older than this are skipped) |
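For example, to sync every 6 hours instead of daily (21600 = 6 × 3600 seconds), with a hypothetical webhook domain:

```shell
GOOGLE_SYNC_INTERVAL_SECONDS=21600
# Must be publicly reachable over HTTPS for Drive change notifications;
# "omni.example.com" is a placeholder domain:
GOOGLE_WEBHOOK_URL=https://omni.example.com/google-webhook
```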

Web Connector

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `WEB_SYNC_INTERVAL_SECONDS` | No | `86400` | Interval between web recrawl runs (default: 24 hours) |

Logging & Monitoring

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `RUST_LOG` | No | `info` | Rust services log level: `trace`, `debug`, `info`, `warn`, `error` |
| `RUST_BACKTRACE` | No | - | Enable Rust backtraces: set to `1` or `full` for debugging |
Log level recommendations:
  • Development: RUST_LOG=debug
  • Production: RUST_LOG=info
  • Troubleshooting: RUST_LOG=trace
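`RUST_LOG` also accepts the standard Rust `env_logger`/`tracing` filter syntax for per-module overrides, assuming Omni's services use those conventions; the crate name below is hypothetical:

```shell
# Keep the default at info but debug a single (hypothetical) crate:
RUST_LOG=info,omni_searcher=debug
RUST_BACKTRACE=1
```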

Telemetry (OpenTelemetry)

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `OTEL_EXPORTER_OTLP_ENDPOINT` | No | - | OTLP collector endpoint (empty = telemetry disabled) |
| `OTEL_DEPLOYMENT_ID` | No | - | Deployment identifier for tracing |
| `OTEL_DEPLOYMENT_ENVIRONMENT` | No | `production` | Environment: `development`, `staging`, `production` |
| `SERVICE_VERSION` | No | `0.1.0` | Service version for tracing |
Example with Honeycomb:
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io
OTEL_EXPORTER_OTLP_HEADERS=x-honeycomb-team=your-api-key
OTEL_DEPLOYMENT_ID=omni-prod-us-east-1
OTEL_DEPLOYMENT_ENVIRONMENT=production