After deploying Omni via Docker Compose or AWS Terraform, follow these steps to get your instance fully operational.
Documentation Index
Fetch the complete documentation index at: https://docs.getomni.co/llms.txt
Use this file to discover all available pages before exploring further.
Step 1: Access the Web UI and Create Your Admin Account
Navigate to your Omni URL (e.g., https://<your_domain_name>) and sign up. The first user to register automatically becomes the admin.
Admin users have access to Settings in the sidebar, where you can configure LLM providers, manage connectors, and invite users.
Step 2: Configure an LLM Provider
LLM providers are configured through the admin panel at Settings > LLM Providers. You can add multiple providers, and users can select which model to use per chat.
Add a Provider
- Go to Settings > LLM Providers
- Click Connect next to the provider you want to use
- Enter the API key/credentials
- Click Connect to save; the provider’s predefined models become available automatically
AWS Bedrock Notes
When running on AWS (ECS/EC2) with an appropriate IAM role, no access keys are needed — the SDK uses instance credentials automatically. Otherwise, set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables.
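In a Docker Compose deployment, one way to supply these is through the project's .env file (a sketch; the values are placeholders, and they are only needed when no IAM role is attached):

```shell
# .env — static AWS credentials for Bedrock when instance credentials
# are not available (placeholders; substitute your own values)
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
```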
Vertex AI Notes
Vertex AI uses Google Cloud Application Default Credentials (ADC). In the admin panel, configure the GCP region and project_id. When running on GCP (GKE, Cloud Run, Compute Engine) with an attached service account, authentication is automatic. For other environments, set the GOOGLE_APPLICATION_CREDENTIALS environment variable pointing to a service account key JSON file.
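Outside GCP, one way to wire this up in a containerized deployment (a sketch; the mount path and file name are assumptions) is to mount the key file into the Omni container and point the variable at it:

```shell
# Place the service-account key on the host and mount it into the Omni
# container (e.g. via a volume in docker-compose.yml), then set:
export GOOGLE_APPLICATION_CREDENTIALS=/secrets/gcp-sa-key.json
```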
Azure AI Foundry Notes
Azure AI Foundry uses DefaultAzureCredential (Managed Identity). In the admin panel, configure the endpoint_url of your Azure AI Foundry deployment. When running on Azure (AKS, Container Apps) with Managed Identity, authentication is automatic.
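Outside Azure, DefaultAzureCredential can also fall back to a service principal supplied via the standard azure-identity environment variables (a sketch; the values are placeholders, and this assumes Omni passes these variables through to the SDK):

```shell
# Standard environment-credential variables read by DefaultAzureCredential
# (placeholders; substitute your own service principal's values)
export AZURE_TENANT_ID=your-tenant-id
export AZURE_CLIENT_ID=your-app-registration-client-id
export AZURE_CLIENT_SECRET=your-client-secret
```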
Self-hosted Models (OpenAI-compatible)
Omni ships a docker-compose.local-inference.yml overlay that starts a local llama.cpp server for LLMs and, optionally, a Hugging Face TEI server for embeddings. Once the overlay is running:
- Add an OpenAI-compatible LLM provider pointing to http://llama-cpp:${LOCAL_INFERENCE_MODEL_PORT}
- (Optional) Add the Local embedding provider pointing to http://embeddings:${LOCAL_EMBEDDINGS_PORT}/v1
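The overlay can be started with a standard Compose file merge (a sketch; the base file name docker-compose.yml is an assumption):

```shell
# Merge the base stack with the local-inference overlay and start both
docker compose -f docker-compose.yml -f docker-compose.local-inference.yml up -d
```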
Step 3: Configure an Embedding Provider
Embeddings power Omni’s semantic search. Like LLM providers, embedding providers are configured through the admin panel.
Add a Provider
- Go to Settings > Embedding Providers
- Click Connect next to the provider you want to use
- Enter the API key/credentials
- Click Connect to save
Step 4: Enable and Configure Connectors
Connectors sync data from your external tools into Omni.
Enable Connectors
Set the ENABLED_CONNECTORS environment variable to a comma-separated list of the connectors you want to run:
google, slack, atlassian, github, hubspot, notion, filesystem, fireflies, web, microsoft, imap, clickup, linear, nextcloud, paperless
Restart services after changing this variable (omni-compose up -d).
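In a Docker Compose deployment this is typically a line in the project's .env file (a sketch; which connectors you enable is up to you):

```shell
# .env — enable only the connectors you need, comma-separated
ENABLED_CONNECTORS=google,slack,github
```

After editing the file, restart the stack (omni-compose up -d, as above) so the new value takes effect.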
Configure Each Connector
Once enabled, connectors are configured in the admin panel:
- Go to Settings > Integrations
- Select the connector you want to configure
- Follow the setup instructions (typically involves providing service credentials or API keys)
- Start the initial sync
- Google Workspace: Drive, Docs, Sheets, Gmail
- Microsoft 365: OneDrive, SharePoint, Outlook, Calendar
- Slack: Messages, threads, files
- Atlassian: Jira issues, Confluence pages
- All Connectors: Full list and setup guides
Step 5: Verify Your Setup
Once you’ve configured at least one LLM provider, an embedding provider, and a connector, verify everything is working.
Check Connector Sync Status
Go to Settings > Integrations and confirm your connectors show a successful sync status. The initial sync may take a few minutes depending on the volume of data.
Test Search
- Navigate to the Search page
- Enter a query related to content from your connected sources
- Verify that results appear and are relevant
Test the AI Assistant
- Navigate to the Chat page
- Ask a question about information in your connected sources
- Verify the assistant responds with an answer that cites your documents
Next Steps
- Configuration Reference — Full list of environment variables and options
- User Management — Invite team members and manage roles
- Connector Management — Monitor sync status and schedules