Step 1: Access the Web UI and Create Your Admin Account
Navigate to your Omni URL (e.g., https://<your_domain_name>) and sign up. The first user to register automatically becomes the admin.
Admin users have access to Settings in the sidebar, where you can configure LLM providers, manage connectors, and invite users.
Step 2: Configure an LLM Provider
LLM providers are configured in the admin panel at Settings > LLM Providers. You can add multiple providers, and users can select which model to use per chat.
Add a Provider
- Go to Settings > LLM Providers
- Click Connect next to the provider you want to use
- Enter the API key/credentials
- Click Connect to save; the provider's predefined models become available automatically
AWS Bedrock Notes
When running on AWS (ECS/EC2) with an appropriate IAM role, no access keys are needed; the SDK uses the instance credentials automatically. Otherwise, set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables.
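If you are not running under an IAM role, the two variables above can be set in your deployment environment before starting the stack. A minimal sketch with placeholder values:

```shell
# Only needed when Omni is NOT running with an IAM role.
# Placeholder values shown; substitute your real AWS credentials.
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
```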
vLLM Notes
If you're running vLLM as part of the Docker Compose stack, enable GPU support first. Within the stack, the provider's base URL is http://vllm:8000/v1.
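As a sketch of what enabling GPU support can look like (assuming a Docker Compose deployment with the NVIDIA container toolkit installed, and a service named `vllm`; adjust names to match your stack):

```yaml
# Hypothetical compose override granting the vllm service GPU access
services:
  vllm:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```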
See LLM Provider Configuration for all environment variable details.
Step 3: Configure an Embedding Provider
Embeddings power Omni's semantic search. Unlike LLM providers, embedding configuration is done via environment variables, not the admin panel: set the embedding provider's variables in your deployment environment, then restart your services for the change to take effect.
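A minimal sketch of environment-variable configuration. The variable names below are illustrative only, not Omni's actual names; see the Configuration Reference for the real ones:

```shell
# ILLUSTRATIVE variable names -- check the Configuration Reference
# for the actual names your deployment expects.
export EMBEDDING_PROVIDER="openai"
export EMBEDDING_API_KEY="<your-api-key>"
```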
Step 4: Enable and Configure Connectors
Connectors sync data from your external tools into Omni.
Enable Connectors
Set the ENABLED_CONNECTORS environment variable to a comma-separated list of the connectors you want to run:
google, slack, atlassian, github, hubspot, notion, filesystem, fireflies, web
Restart services after changing this variable (same process as Step 3).
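For example, to run only a subset of the connectors listed above:

```shell
# Comma-separated list of connector names (from the list above);
# omit any connector you don't want to run.
export ENABLED_CONNECTORS="google,slack,github"
```

After changing the variable, restart your services so the new value is picked up (in a Docker Compose deployment, recreating the containers, e.g. with `docker compose up -d`, applies the new environment).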
Configure Each Connector
Once enabled, connectors are configured in the admin panel:
- Go to Settings > Integrations
- Select the connector you want to configure
- Follow the setup instructions (typically involves providing service credentials or API keys)
- Start the initial sync
- Google Workspace: Drive, Docs, Sheets, Gmail
- Slack: messages, threads, files
- Atlassian: Jira issues, Confluence pages
- All Connectors: full list and setup guides
Step 5: Verify Your Setup
Once you've configured at least one LLM provider, an embedding provider, and a connector, verify everything is working.
Check Connector Sync Status
Go to Settings > Integrations and confirm your connectors show a successful sync status. The initial sync may take a few minutes depending on the volume of data.
Test Search
- Navigate to the Search page
- Enter a query related to content from your connected sources
- Verify that results appear and are relevant
Test the AI Assistant
- Navigate to the Chat page
- Ask a question about information in your connected sources
- Verify the assistant responds with an answer that cites your documents
Next Steps
- Configuration Reference — Full list of environment variables and options
- User Management — Invite team members and manage roles
- Connector Management — Monitor sync status and schedules