

After deploying Omni via Docker Compose or AWS Terraform, follow these steps to get your instance fully operational.

Step 1: Access the Web UI and Create Your Admin Account

Navigate to your Omni URL (e.g., https://<your_domain_name>) and sign up. The first user to register automatically becomes the admin.
Admin users have access to Settings in the sidebar, where you can configure LLM providers, manage connectors, and invite users.

Step 2: Configure an LLM Provider

LLM providers are configured through the admin panel at Settings > LLM Providers. You can add multiple providers, and users can select which model to use per chat.

Add a Provider

  1. Go to Settings > LLM Providers
  2. Click Connect next to the provider you want to add
  3. Enter the API key/credentials
  4. Click Connect to save; the provider’s predefined models become available automatically
You can add multiple providers simultaneously. For example, use Anthropic for complex reasoning and a local llama.cpp model (via the OpenAI-compatible provider) for sensitive queries.

AWS Bedrock Notes

When running on AWS (ECS/EC2) with an appropriate IAM role, no access keys are needed — the SDK uses instance credentials automatically. Otherwise, set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables.
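When static credentials are required, they can be supplied via the deployment’s environment file. The variable names below are the standard AWS SDK credential variables named above; the values are placeholders, and AWS_SESSION_TOKEN is a standard SDK variable that is only needed for temporary STS credentials (it is not mentioned in the Omni docs):

```shell
# .env fragment for static AWS credentials (only when no IAM role is attached).
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=your-secret-access-key
# Optional, for temporary credentials only:
# AWS_SESSION_TOKEN=your-session-token
```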

Vertex AI Notes

Vertex AI uses Google Cloud Application Default Credentials (ADC). In the admin panel, configure the GCP region and project_id. When running on GCP (GKE, Cloud Run, Compute Engine) with an attached service account, authentication is automatic. For other environments, set the GOOGLE_APPLICATION_CREDENTIALS environment variable pointing to a service account key JSON file.
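For non-GCP hosts, this might look like the following sketch. The mount path /secrets/gcp-sa.json and the volume mapping are illustrative assumptions, not Omni defaults; only the GOOGLE_APPLICATION_CREDENTIALS variable itself comes from the standard ADC mechanism:

```shell
# Hypothetical: make a service-account key visible inside the container,
# then point ADC at it. Adjust paths and service names to your deployment.
# In docker-compose, this corresponds to a read-only volume mount such as:
#   volumes:
#     - /path/on/host/gcp-sa.json:/secrets/gcp-sa.json:ro
export GOOGLE_APPLICATION_CREDENTIALS=/secrets/gcp-sa.json
```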

Azure AI Foundry Notes

Azure AI Foundry uses DefaultAzureCredential (Managed Identity). In the admin panel, configure the endpoint_url of your Azure AI Foundry deployment. When running on Azure (AKS, Container Apps) with Managed Identity, authentication is automatic.

Self-hosted Models (OpenAI-compatible)

Omni ships a docker-compose.local-inference.yml overlay that starts a local llama.cpp server for LLMs and, optionally, a HuggingFace TEI server for embeddings. Start them with:
omni-compose -f docker/docker-compose.local-inference.yml up -d
Then in the admin panel:
  • Add an OpenAI-compatible LLM provider pointing to http://llama-cpp:${LOCAL_INFERENCE_MODEL_PORT}
  • (Optional) Add the Local embedding provider pointing to http://embeddings:${LOCAL_EMBEDDINGS_PORT}/v1
The local LLM and embedding containers can be enabled independently — you can mix cloud LLMs with a local embedding model, or vice versa. See LLM Provider Configuration for all environment variable details.
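Once the overlay is up, you can smoke-test the llama.cpp server from another container on the compose network. The /v1/models path is the standard OpenAI-compatible model-listing endpoint; whether it is reachable under exactly this hostname and port variable depends on your overlay configuration:

```shell
# Hypothetical smoke test against the local inference server.
# A JSON model list in the response means the OpenAI-compatible
# endpoint is up and ready to be registered in the admin panel.
curl -s http://llama-cpp:${LOCAL_INFERENCE_MODEL_PORT}/v1/models
```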

Step 3: Configure an Embedding Provider

Embeddings power Omni’s semantic search. Like LLM providers, embedding providers are configured through the admin panel.

Add a Provider

  1. Go to Settings > Embedding Providers
  2. Click Connect next to the provider you want to use
  3. Enter the API key/credentials
  4. Click Connect to save
Changing the embedding provider after documents have been indexed will require a full re-index, since different providers produce incompatible vector representations.

Step 4: Enable and Configure Connectors

Connectors sync data from your external tools into Omni.

Enable Connectors

Set the ENABLED_CONNECTORS environment variable to a comma-separated list of the connectors you want to run:
# Example: enable Google Workspace, Slack, and Atlassian
ENABLED_CONNECTORS=google,slack,atlassian
Available connector names: google, slack, atlassian, github, hubspot, notion, filesystem, fireflies, web, microsoft, imap, clickup, linear, nextcloud, paperless.

Restart services after changing this variable (omni-compose up -d).

Configure Each Connector

Once enabled, connectors are configured in the admin panel:
  1. Go to Settings > Integrations
  2. Select the connector you want to configure
  3. Follow the setup instructions (typically involves providing service credentials or API keys)
  4. Start the initial sync

Google Workspace

Drive, Docs, Sheets, Gmail

Microsoft 365

OneDrive, SharePoint, Outlook, Calendar

Slack

Messages, threads, files

Atlassian

Jira issues, Confluence pages

All Connectors

Full list and setup guides

Step 5: Verify Your Setup

Once you’ve configured at least one LLM provider, an embedding provider, and a connector, verify everything is working.

Check Connector Sync Status

Go to Settings > Integrations and confirm your connectors show a successful sync status. The initial sync may take a few minutes depending on the volume of data.

Test Search

  1. Navigate to the Search page
  2. Enter a query related to content from your connected sources
  3. Verify that results appear and are relevant

Test the AI Assistant

  1. Navigate to the Chat page
  2. Ask a question about information in your connected sources
  3. Verify the assistant responds with an answer that cites your documents
If search returns results but the AI assistant doesn’t work, double-check your LLM provider configuration in Settings > LLM Providers.

Next Steps