Channel      Revision  Published     Runs on
latest/edge  26        11 Mar 2026   Ubuntu 24.04

juju deploy openclaw --channel edge

Configuration options:

  • ai-api-key | string

    API key for the selected AI provider. Required for most providers:

    • anthropic: Anthropic API key
    • openai: OpenAI API key
    • google: Google Gemini API key
    • opencode: OpenCode API key
    • github-copilot: GitHub personal access token
    • openrouter: OpenRouter API key
    • xai: xAI API key
    • groq: Groq API key
    • cerebras: Cerebras API key
    • mistral: Mistral API key
    • zai: Z.AI API key
    • vercel-ai-gateway: Vercel AI Gateway API key

    Not required for: ollama (local), bedrock (AWS credentials), openai-codex (OAuth).

    Note: API keys are stored securely in auth-profiles.json, not in environment variables.
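    A minimal example, assuming the application is deployed under the name openclaw as in the deploy command above (the key value is a placeholder):

    ```shell
    # Select Anthropic and supply its API key; the charm stores the key in
    # auth-profiles.json, not in an environment variable.
    juju config openclaw ai-provider=anthropic ai-api-key=<your-anthropic-key>
    ```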

  • ai-base-url | string

    Custom API base URL for OpenAI-compatible providers. Use this to connect to local AI services like LM Studio, vLLM, FastChat, etc. Examples:

    • LM Studio: http://localhost:1234/v1
    • vLLM: http://localhost:8000/v1
    • Text Generation WebUI: http://localhost:5000/v1

    Leave empty to use default provider endpoints.
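    For example, to point the OpenAI-compatible provider at a local LM Studio instance (a sketch; adjust host and port to your setup):

    ```shell
    # Route OpenAI-compatible requests to LM Studio's local endpoint.
    juju config openclaw ai-provider=openai ai-base-url=http://localhost:1234/v1
    ```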

  • ai-context-window | int

    Override the context window size (in tokens) for the primary AI model. Set to a positive integer (e.g. 8192) to cap memory usage on low-VRAM GPUs.

    Use case: When using Ollama via the openai provider (ai-provider=openai, ai-base-url=http://.../v1), OpenClaw cannot auto-discover the model's context window. Setting this prevents OpenClaw from defaulting to a huge context that causes out-of-memory crashes on constrained hardware.

    Only effective when ai-provider is NOT 'ollama' (i.e. OpenAI-compatible mode). In native ollama mode, OpenClaw auto-discovers and enforces the model's own context window, so this setting has no effect.

    0 = use OpenClaw's default (no override).
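    A sketch of the low-VRAM use case above, assuming Ollama is serving its default OpenAI-compatible endpoint on port 11434:

    ```shell
    # Cap the context window at 8192 tokens to avoid out-of-memory crashes
    # on constrained hardware.
    juju config openclaw ai-provider=openai \
      ai-base-url=http://localhost:11434/v1 \
      ai-context-window=8192
    ```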

  • ai-model | string

    AI model(s) to use. Supports:

    • Single model: claude-opus-4-5
    • Multiple models (comma-separated): gemini-3-flash-preview,gemini-3-pro-preview,gemini-2.5-pro

    When multiple models are provided:

    • First model becomes the primary (system default)
    • Remaining models are added as fallbacks
    • All models use the configured ai-provider

    Models are stored as provider/model format (e.g., anthropic/claude-opus-4-5).

    For models from different providers, use separate slots (ai0-provider/ai0-model, etc.) with each provider's API key.

    Examples:

    • Single: claude-opus-4-5
    • Multiple same provider: gpt-4,gpt-3.5-turbo,gpt-4o-mini
    • GitHub Copilot aggregator: gemini-3-flash-preview,claude-haiku-4.5,gpt-4
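    The multi-model case can be set in one step; for example (model names are illustrative):

    ```shell
    # The first model becomes the primary; the rest become fallbacks,
    # all on the configured provider.
    juju config openclaw ai-provider=openai ai-model=gpt-4,gpt-3.5-turbo,gpt-4o-mini
    ```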

  • ai-provider | string

    Primary AI model provider. Supported providers:

    • anthropic: Anthropic Claude models
    • openai: OpenAI GPT models
    • openai-codex: OpenAI Codex (requires OAuth)
    • google: Google Gemini models
    • opencode: OpenCode Zen models
    • github-copilot: GitHub Copilot/Models API
    • openrouter: OpenRouter aggregator
    • xai: xAI models
    • groq: Groq models
    • cerebras: Cerebras models
    • mistral: Mistral AI models
    • zai: Z.AI (GLM) models
    • vercel-ai-gateway: Vercel AI Gateway
    • ollama: Local Ollama models (no API key needed)
    • bedrock: AWS Bedrock (uses AWS credentials)

  • ai-providers-order | string

    Comma-separated list of AI model slots to prioritize (e.g. "3,2,1,"). Empty string represents the primary ai-model slot. By default (empty), the order is: ai-model, ai0-model, ai1-model, ..., ai9-model.

    Slots not mentioned in the list will follow the natural order after the prioritized slots.

    Example: "3,2,1," will result in the following order: ai3-model, ai2-model, ai1-model, ai-model, ai0-model, ai4-model, ai5-model, ..., ai9-model.
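    The ordering rules above can be sketched as a small helper (hypothetical, not part of the charm) that prints the effective slot order for a given ai-providers-order value:

    ```shell
    # Hypothetical sketch of the documented ordering rules.
    # An empty entry in the list names the primary ai-model slot.
    resolve_order() {
        printf '%s\n' "$1" | awk -F',' '{
            sep = ""
            # Prioritized slots first, in the order given.
            for (i = 1; i <= NF; i++) {
                name = ($i == "") ? "ai-model" : "ai" $i "-model"
                if (!(name in seen)) { seen[name] = 1; out = out sep name; sep = "," }
            }
            # Remaining slots follow in natural order: ai-model, ai0-model .. ai9-model.
            if (!("ai-model" in seen)) { out = out sep "ai-model"; sep = "," }
            for (j = 0; j <= 9; j++) {
                name = "ai" j "-model"
                if (!(name in seen)) { out = out sep name; sep = "," }
            }
            print out
        }'
    }

    resolve_order "3,2,1,"
    ```

    For the documented example "3,2,1,", this prints ai3-model, ai2-model, ai1-model, ai-model, ai0-model, then ai4-model through ai9-model.
    
    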

  • ai0-api-key | string

    API key for AI model slot 0

  • ai0-base-url | string

    Custom API base URL for AI model slot 0 (for OpenAI-compatible local services)

  • ai0-context-window | int

    Override the context window size (in tokens) for AI model slot 0. See ai-context-window for full details. 0 = no override.

  • ai0-model | string

    AI model(s) for slot 0. Supports single or comma-separated models. All models in this slot use ai0-provider. Example: gpt-4,gpt-3.5-turbo,gpt-4o-mini

  • ai0-provider | string

    AI provider for model slot 0. See ai-provider for supported providers.
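    Together, the ai0-* options let a second provider act as a fallback to the primary model. A sketch with placeholder keys:

    ```shell
    # Anthropic as primary, with a Google Gemini fallback in slot 0.
    juju config openclaw \
      ai-provider=anthropic ai-model=claude-opus-4-5 ai-api-key=<anthropic-key> \
      ai0-provider=google ai0-model=gemini-2.5-pro ai0-api-key=<google-key>
    ```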

  • ai1-api-key | string

    API key for AI model slot 1

  • ai1-base-url | string

    Custom API base URL for AI model slot 1 (for OpenAI-compatible local services)

  • ai1-context-window | int

    Override the context window size (in tokens) for AI model slot 1. See ai-context-window for full details. 0 = no override.

  • ai1-model | string

    AI model(s) for slot 1 (supports comma-separated)

  • ai1-provider | string

    AI provider for model slot 1. See ai-provider for supported providers.

  • ai2-api-key | string

    API key for AI model slot 2

  • ai2-base-url | string

    Custom API base URL for AI model slot 2 (for OpenAI-compatible local services)

  • ai2-context-window | int

    Override the context window size (in tokens) for AI model slot 2. See ai-context-window for full details. 0 = no override.

  • ai2-model | string

    AI model(s) for slot 2 (supports comma-separated)

  • ai2-provider | string

    AI provider for model slot 2. See ai-provider for supported providers.

  • ai3-api-key | string

    API key for AI model slot 3

  • ai3-base-url | string

    Custom API base URL for AI model slot 3 (for OpenAI-compatible local services)

  • ai3-context-window | int

    Override the context window size (in tokens) for AI model slot 3. See ai-context-window for full details. 0 = no override.

  • ai3-model | string

    AI model(s) for slot 3 (supports comma-separated)

  • ai3-provider | string

    AI provider for model slot 3. See ai-provider for supported providers.

  • ai4-api-key | string

    API key for AI model slot 4

  • ai4-base-url | string

    Custom API base URL for AI model slot 4 (for OpenAI-compatible local services)

  • ai4-context-window | int

    Override the context window size (in tokens) for AI model slot 4. See ai-context-window for full details. 0 = no override.

  • ai4-model | string

    AI model(s) for slot 4 (supports comma-separated)

  • ai4-provider | string

    AI provider for model slot 4. See ai-provider for supported providers.

  • ai5-api-key | string

    API key for AI model slot 5

  • ai5-base-url | string

    Custom API base URL for AI model slot 5 (for OpenAI-compatible local services)

  • ai5-context-window | int

    Override the context window size (in tokens) for AI model slot 5. See ai-context-window for full details. 0 = no override.

  • ai5-model | string

    AI model(s) for slot 5 (supports comma-separated)

  • ai5-provider | string

    AI provider for model slot 5. See ai-provider for supported providers.

  • ai6-api-key | string

    API key for AI model slot 6

  • ai6-base-url | string

    Custom API base URL for AI model slot 6 (for OpenAI-compatible local services)

  • ai6-context-window | int

    Override the context window size (in tokens) for AI model slot 6. See ai-context-window for full details. 0 = no override.

  • ai6-model | string

    AI model(s) for slot 6 (supports comma-separated)

  • ai6-provider | string

    AI provider for model slot 6. See ai-provider for supported providers.

  • ai7-api-key | string

    API key for AI model slot 7

  • ai7-base-url | string

    Custom API base URL for AI model slot 7 (for OpenAI-compatible local services)

  • ai7-context-window | int

    Override the context window size (in tokens) for AI model slot 7. See ai-context-window for full details. 0 = no override.

  • ai7-model | string

    AI model(s) for slot 7 (supports comma-separated)

  • ai7-provider | string

    AI provider for model slot 7. See ai-provider for supported providers.

  • ai8-api-key | string

    API key for AI model slot 8

  • ai8-base-url | string

    Custom API base URL for AI model slot 8 (for OpenAI-compatible local services)

  • ai8-context-window | int

    Override the context window size (in tokens) for AI model slot 8. See ai-context-window for full details. 0 = no override.

  • ai8-model | string

    AI model(s) for slot 8 (supports comma-separated)

  • ai8-provider | string

    AI provider for model slot 8. See ai-provider for supported providers.

  • ai9-api-key | string

    API key for AI model slot 9

  • ai9-base-url | string

    Custom API base URL for AI model slot 9 (for OpenAI-compatible local services)

  • ai9-context-window | int

    Override the context window size (in tokens) for AI model slot 9. See ai-context-window for full details. 0 = no override.

  • ai9-model | string

    AI model(s) for slot 9 (supports comma-separated)

  • ai9-provider | string

    AI provider for model slot 9. See ai-provider for supported providers.

  • auto-update | boolean

    Automatically update OpenClaw to the latest stable version on upgrade-charm

  • control-ui-allowed-origins | string

    Comma-separated list of full origins allowed to access the OpenClaw Control UI from non-loopback addresses (e.g. "http://10.0.0.5:18789,https://openclaw.example.com"). Each entry must be a complete origin: scheme + host + optional port, no trailing slash. Required when gateway-bind is not loopback and you want to use the Control UI remotely. Loopback origins (127.0.0.1, localhost) are always auto-approved and do not need to be listed. Maps to gateway.controlUi.allowedOrigins in openclaw.json.
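    For example, to expose the Gateway on the LAN and allow the two origins from the description (addresses are illustrative):

    ```shell
    # Quote the value so the comma-separated origins reach the charm intact.
    juju config openclaw gateway-bind=lan \
      control-ui-allowed-origins='http://10.0.0.5:18789,https://openclaw.example.com'
    ```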

  • discord-bot-token | string

    Discord bot token from Discord Developer Portal. Leave empty to disable Discord messaging.

  • dm-policy | string

    Default: pairing

    DM access policy: 'pairing' (require pairing code), 'open' (auto-respond), or 'closed' (reject all DMs)

  • dm-scope | string

    Default: main

    DM session scope for isolating direct message conversations:

    • 'main' (default): All DMs share the main session for continuity across devices/channels
    • 'per-peer': Isolate sessions by sender ID across channels
    • 'per-channel-peer': Isolate by channel + sender (recommended for multi-user inboxes)
    • 'per-account-channel-peer': Isolate by account + channel + sender (recommended for multi-account inboxes)

    Security Warning: If your agent receives DMs from multiple people, use 'per-channel-peer' or 'per-account-channel-peer' to prevent conversation context leakage between users.
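    For a multi-user inbox, the recommended isolation can be enabled with:

    ```shell
    # Isolate DM sessions by channel + sender to prevent context leakage.
    juju config openclaw dm-scope=per-channel-peer
    ```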

  • gateway-bind | string

    Default: loopback

    Gateway bind mode: 'loopback' (127.0.0.1 only), 'lan' (all interfaces), or specific IP address.

    IMPORTANT: Multi-unit deployments REQUIRE 'lan' mode for Node connectivity. The charm will block deployment if loopback mode is used with multiple units.
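    A multi-unit deployment therefore needs 'lan' mode from the start, for example:

    ```shell
    # Three units; keeping the default loopback bind would block this deployment.
    juju deploy openclaw --channel edge -n 3 --config gateway-bind=lan
    ```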

  • gateway-port | int

    Default: 18789

    Port for the OpenClaw Gateway WebSocket/HTTP server

  • install-method | string

    Default: npm

    Installation method: 'npm' (global npm install), 'pnpm' (uses npm), 'bun' (global bun install), or 'source' (build from git source)

  • install-pkgs | string

    Comma-separated list of packages to install. Supported values: 'chrome', 'chromium', 'firefox', 'tailscale', 'homebrew'. Example: "chrome,tailscale,homebrew". Can be changed post-deployment via 'juju config'.

  • line-channel-access-token | string

    LINE Messaging API channel access token from LINE Developers Console. Leave empty to disable LINE messaging.

  • line-channel-secret | string

    LINE Messaging API channel secret from LINE Developers Console. Required when line-channel-access-token is configured.

  • log-level | string

    Default: info

    Log level (debug, info, warn, error)

  • manual | boolean

    Manual configuration mode. When enabled (manual=true):

    • Charm will NOT auto-generate openclaw.json or node.json
    • User is responsible for creating and managing OpenClaw configuration
    • Charm only installs OpenClaw and manages systemd services
    • Peer relations still work for multi-unit deployments
    • Environment file (.openclaw/environment) is still created for systemd

    When disabled (manual=false, default):

    • Charm auto-generates configuration from Juju config options
    • AI providers, channels, and settings are managed by the charm
    • Configuration updates on 'juju config' changes
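    Switching modes is a single config change, for example:

    ```shell
    # Hand configuration over to the operator; the charm keeps managing
    # installation and systemd services but stops writing openclaw.json.
    juju config openclaw manual=true
    ```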

  • node-version | string

    Default: 24

    Node.js major version to install (minimum 22 required)

  • sandbox-mode | string

    Default: non-main

    Sandbox mode for non-main sessions: 'all' (sandbox everything), 'non-main' (sandbox groups/channels), or 'none' (no sandboxing)

  • slack-app-token | string

    Slack app token (xapp-...) from Slack App settings. Required when slack-bot-token is configured.

  • slack-bot-token | string

    Slack bot token (xoxb-...) from Slack App settings. Leave empty to disable Slack messaging.

  • telegram-bot-token | string

    Telegram bot token from @BotFather (e.g., 123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11). Leave empty to disable Telegram messaging.

  • version | string

    Default: latest

    OpenClaw version to install (e.g., 'latest', '2026.1.29', or git commit)