Self-Hosting an AI Assistant with OpenClaw and Mission Control

Feb 23, 2026

Why Self-Host an AI Assistant?

Cloud-hosted AI assistants are convenient, but they come with trade-offs: your conversations flow through third-party servers, you’re locked into a single provider’s UI, and you have limited control over how the system behaves. Self-hosting flips the equation - your data stays on your infrastructure, you pick the models, and you control the experience end to end.

This post walks through setting up OpenClaw with Mission Control, an open-source stack for running and governing AI assistants on your own hardware.

What Is OpenClaw?

OpenClaw is an open-source AI assistant gateway that connects to over 20 messaging platforms - Slack, Discord, Microsoft Teams, WhatsApp, Telegram, and more. It acts as a unified layer between your users and your AI backend, handling message routing, conversation state, and platform-specific formatting.

The key idea is local-first: the gateway runs on your infrastructure, conversations are stored in your database, and you choose which LLM provider (or local model) handles the inference. You get the convenience of a multi-platform chatbot with the privacy of self-hosting.

Core features include:

  • Multi-platform messaging integration via a plugin architecture
  • Configurable LLM backends (OpenAI, Anthropic, local models via Ollama)
  • Conversation history and context management
  • Extensible plugin system for custom tools and integrations

What Is Mission Control?

If OpenClaw is the engine, Mission Control is the dashboard. It’s an operations and governance layer that gives you visibility into what your AI assistants are doing and control over how they behave.

Mission Control provides:

  • Agent management - register, configure, and monitor multiple AI agents from a single interface
  • Approval workflows - require human approval for sensitive actions before the agent executes them
  • Activity visibility - real-time logs of agent conversations, tool usage, and decision paths
  • Multi-agent orchestration - coordinate multiple specialized agents that handle different domains or tasks

This matters when you’re running AI assistants in a team or organization. You need to know what the agents are saying, catch mistakes before they reach users, and maintain audit trails for compliance.

Quick Setup

The fastest way to get started is the one-liner install script. This pulls down the Docker images and sets up the default configuration:

curl -fsSL https://raw.githubusercontent.com/openclaw/openclaw/main/install.sh | bash

This will create an openclaw directory with the Docker Compose configuration, default environment variables, and a startup script. After the install completes:

cd openclaw
cp .env.example .env
# Edit .env with your LLM API keys and configuration
docker compose up -d

The web UI will be available at http://localhost:3000 and Mission Control at http://localhost:3001.

Manual Docker Setup

If you prefer more control, here’s the step-by-step setup. First, create your project directory:

mkdir openclaw && cd openclaw

Create a docker-compose.yml:

services:
  gateway:
    image: openclaw/gateway:latest
    ports:
      - '3000:3000'
    environment:
      - DATABASE_URL=postgresql://openclaw:openclaw@db:5432/openclaw
      - LLM_PROVIDER=${LLM_PROVIDER:-anthropic}
      - LLM_API_KEY=${LLM_API_KEY}
      - MISSION_CONTROL_URL=http://mission-control:3001
    depends_on:
      - db
    restart: unless-stopped

  mission-control:
    image: openclaw/mission-control:latest
    ports:
      - '3001:3001'
    environment:
      - DATABASE_URL=postgresql://openclaw:openclaw@db:5432/openclaw
      - GATEWAY_URL=http://gateway:3000
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=openclaw
      - POSTGRES_PASSWORD=openclaw
      - POSTGRES_DB=openclaw
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  pgdata:

Create your .env file:

# LLM Configuration
LLM_PROVIDER=anthropic
LLM_API_KEY=your-api-key-here

# Optional: specify a model
LLM_MODEL=claude-sonnet-4-6

# Optional: configure local model via Ollama
# LLM_PROVIDER=ollama
# OLLAMA_BASE_URL=http://host.docker.internal:11434
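A missing variable here tends to surface as a cryptic container crash rather than a clear error, so it's worth sanity-checking the .env file before starting the stack. Here's a minimal sketch - the check_env function and the variable list are illustrative, not an official OpenClaw tool:

```shell
# Check that the .env file defines the variables the compose file
# references. Illustrative helper, not part of OpenClaw itself.
check_env() {
  envfile="${1:-.env}"
  missing=0
  for var in LLM_PROVIDER LLM_API_KEY; do
    # Look for an uncommented assignment at the start of a line.
    if ! grep -q "^${var}=" "$envfile" 2>/dev/null; then
      echo "missing: $var"
      missing=1
    fi
  done
  return "$missing"
}
```

Run check_env from the project directory; it exits non-zero if a variable is absent, so it can gate docker compose up in a wrapper script.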

Then bring everything up:

docker compose up -d
docker compose logs -f  # Watch the startup logs

Connecting Messaging Services

OpenClaw’s gateway architecture uses a plugin system for messaging integrations. Each platform connection is configured through environment variables or the Mission Control UI.

For example, to connect Slack, you’d register a Slack app, configure the bot token and signing secret in your environment, and enable the Slack plugin. The gateway handles the webhook endpoints, message formatting, and conversation threading automatically.
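In environment-variable form, a Slack connection might look something like this - the variable names below are illustrative, so check the Slack plugin's documentation for the exact keys it expects:

```shell
# Illustrative only - the actual variable names are defined by the
# Slack plugin's documentation, not guaranteed by this post.
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_SIGNING_SECRET=your-signing-secret
PLUGINS_ENABLED=slack
```

The bot token and signing secret come from the Slack app you register; the gateway uses them to authenticate outbound messages and verify inbound webhooks.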

The same pattern applies to every supported platform - configure credentials, enable the plugin, and the gateway bridges messages between the platform and your LLM backend. This means you can run a single AI assistant that’s reachable on Slack, Discord, and Teams simultaneously, with shared conversation context.

Governance Features

This is where Mission Control earns its name. In a production deployment, you don’t want AI agents operating as black boxes. Mission Control gives you three key governance capabilities:

Approval workflows. Define rules for when an agent needs human sign-off before acting. For example, any response that includes financial data, any tool call that modifies production systems, or any conversation with a VIP customer. The agent pauses, a human reviews and approves (or edits), and then the response goes out.

Activity visibility. Every conversation, tool call, and decision is logged and searchable. You can trace exactly why an agent gave a particular response, what context it had, and what tools it used. This is essential for debugging and compliance.

Multi-agent orchestration. As your needs grow, you can run specialized agents - one for customer support, one for internal IT questions, one for code review - and Mission Control coordinates the routing and handoffs between them.
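To make the approval idea concrete, a rule definition might look roughly like the following. This YAML is a sketch of the concept - the field names are hypothetical, not Mission Control's actual configuration schema:

```yaml
# Sketch of approval rules - field names are illustrative,
# not Mission Control's actual schema.
approval_rules:
  - name: financial-data-review
    match:
      response_contains: [invoice, refund, account balance]
    action: require_approval
    approvers: [finance-team]
  - name: production-changes
    match:
      tool_calls: [deploy, delete_record]
    action: require_approval
    approvers: [oncall-engineer]
```

The shape matters more than the syntax: each rule pairs a matching condition (content patterns or tool names) with a human reviewer group, and the agent blocks until someone in that group approves or edits the pending action.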

Why Self-Hosted AI Matters

The AI assistant landscape is moving fast, and the temptation is to just use the hosted version of everything. But there are real reasons to invest in self-hosting:

  • Data sovereignty - conversations stay on your infrastructure, full stop
  • Cost control - route to cheaper models or local inference for routine queries
  • Customization - tailor behavior, personality, and tool access per agent
  • No vendor lock-in - swap LLM providers without rebuilding your integrations
  • Compliance - meet regulatory requirements for data residency and audit trails

OpenClaw and Mission Control make this accessible without building everything from scratch. The stack is open source, actively maintained, and designed to be extended.

If you’re interested in trying it out, the repositories are available on GitHub. Start with the quick setup, connect a test Slack workspace, and see how it feels to have an AI assistant that you fully control.