Run OpenClaw Gateway in Docker with fully isolated config, Tailscale remote access, and proxy support for restricted networks — no interference with your host environment.
- Config isolation — all state lives in `./openclaw-config`, never touching `~/.openclaw` on the host
- Tailscale sidecar — expose the gateway over your tailnet with zero open ports, fully declared in Compose
- Proxy-aware — a single `PROXY=` env var routes Node.js traffic through proxychains4 (works in China and corporate networks where `HTTPS_PROXY` alone isn't enough)
- VM-like UX — `docker exec` drops you in as `node`, wrapper scripts for every operation, supervisord keeps the process alive without restarting the container
- Hot-swap updates — upgrade openclaw without rebuilding the image
```bash
# 1. Clone and configure
cp .env.example .env
$EDITOR .env   # set GATEWAY_HOSTNAME, PROXY, etc. as needed

# 2. Start (ports mode — gateway on localhost:18789)
./setup.sh ports

# 3. Configure API keys and channels
./openclaw onboard
```

Gateway UI: http://localhost:18789
Copy `.env.example` to `.env` and fill in as needed. All fields are optional.
| Variable | Description |
|---|---|
| `GATEWAY_HOSTNAME` | Container hostname shown in Tailscale admin (default: `openclaw-gateway`) |
| `TS_AUTHKEY` | Tailscale OAuth client secret or auth key (Tailscale mode only) |
| `TS_TAG` | Tailscale tag to advertise, e.g. `tag:server` (Tailscale mode only) |
| `NPM_REGISTRY` | npm registry mirror, e.g. `https://registry.npmmirror.com` |
| `PROXY` | Outbound proxy — see Proxy below |
| Script | Description |
|---|---|
| `./setup.sh <ports\|tailscale>` | Build and start the gateway |
| `./stop.sh <ports\|tailscale> [--down]` | Stop (or stop + remove) containers |
| `./restart.sh` | Restart the openclaw process via supervisorctl (fast) |
| `./restart.sh --full <ports\|tailscale>` | Restart the entire container |
| `./update.sh <ports\|tailscale> [version]` | Hot-swap openclaw to a new version |
| `./backup.sh` | Snapshot config + commit workspace |
| `./openclaw <args>` | Run openclaw CLI inside the container |
Run any script without arguments to see its usage.
Use `./setup.sh tailscale` to start with the Tailscale sidecar. The openclaw container shares the sidecar's network — no ports are exposed to the host.
Prerequisites:
- Create an OAuth client at Tailscale admin → Settings → OAuth clients (`devices:write` scope)
- Define your tag in ACL `tagOwners`
- Enable HTTPS in Tailscale admin (DNS → Enable HTTPS) for certificate support
```bash
# .env
TS_AUTHKEY=tskey-client-xxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TS_TAG=tag:server
```

```bash
./setup.sh tailscale
```

Tailscale state is persisted in `./tailscale-state/` — the device stays registered across restarts without re-authentication.
openclaw config for Tailscale — add `allowTailscale` so the gateway trusts Tailscale identity headers (token-free Web UI access for tailnet members):
```json
{
  "gateway": {
    "auth": {
      "mode": "token",
      "token": "...",
      "allowTailscale": true
    }
  }
}
```

API endpoints (`/v1/*`) still require a token regardless of `allowTailscale`.
Node.js's native fetch (undici) does not respect `HTTPS_PROXY`. This setup uses proxychains4 to transparently route openclaw's traffic at the socket level.
Set a single variable in `.env`:
```bash
# HTTP proxy
PROXY=http://192.168.1.1:7890

# HTTP proxy with authentication
PROXY=http://user:pass@192.168.1.1:7890

# SOCKS5 (recommended — see below)
PROXY=socks5://192.168.1.1:1080
```

`npm install` also benefits — `PROXY` is automatically used during bootstrapping and hot-swap updates.
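These URLs all follow the shape `scheme://[user:pass@]host:port`, which is what proxychains4 ultimately needs split into separate fields. A minimal POSIX-sh sketch of that decomposition (a hypothetical helper for illustration; the repo's actual bootstrap may parse the value differently):

```shell
# Parse a PROXY URL into the fields a proxychains4 [ProxyList] entry needs.
PROXY="socks5://user:pass@192.168.1.1:1080"   # example value

scheme="${PROXY%%://*}"          # socks5
rest="${PROXY#*://}"             # user:pass@192.168.1.1:1080
case "$rest" in
  *@*) auth="${rest%@*}"; hostport="${rest##*@}" ;;
  *)   auth="";           hostport="$rest" ;;
esac
host="${hostport%%:*}"
port="${hostport##*:}"

# proxychains4 ProxyList format: <type> <host> <port> [user] [pass]
if [ -n "$auth" ]; then
  echo "$scheme $host $port ${auth%%:*} ${auth#*:}"
else
  echo "$scheme $host $port"
fi
# → socks5 192.168.1.1 1080 user pass
```

Using pure parameter expansion keeps the parsing dependency-free, which matters inside a minimal container image.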
Prefer SOCKS5 over HTTP proxy. HTTP CONNECT proxies are designed for HTTPS (port 443) and often block or mishandle connections to other ports (e.g. SSH on port 22). Some npm packages depend on private git repos cloned over SSH — with an HTTP proxy these installs fail with a misleading `Permission denied (publickey)` error. SOCKS5 tunnels raw TCP regardless of port, so SSH git dependencies work correctly.
DNS hijacking. proxychains4 is configured with `proxy_dns`, meaning DNS resolution is also routed through the proxy. This prevents DNS pollution/hijacking (common in China and some corporate networks) from causing silent failures that masquerade as authentication errors.
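For reference, a proxychains4 config equivalent to `PROXY=socks5://192.168.1.1:1080` with DNS routed through the proxy might look like this (an illustrative sketch; the repo generates its own config at startup):

```ini
# proxychains.conf (illustrative)
strict_chain
proxy_dns

[ProxyList]
socks5 192.168.1.1 1080
```

The `proxy_dns` directive is what forces hostname resolution through the tunnel instead of the local resolver.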
Restore an existing config before running `setup.sh`:

```bash
rsync -av /path/to/backup/ ./openclaw-config/

# If workspace has a git remote
git clone <remote-url> ./openclaw-workspace

./setup.sh ports
```

`backup.sh` snapshots all four data directories (`openclaw-config`, `openclaw-workspace`, `openclaw-workspaces`, `openclaw-repos`) as a single `.tar.gz`. Runtime logs are excluded; `.git` dirs are included for disaster recovery.
```bash
./backup.sh

# Schedule with cron (every hour)
0 * * * * /path/to/openclaw-docker-gateway/backup.sh >> /tmp/openclaw-backup.log 2>&1
```

`sync.sh` commits and pushes workspace changes to local bare repos in `/home/node/repos/`. Bare repos are auto-initialized if missing; remote `origin` is set automatically if not configured (existing remotes are never overridden, making future migration to a real git server seamless).
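The log-exclusion behavior described above can be sketched with plain `tar` (paths and the archive name here are illustrative; `backup.sh` itself chooses the real directories and naming):

```shell
# Demo: archive the four data dirs while excluding runtime logs.
work=$(mktemp -d)
mkdir -p "$work/openclaw-config" "$work/openclaw-workspace" \
         "$work/openclaw-workspaces" "$work/openclaw-repos"
echo '{}'    > "$work/openclaw-config/openclaw.json"
echo 'noise' > "$work/openclaw-config/gateway.log"

tar -czf "$work/backup.tar.gz" --exclude='*.log' \
  -C "$work" openclaw-config openclaw-workspace openclaw-workspaces openclaw-repos

contents=$(tar -tzf "$work/backup.tar.gz")
case "$contents" in *openclaw.json*) echo "config captured" ;; esac
case "$contents" in *gateway.log*) echo "logs leaked" ;; *) echo "logs excluded" ;; esac
```

A single `--exclude` glob is enough because `tar` applies it to every member path in the archive.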
```bash
docker exec openclaw-gateway /home/node/scripts/sync.sh
```

Push workspace to a real remote (optional — set the remote before running sync):

```bash
docker exec openclaw-gateway git -C /home/node/.openclaw/workspace remote set-url origin <your-remote-url>
```

```bash
./update.sh ports            # latest
./update.sh ports 2026.3.1   # specific version
```

Hot-swaps the binary into the `toolchain/` volume and restarts the gateway — no image rebuild needed.
openclaw supports configuring the workspace path per agent. Additional workspaces live in `./openclaw-workspaces/` on the host, mounted at `/home/node/workspaces` in the container.
Create a subdirectory per agent:
```bash
mkdir -p openclaw-workspaces/agent-a openclaw-workspaces/agent-b
```

Then point each agent's workspace to `/home/node/workspaces/agent-a` in openclaw's config. The main workspace (`./openclaw-workspace`) is unaffected.
The `container/skills/` directory contains openclaw skills shipped with this repo. Skills teach agents how to use the gateway's scripts (backup, workspace management, bare repos).
Enable globally (all agents) — add to `openclaw.json`:

```json
{
  "skills": {
    "load": {
      "extraDirs": ["/home/node/scripts/skills"]
    }
  }
}
```

Enable per agent (symlink into workspace):

```bash
mkdir -p openclaw-workspaces/agent-a/skills
ln -s /home/node/scripts/skills/gateway-ops \
  /home/node/workspaces/agent-a/skills/gateway-ops
```

The skill source lives in `container/skills/` and is mounted read-only at `/home/node/scripts/skills/`. Runtime-installed skills (`~/.openclaw/skills/`) are unaffected.
Agents can share knowledge via local bare git repos mounted at `/home/node/repos` in the container.

Initialize a bare repo:

```bash
git init --bare openclaw-repos/knowledge.git
```

Then inside any agent's workspace:

```bash
git remote add origin /home/node/repos/knowledge.git
git push origin main
```

Any agent with access to the container can clone or pull from the same path. No network required.
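The whole round trip can be exercised locally with throwaway paths (a self-contained demo; the directory names are placeholders, not the container's real mounts):

```shell
# One agent pushes a note into a bare repo; another clones it back.
tmp=$(mktemp -d)
git init --bare "$tmp/knowledge.git" >/dev/null
git -C "$tmp/knowledge.git" symbolic-ref HEAD refs/heads/main  # make 'main' the default branch

git init "$tmp/agent-a" >/dev/null
git -C "$tmp/agent-a" -c user.email=a@example.com -c user.name="Agent A" \
  commit -q --allow-empty -m "shared note"
git -C "$tmp/agent-a" branch -M main
git -C "$tmp/agent-a" remote add origin "$tmp/knowledge.git"
git -C "$tmp/agent-a" push -q origin main

git clone -q "$tmp/knowledge.git" "$tmp/agent-b"
git -C "$tmp/agent-b" log --oneline   # the note is now visible to agent B
```

Setting the bare repo's `HEAD` to `main` matters: without it, a plain `git clone` of a repo whose only branch is `main` would warn about a dangling default branch.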
supervisord manages the openclaw gateway process. Key implications:
- The container never exits due to openclaw crashing — supervisord restarts it automatically (up to 5 retries)
- `restart: unless-stopped` is effectively not involved in openclaw recovery; supervisord handles that layer
- `./restart.sh` restarts just the openclaw process without touching the container
The healthcheck probes port 18789. A failing healthcheck does not trigger a container restart — Docker's restart policy is based on container exit codes, not healthcheck state. An unhealthy status is purely informational and resolves automatically once openclaw recovers.
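For reference, a Compose healthcheck probing that port might be declared like this (an illustrative sketch; consult the repo's actual compose file, which may differ):

```yaml
healthcheck:
  test: ["CMD-SHELL", "curl -fsS http://localhost:18789/ >/dev/null || exit 1"]
  interval: 30s
  timeout: 5s
  retries: 3
```

Even with this in place, Docker only flips the container's status to `unhealthy`; restarting on failure remains supervisord's job inside the container.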
On first start, `launcher.sh` installs openclaw via `npm install -g`. This takes ~2 minutes. The binary is cached in `./toolchain/` — subsequent starts are fast.