ZeroClaw has two independent translation layers:
| Layer | Format | What it covers |
|---|---|---|
| App strings | Mozilla Fluent (`.ftl`) | CLI help text, command descriptions, runtime messages |
| Docs | gettext (`.po`) | Everything in this mdBook |
They are filled separately and stored separately. Both use a provider-agnostic fill pipeline: configure any OpenAI-compatible endpoint in `~/.zeroclaw/config.toml` under `[providers.models.<name>]` and pass `--provider <name>` to the fill commands.
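For a hosted endpoint, an entry might look like the sketch below. Only `base_url` and `model` are shown in this doc's Ollama example; the provider name, model, and `api_key` field are illustrative assumptions, not a verified ZeroClaw config schema:

```toml
# Hypothetical hosted entry: the "openai" name, model string, and
# api_key field are assumptions for illustration only.
[providers.models.openai]
base_url = "https://api.openai.com/v1"
model = "gpt-4o-mini"
api_key = "sk-..."   # assumed credential field name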
Local models via Ollama are a first-class option: no API keys required, no per-call cost. A hosted provider is also fine for release-grade quality. Translation is a local operation; run `cargo mdbook sync` before you PR.
Ollama is the current canonical provider for docs. Ensure you have Ollama installed and have `qwen3.6:35b-a3b` pulled. Then, in `~/.zeroclaw/config.toml` (or your established config home):
```toml
# Local via Ollama: free, runs on your machine
[providers.models.ollama]
base_url = "http://localhost:11434"
model = "qwen3.6:35b-a3b"  # Current preferred model
```

{{#include ../developing/building-docs.md}}
App strings live in `crates/zeroclaw-runtime/locales/`. English is the source of truth and is embedded at compile time. Non-English locales are loaded from `~/.zeroclaw/workspace/locales/` at runtime.
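That load order implies a simple lookup rule: prefer the runtime-loaded locale, fall back to the embedded English. A minimal sketch of that fallback, assuming plain key/value maps rather than ZeroClaw's actual Fluent loader:

```rust
use std::collections::HashMap;

/// Resolve a message key: try the user's locale (loaded from
/// ~/.zeroclaw/workspace/locales/ at runtime) first, then fall back
/// to the English strings embedded at compile time.
/// Illustrative only; the real loader works on Fluent bundles.
fn resolve<'a>(
    key: &str,
    locale: &'a HashMap<String, String>,
    english: &'a HashMap<String, String>,
) -> Option<&'a str> {
    locale
        .get(key)
        .or_else(|| english.get(key))
        .map(String::as_str)
}

fn main() {
    let mut ja = HashMap::new();
    ja.insert("greeting".to_string(), "こんにちは".to_string());
    let mut en = HashMap::new();
    en.insert("greeting".to_string(), "Hello".to_string());
    en.insert("farewell".to_string(), "Goodbye".to_string());

    assert_eq!(resolve("greeting", &ja, &en), Some("こんにちは"));
    // Key missing from ja falls back to embedded English.
    assert_eq!(resolve("farewell", &ja, &en), Some("Goodbye"));
    assert_eq!(resolve("unknown", &ja, &en), None);
}
```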
```sh
cargo fluent stats                                          # coverage per locale
cargo fluent check                                          # validate .ftl syntax
cargo fluent fill --locale ja --provider ollama             # fill missing keys (default batch 50)
cargo fluent fill --locale ja --provider ollama --batch 1   # one-at-a-time (use when a file has long entries that truncate at batch 50, e.g. tools.ftl)
cargo fluent fill --locale ja --provider ollama --force     # retranslate everything
cargo fluent scan                                           # find stale or missing keys vs Rust source
```

Each batch is written to disk before the next API call, so a mid-run failure only loses the in-flight batch. Re-running skips keys that already exist in the target `.ftl`, so resume is automatic; no `--force` needed.
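That resume behavior boils down to a set difference: only translate source keys not already present in the target file. A minimal sketch, not the actual `cargo fluent` implementation:

```rust
use std::collections::HashSet;

/// Return only the keys that still need translation: everything in the
/// English source that is not already in the target locale's .ftl.
/// Re-running after a mid-run failure therefore skips completed work.
fn keys_to_fill<'a>(
    source_keys: &'a [&'a str],
    target_keys: &HashSet<&str>,
) -> Vec<&'a str> {
    source_keys
        .iter()
        .filter(|k| !target_keys.contains(*k))
        .copied()
        .collect()
}

fn main() {
    let source = ["greeting", "farewell", "help-text"];
    // "greeting" was written to disk before the previous run died.
    let already_done: HashSet<&str> = HashSet::from(["greeting"]);
    let missing = keys_to_fill(&source, &already_done);
    assert_eq!(missing, vec!["farewell", "help-text"]);
}
```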
After filling, copy the updated `.ftl` file to your workspace and rebuild to pick up the changes:
```sh
mkdir -p ~/.zeroclaw/workspace/locales/ja
cp crates/zeroclaw-runtime/locales/ja/cli.ftl ~/.zeroclaw/workspace/locales/ja/cli.ftl
```

Doc translations live in `docs/book/po/`. `cargo mdbook sync` runs extract → merge → strip obsolete → AI-fill in one step. Without `--provider`, sync still runs extract + merge and reports how many strings need translation; partial translations fall back to English at render time.
```sh
cargo mdbook sync --provider ollama              # delta fill
cargo mdbook sync --provider ollama --force      # quality pass: retranslate all entries
cargo mdbook sync --provider ollama --batch 1    # one-at-a-time (helpful for flaky local models)
cargo mdbook sync --locale ja --provider ollama  # single locale
```

The pipeline has built-in resilience:
- Leak detection: if a model returns its own instructions instead of a translation, the tool detects the pattern (via response-length ratio and bullet-list structure), attempts to recover the real translation from the response tail, and blanks the entry for re-translation if recovery fails.
- Incremental writes: after each batch, the `.po` file is rewritten. A Ctrl-C mid-run doesn't lose the progress up to that point.
- Obsolete stripping: `msgmerge` + `msgattrib --no-obsolete` keep removed source strings from accumulating as `#~` entries.
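The leak-detection heuristic above can be sketched roughly as follows. The thresholds and patterns here are illustrative guesses, not ZeroClaw's actual values:

```rust
/// Rough sketch of a prompt-leak check: a "translation" several times
/// longer than its source, or one that introduces bullet-list structure
/// the source never had, is probably the model echoing its own
/// instructions. The 4.0 ratio is an illustrative threshold.
fn looks_like_leak(source: &str, response: &str) -> bool {
    let ratio =
        response.chars().count() as f64 / source.chars().count().max(1) as f64;
    let source_has_bullets =
        source.lines().any(|l| l.trim_start().starts_with("- "));
    let response_has_bullets =
        response.lines().any(|l| l.trim_start().starts_with("- "));
    ratio > 4.0 || (response_has_bullets && !source_has_bullets)
}

fn main() {
    // A normal translation: similar length, same structure.
    assert!(!looks_like_leak("Hello, world", "Bonjour, le monde"));
    // A leaked system prompt: bullet list the source never had.
    assert!(looks_like_leak(
        "Hello, world",
        "You are a translator.\n- Translate the text\n- Keep formatting\n- Output only the translation"
    ));
}
```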
To add a new locale:

1. Edit `locales.toml` at the repo root (the only file you need to touch):

   ```toml
   [[locale]]
   code = "<code>"
   label = "Language Name"
   ```

2. Translate the app strings:

   ```sh
   cargo fluent fill --locale <code> --provider ollama
   ```

3. Bootstrap and fill the docs `.po` file:

   ```sh
   cargo mdbook sync --locale <code> --provider ollama
   ```
Everything else (`lang-switcher.js`, the CI deploy target list, `cargo mdbook locales` output) reads from `locales.toml` automatically.
Translation quality varies significantly by language and model.
| Locale | Well-supported by | Notes |
|---|---|---|
| ja, zh-CN | qwen3.6 family, any frontier hosted model | Qwen is Chinese-first; Japanese also strong |
| es, fr | qwen3.6, mistral, gemma3, hosted | Romance languages are broadly well-trained |
| Low-resource locales | Hosted frontier models only | Local models often hallucinate words |
For release-grade passes, prefer a hosted frontier model with `--force`. For ongoing delta fills during development, a local Ollama model is fine and free.