OpenClaw can use routing.run through a custom OpenAI-compatible provider. Point the provider at https://api.routing.run/v1, authenticate with the rk_ key from your dashboard, and use model IDs with the route/ prefix.

Setup

1. Get your API key. Create an API key at app.routing.run and copy the full secret; it starts with rk_.
2. Create an isolated OpenClaw profile. Use a named profile like routingrun-test so you can test routing.run without changing your main OpenClaw setup.
3. Save the profile config and env file. Create ~/.openclaw-routingrun-test/openclaw.json and ~/.openclaw-routingrun-test/.env with the exact values below.
4. Start the profile and test it. Run openclaw --profile routingrun-test gateway, then send a local test turn with openclaw --profile routingrun-test agent --local --agent main ....

Connection prompt

The block below walks through the complete setup sequence.

OpenClaw CLI - connect to routing.run
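As a sketch, the setup steps above can be scripted like this. It assumes the profile directory layout described in this guide and that the openclaw binary is on your PATH; the final test prompt is a placeholder, not a required value.

```shell
#!/bin/sh
# Sketch of the full setup sequence; profile paths as documented above.
set -eu

PROFILE_DIR="$HOME/.openclaw-routingrun-test"
mkdir -p "$PROFILE_DIR/workspace"

# 1. Store the dashboard key (replace rk_REPLACE_ME with your real key).
printf "ROUTING_RUN_API_KEY='rk_REPLACE_ME'\n" > "$PROFILE_DIR/.env"
chmod 600 "$PROFILE_DIR/.env"

# 2. Save the JSON from the "Manual configuration" section as:
#    $PROFILE_DIR/openclaw.json

# 3. Start the isolated gateway, then send a local test turn.
if command -v openclaw >/dev/null 2>&1; then
  openclaw --profile routingrun-test gateway &
  sleep 2
  # Placeholder test prompt; any short message works.
  openclaw --profile routingrun-test agent --local --agent main "hello"
else
  echo "openclaw not found on PATH; install it first"
fi
```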

Working config notes

This profile shape was validated with a live setup on OpenClaw 2026.3.13.
  • Use models.providers.routing-run.api: "openai-completions".
  • Put reasoning: true only under models.providers.routing-run.models[].
  • Do not put reasoning inside agents.defaults.models. That shape fails validation on the tested version.
  • Set gateway.mode to local for a fresh isolated profile, or OpenClaw will block startup.
  • Use tools.profile: "coding" if you want the coding-agent tool surface.
  • thinkingDefault: "high" is optional. Remove it first if you suspect reasoning parameters are involved in a failure.

Manual configuration

Create ~/.openclaw-routingrun-test/.env:
ROUTING_RUN_API_KEY='rk_REPLACE_ME'
Then save this profile config at ~/.openclaw-routingrun-test/openclaw.json:
{
  "models": {
    "providers": {
      "routing-run": {
        "baseUrl": "https://api.routing.run/v1",
        "apiKey": "${ROUTING_RUN_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "route/glm-5.1",
            "name": "GLM 5.1",
            "reasoning": true
          },
          {
            "id": "route/minimax-m2.7",
            "name": "MiniMax M2.7",
            "reasoning": true
          },
          {
            "id": "route/mimo-v2-pro",
            "name": "MiMo v2 Pro",
            "reasoning": true
          },
          {
            "id": "route/qwen3.6-plus",
            "name": "Qwen 3.6 Plus",
            "reasoning": true
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "routing-run/route/glm-5.1",
        "fallbacks": [
          "routing-run/route/qwen3.6-plus",
          "routing-run/route/mimo-v2-pro"
        ]
      },
      "models": {
        "routing-run/route/glm-5.1": {
          "alias": "ROUTE_GLM-5.1"
        },
        "routing-run/route/minimax-m2.7": {
          "alias": "ROUTE_MINIMAX-M2.7"
        },
        "routing-run/route/mimo-v2-pro": {
          "alias": "ROUTE_MIMO-V2-PRO"
        },
        "routing-run/route/qwen3.6-plus": {
          "alias": "ROUTE_QWEN3.6-PLUS"
        }
      },
      "workspace": "~/.openclaw-routingrun-test/workspace",
      "thinkingDefault": "high"
    },
    "list": [
      {
        "id": "main",
        "model": {
          "primary": "routing-run/route/glm-5.1",
          "fallbacks": [
            "routing-run/route/qwen3.6-plus",
            "routing-run/route/mimo-v2-pro",
            "routing-run/route/minimax-m2.7"
          ]
        }
      }
    ]
  },
  "tools": {
    "profile": "coding"
  },
  "gateway": {
    "port": 19011,
    "mode": "local",
    "bind": "loopback"
  }
}
When you start the isolated gateway for the first time, OpenClaw may add gateway.auth.token automatically. That is expected.
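Before that first start, you can at least confirm the file parses. This is a minimal syntax check only; it does not validate OpenClaw's config schema.

```shell
# Syntax-check the profile config: JSON validity only, not OpenClaw's schema.
check_profile_json() {
  config="$HOME/.openclaw-routingrun-test/openclaw.json"
  if [ -f "$config" ] && python3 -m json.tool "$config" >/dev/null 2>&1; then
    echo "openclaw.json: valid JSON"
  else
    echo "openclaw.json: missing or invalid"
  fi
}
check_profile_json
```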

Troubleshooting

  • Invalid model — Keep the full model ID, including the route/ prefix. Use published IDs from the models page.
  • Gateway start blocked — Set gateway.mode to local in the profile config, or start with --allow-unconfigured.
  • Wrong host or 401 — Confirm the custom provider still points at https://api.routing.run/v1 and ROUTING_RUN_API_KEY is available in ~/.openclaw-routingrun-test/.env or the current shell.
  • Reasoning issues — Remove thinkingDefault first, then retry with the same model list.
  • Config validation error for reasoning — Keep reasoning only in models.providers.routing-run.models[], not in agents.defaults.models.
  • Tooling mismatch — If OpenClaw exposes the wrong tool surface, set tools.profile to coding.
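For the 401 case, you can check the key outside OpenClaw entirely. This sketch assumes routing.run exposes the standard OpenAI-compatible /v1/models endpoint; a 401 here means the key itself is bad rather than the profile config.

```shell
# List models with the raw key; isolates key problems from config problems.
check_routing_key() {
  if [ -n "${ROUTING_RUN_API_KEY:-}" ]; then
    curl -sS https://api.routing.run/v1/models \
      -H "Authorization: Bearer $ROUTING_RUN_API_KEY" \
      || echo "request failed (network or key problem)"
  else
    echo "ROUTING_RUN_API_KEY is not set"
  fi
}
check_routing_key
```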
Published model IDs are listed on the models page. Exact access depends on your plan tier and is shown in the dashboard; common picks:

Model                      Use case
route/glm-5.1-precision    Strongest reasoning-first choice
route/qwen3.6-plus         Best default starting point
route/minimax-m2.7         Long-context sessions
route/glm-5.1              Faster GLM alternative
route/mimo-v2-pro          General-purpose coding
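To exercise one published model ID directly, bypassing OpenClaw, the sketch below assumes the standard OpenAI-compatible /v1/chat/completions request shape; the prompt content is a placeholder.

```shell
# Minimal chat completion against a published route/ model ID.
payload='{"model": "route/qwen3.6-plus", "messages": [{"role": "user", "content": "Say ok."}]}'
if [ -n "${ROUTING_RUN_API_KEY:-}" ]; then
  curl -sS https://api.routing.run/v1/chat/completions \
    -H "Authorization: Bearer $ROUTING_RUN_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$payload" || echo "request failed (network or key problem)"
else
  echo "set ROUTING_RUN_API_KEY before running this"
fi
```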