OpenAI’s Codex CLI uses the standard OpenAI client env vars. Set OPENAI_BASE_URL to the routing.run endpoint and pass --model route/… so each run targets a model your key can access.

Setup

1. Get your API key
   Create an API key at app.routing.run. Copy the key; it starts with rk_.

2. Set environment variables
   Copy the Connection prompt below and apply the export lines before running codex.

3. Run Codex
   Use any routing.run model with the Codex CLI.

Connection prompt

Codex CLI uses OpenAI-compatible env vars (OPENAI_API_KEY, OPENAI_BASE_URL). Copy the block below when you want a complete, step-by-step setup checklist.

Codex CLI — connect to routing.run

Manual configuration

If you want to set Codex CLI up manually instead of using the prompt above, use these exact values:
export ROUTING_RUN_API_KEY='rk_REPLACE_ME'
export OPENAI_API_KEY="$ROUTING_RUN_API_KEY"
export OPENAI_BASE_URL="https://api.routing.run/v1"
If you keep multiple CLI backends locally, save these exports in a small shell wrapper or env file named routing.run so you can switch providers quickly. Example run:
codex --model route/glm-5.1 "Summarize this repo"
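The wrapper idea above can be sketched as a small shell function. This is a minimal sketch, not part of the official setup: the function name codex_routing is hypothetical, and it assumes ROUTING_RUN_API_KEY is already exported. It scopes the OpenAI variables to a single invocation, so other CLI backends in the same shell are unaffected.

```shell
# codex_routing — hypothetical wrapper function; add it to your shell rc.
# Runs one Codex invocation against routing.run without changing the
# rest of your environment.
codex_routing() {
  OPENAI_API_KEY="${ROUTING_RUN_API_KEY:?set ROUTING_RUN_API_KEY first}" \
  OPENAI_BASE_URL="https://api.routing.run/v1" \
  codex "$@"
}
```

Example: codex_routing --model route/glm-5.1 "Summarize this repo"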

Usage

codex --model route/glm-5.1 "Your prompt here"
The authoritative list of models your plan tier and key can access comes from GET https://api.routing.run/v1/models with your rk_ key. The full list is on the models page; common picks:
Model                        Use case
route/deepseek-v3.2          General-purpose chat (matches most doc examples)
route/glm-5.1                Reasoning and coding
route/minimax-m2.7           Long-context sessions
route/kimi-k2.5              Agentic and tool-heavy flows
route/qwen3.6-plus-preview   Qwen3.6-class preview
route/qwen3.5-plus           Qwen3.5 flagship
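To check access from the command line, the models endpoint can be queried directly. This is a sketch, assuming the response uses the standard OpenAI models-list shape ({"data": [{"id": "…"}, …]}); the helper name routing_models is hypothetical, and it requires curl and python3.

```shell
# routing_models — hypothetical helper that prints the model IDs your
# rk_ key can access. Assumes the standard OpenAI models-list response
# shape: {"data": [{"id": "..."}, ...]}.
routing_models() {
  curl -s https://api.routing.run/v1/models \
    -H "Authorization: Bearer $ROUTING_RUN_API_KEY" |
    python3 -c 'import json, sys; print("\n".join(m["id"] for m in json.load(sys.stdin)["data"]))'
}
```

Any route/… ID printed here is valid for codex --model.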