This guide shows how to run OpenAI Codex CLI with Venice using Codex’s official config paths: ~/.codex/config.toml (user-level) or .codex/config.toml (project-level).

  • Simple Setup: one config file in your project
  • OpenAI Compatible: uses Venice’s OpenAI-compatible API
  • Model Flexibility: swap in any supported Venice text model

Prerequisites

  • Codex CLI installed
  • A Venice API key
Setup

1. Create the project config path

From your project root:
mkdir -p .codex
2. Create .codex/config.toml

Create the file and paste the configuration below:
#:schema https://developers.openai.com/codex/config-schema.json

model = "venice-model-id" # placeholder: use any Venice model ID
model_provider = "venice"
model_reasoning_effort = "high"
personality = "pragmatic"
sandbox_mode = "workspace-write"

[model_providers.venice]
name = "Venice"
base_url = "https://api.venice.ai/api/v1/"
experimental_bearer_token = "YOUR VENICE API KEY"
wire_api = "responses"
3. Replace the two placeholders

Update:
  • model with the Venice model ID you want to use
  • experimental_bearer_token with your real Venice API key
You can browse available model IDs in the text model catalog.
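If you prefer not to type the key into the file by hand, you can splice it in from an environment variable. This is a sketch, not part of the official setup; `VENICE_API_KEY` is a name you choose, and it assumes the key contains no `/` characters:

```shell
# Replace the bearer-token placeholder in .codex/config.toml with the
# key held in $VENICE_API_KEY, so the key is never typed into the file.
# -i.bak is the portable in-place form for both GNU and BSD sed.
sed -i.bak "s/YOUR VENICE API KEY/${VENICE_API_KEY}/" .codex/config.toml
rm .codex/config.toml.bak   # drop the backup file sed leaves behind
```

The model placeholder still needs to be edited manually, since the model ID is a choice rather than a secret.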
4. Run Codex CLI normally

Start Codex CLI from the same project. It will load .codex/config.toml (for trusted projects) and route requests through Venice.

Official Codex Config Locations

  • User defaults: ~/.codex/config.toml
  • Project overrides: .codex/config.toml (loaded only for trusted projects)
If you want Venice settings to apply everywhere, put the same config in ~/.codex/config.toml.

Configuration Precedence (Highest First)

  1. CLI flags and --config overrides
  2. Profile values (--profile <name>)
  3. Project config layers (.codex/config.toml, closest directory wins)
  4. User config (~/.codex/config.toml)
  5. System config (/etc/codex/config.toml, Unix)
  6. Built-in defaults
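The profile layer is a convenient place to bundle the Venice settings if you switch between providers. A sketch, assuming your Codex version supports `[profiles.<name>]` tables; the profile name and model ID are placeholders:

```toml
# In ~/.codex/config.toml; activate with: codex --profile venice
[profiles.venice]
model = "venice-model-id"       # placeholder: any Venice model ID
model_provider = "venice"
```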

Notes

  • Keep your API key private and never commit real keys to git.
  • Codex ignores project .codex/ config when a project is marked untrusted.
  • If you switch models, only update the model field.
  • The wire_api = "responses" setting is required for this provider setup.
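One way to keep the key out of the file entirely is to have the provider entry read it from the environment. This sketch assumes your Codex version supports the provider `env_key` field; `VENICE_API_KEY` is a variable name you choose:

```toml
[model_providers.venice]
name = "Venice"
base_url = "https://api.venice.ai/api/v1/"
env_key = "VENICE_API_KEY"   # key is read from the environment, not stored on disk
wire_api = "responses"
```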
