# Official Venice Provider Guide

Full setup instructions, the complete model list, and configuration options are on the OpenClaw docs.
## Setup
1. Install OpenClaw
- macOS / Linux
- Windows
- npm
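For the npm route above, a typical global install looks like the following. The package name `openclaw` is an assumption here; the macOS/Linux and Windows options correspond to platform-specific installers on the docs site.

```shell
# Install the OpenClaw CLI globally via npm
# (package name "openclaw" is an assumption; check the docs for your platform)
npm install -g openclaw

# Confirm the binary is on your PATH
openclaw --version
```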
2. Run the onboarding wizard
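The wizard normally starts on first launch; if you need to re-run it, it can usually be invoked directly. The subcommand name below is an assumption, not confirmed against the docs:

```shell
# Re-run the onboarding wizard at any time
# (subcommand name is an assumption; see `openclaw --help`)
openclaw onboard
```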
3. Pick a model
During onboarding, OpenClaw shows all available Venice models. Some recommendations:

| Use case | Model | Privacy |
|---|---|---|
| General | venice/zai-org-glm-5 | Private |
| Reasoning | venice/kimi-k2-5 | Private |
| Coding | venice/claude-opus-4-6 | Anonymized |
| Vision | venice/qwen3-vl-235b-a22b | Private |
| Uncensored | venice/venice-uncensored | Private |
4. Start chatting
Open the terminal UI.

## Privacy modes
Venice models in OpenClaw follow the same privacy tiers as the Venice API:

- Private models (GLM, Qwen, DeepSeek, Llama, Venice Uncensored) run on Venice's GPU fleet. Prompts are never stored or logged.
- Anonymized models (Claude, GPT, Gemini, Grok) are proxied through Venice with all identifying information stripped. The third-party provider sees Venice as the customer, not you.
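For scripting around these tiers, the table above can be captured as a small lookup. This is a convenience sketch of ours, not part of OpenClaw or the Venice API; the tier assignments come directly from the model table in this guide.

```python
# Privacy tiers for the recommended Venice models, as listed in the
# model table above. The dict and helper are illustrative only.
PRIVACY_TIERS = {
    "venice/zai-org-glm-5": "Private",
    "venice/kimi-k2-5": "Private",
    "venice/claude-opus-4-6": "Anonymized",
    "venice/qwen3-vl-235b-a22b": "Private",
    "venice/venice-uncensored": "Private",
}

def is_private(model_id: str) -> bool:
    """True if the model runs entirely on Venice's GPU fleet (Private tier)."""
    return PRIVACY_TIERS.get(model_id) == "Private"

print(is_private("venice/zai-org-glm-5"))    # True: prompts never stored or logged
print(is_private("venice/claude-opus-4-6"))  # False: proxied and anonymized
```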