OpenClaw is an open-source, self-hosted AI gateway that connects messaging platforms (WhatsApp, Telegram, Discord, iMessage, Slack) to AI models. Venice AI is available as a built-in provider, giving you access to private and uncensored models from any connected channel.
Documentation Index
Fetch the complete documentation index at: https://docs.venice.ai/llms.txt
Use this file to discover all available pages before exploring further.
Official Venice Provider Guide
Full setup instructions, a model list, and configuration options are available in the OpenClaw docs.
Setup
1. Install OpenClaw
- macOS / Linux
- Windows
- npm
2. Run the onboarding wizard
3. Pick a model
During onboarding, OpenClaw shows all available Venice models. Some recommendations:
| Use case | Model | Privacy |
|---|---|---|
| General | venice/zai-org-glm-5 | Private |
| Reasoning | venice/kimi-k2-5 | Private |
| Coding | venice/claude-opus-4-6 | Anonymized |
| Vision | venice/qwen3-vl-235b-a22b | Private |
| Uncensored | venice/venice-uncensored | Private |
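Outside the wizard, a chosen model can also be pinned in OpenClaw's configuration file. The fragment below is an illustrative sketch only: the key names (`provider`, `model`) and the exact schema are assumptions, not the canonical OpenClaw config format; the onboarding wizard writes the real values for you.

```json
{
  "provider": "venice",
  "model": "venice/zai-org-glm-5"
}
```

See the Venice Provider Guide linked above for the authoritative configuration reference.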
4. Start chatting
Open the terminal UI.
Privacy modes
Venice models in OpenClaw follow the same privacy tiers as the Venice API:
- Private models (GLM, Qwen, DeepSeek, Llama, Venice Uncensored) run on Venice’s GPU fleet. Prompts are never stored or logged.
- Anonymized models (Claude, GPT, Gemini, Grok) are proxied through Venice with all identifying information stripped. The third-party provider sees Venice as the customer, not you.
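The two tiers above can be expressed as a small lookup, for example when deciding at the application level which requests may carry sensitive content. This helper is purely illustrative and not part of OpenClaw or the Venice API; the family lists come from the bullets and the model table above.

```python
# Privacy tiers as described above. "kimi" is listed per the model table.
PRIVATE_FAMILIES = ("glm", "qwen", "deepseek", "llama", "venice-uncensored", "kimi")
ANONYMIZED_FAMILIES = ("claude", "gpt", "gemini", "grok")

def privacy_tier(model_id: str) -> str:
    """Classify a venice/<model> id as 'private', 'anonymized', or 'unknown'."""
    name = model_id.removeprefix("venice/").lower()
    if any(family in name for family in ANONYMIZED_FAMILIES):
        return "anonymized"
    if any(family in name for family in PRIVATE_FAMILIES):
        return "private"
    return "unknown"

print(privacy_tier("venice/venice-uncensored"))  # private
print(privacy_tier("venice/claude-opus-4-6"))    # anonymized
```

A router built on this could, for instance, refuse to send prompts tagged as sensitive to any model whose tier is not `private`.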
Image and video generation
Install the Venice AI Media skill for image and video generation.
Resources
OpenClaw Docs
Official documentation
Venice Provider Guide
Full Venice setup reference