- **Pay Per Token**: no subscription; pay only for what you use
- **Private Inference**: zero data retention on Venice servers
- **Docker Isolation**: each chat runs in its own secure container
## Why Venice AI?

Venice is a privacy-first AI platform. It doesn’t store or log any prompts or responses on its servers — your conversations exist only on your device. Requests are encrypted end-to-end through its proxy to decentralized GPU providers, with zero data retention. This means your AI assistant conversations stay private, even from Venice itself. Venice provides anonymized access to frontier models (Claude Opus, Claude Sonnet) and fully private access to open-source models (GLM, Qwen) through a single API — switch between them anytime.

| | Venice AI | Traditional AI providers |
|---|---|---|
| Data retention | None — zero logs | Yes |
| Prompt privacy | Encrypted, never stored | Stored on provider servers |
| Open-source models | Yes (GLM, Qwen, and others) | No |
| Frontier models | Claude, GPT, and others — anonymously | Only through direct subscriptions |
| Pricing | Pay-per-token, no subscription. Or stake DIEM for daily refreshing credits | $20–200/mo subscriptions or pay-per-token API |
| Uncensored inference | Yes (open-source models) | No |
## Why NanoClaw?

NanoClaw is a clean, minimal alternative to larger platforms like OpenClaw. It’s designed for one person running one bot.

| | NanoClaw (Venice) | OpenClaw |
|---|---|---|
| Codebase | ~2,000 lines, handful of files | ~500,000 lines, 53 config files |
| Dependencies | ~15 packages | 70+ packages |
| Security model | OS-level Docker container isolation | Application-level allowlists and pairing codes |
| Per-group isolation | Each group gets its own container, filesystem, and memory | Shared process, shared memory |
| Setup | One wizard (/setup), ~10 minutes | Manual multi-step configuration |
| AI provider | Venice AI (private, no subscription) | Anthropic (requires API key or subscription) |
| Customization | Edit the code directly — it’s small enough to read | Config files and plugins |
| Target user | One person, one bot | Multi-user platform |
## What You Get
- Personal AI assistant on Telegram and/or WhatsApp
- Powered by Venice AI — no Anthropic account needed
- Bot runs in an isolated Docker container (sandboxed, can’t access your system)
- Model switching — tell the bot “switch to zai-org-glm-5” or “use opus” anytime
- Scheduled tasks — set reminders, recurring tasks
- Web search and browsing built in
- Markdown formatting in Telegram messages
## Prerequisites

- **Node.js 20+**: check with `node --version`
- **Docker**: install and open once so it’s running
- **Claude Code CLI**: check with `claude --version`
- **Venice API Key**: generate from your Venice account
- **Telegram bot token** (if using Telegram):
  - Open Telegram and search for @BotFather
  - Send `/newbot` and follow the prompts
  - Save the token BotFather gives you (looks like `123456789:ABCdef...`)
## Setup

The setup takes about 10 minutes. You only need one Terminal window.

### Launch Claude Code with Venice

Replace `your-key` with your Venice API key and run the launch command. This starts the Venice proxy and launches Claude Code through it in a single command. If prompted “Do you want to use this API key?”, select Yes.

Claude Code defaults to GLM 5 (`zai-org-glm-5`) to keep setup costs low. After setup, type `/model` inside Claude Code to switch to `claude-sonnet-4-6` or `claude-opus-4-6` for best performance.

### Run the Setup Wizard

In your Claude Code terminal, type `/setup`. The wizard walks you through:
- Bootstrap — checks Node.js and dependencies
- Venice API key — validates and saves your key
- Channel choice — pick WhatsApp, Telegram, or both
- Container build — builds the Docker container (takes a few minutes first time)
- WhatsApp auth — scan QR code with your phone (if applicable)
- Telegram setup — send a message to your bot so it detects your chat
- Trigger word — prefix that activates the bot (default: `@Andy`)
- Mount directories — pick “No” for now (you can add file access later)
- Start services — NanoClaw and the Venice proxy both start as background services:
  - NanoClaw — the bot itself
  - Venice proxy — a small local server (localhost:4001) that translates between Claude Code and Venice AI
If the wizard stops between steps, type “continue” or “next step” to nudge it forward.
## Start Chatting

Once setup is complete, open your chat (Telegram or WhatsApp) and send a message with your trigger word (for example, `@Andy hello`). The bot should respond within seconds. In your main channel, you can type normally without the `@Andy` prefix.

You can now close the terminal window. Everything runs as background services and starts automatically when your computer boots.

## How It Works
There are two layers to NanoClaw:

| Layer | What It Does |
|---|---|
| Claude Code CLI | Admin tool for setup, debugging, and customization |
| The Bot | AI in your chat, running inside an isolated Docker container |
Use the Claude Code CLI anytime to run `/setup`, `/debug`, or `/customize`, or to make changes to the bot’s behavior.
## Models

| Context | Default Model | How to Switch |
|---|---|---|
| Bot (in chat) | `claude-sonnet-4-6` | Tell the bot: “switch to opus” or “use zai-org-glm-5” |
| Claude Code CLI | `zai-org-glm-5` (GLM 5) | Use `/model` in Claude Code or `claude --model <name>` |
## Troubleshooting

### The proxy isn't running

The Venice proxy runs as a background service and restarts itself automatically. If it’s not working, restart it manually:

- macOS: `launchctl kickstart -k gui/$(id -u)/com.nanoclaw.venice-proxy`
- Linux: `systemctl --user restart nanoclaw-venice-proxy`
### Claude Code shows 403 error or 'Please run /login'

This means Claude Code can’t connect to the Venice proxy.

- Check the proxy is running (see the troubleshooting step above).
- Make sure you’re in the right folder: always `cd nanoclaw-venice` first.
- Start fresh: close all terminals and relaunch Claude Code through the proxy (see Setup above).
### Model errors ('model does not exist')

The model ID must match a Venice model ID exactly (for example, `zai-org-glm-5`). Check the Venice model catalog for valid names.
### Bot doesn't respond to messages

Work through these steps in order:

- Check your trigger word. Make sure you’re using the right prefix (e.g., `@Andy hello`).
- Check Docker is running. Run `docker info`; if it errors, open Docker Desktop.
- Check the proxy is running. See “The proxy isn’t running” above.
- Check logs: run `tail -f logs/nanoclaw.log` in the project folder.
- Check container logs. Open the `nanoclaw-venice/groups/main/logs/` folder and open the most recent file that starts with `container-`.
- Restart everything: restart both the proxy and the bot (see above).
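Those first few checks can be bundled into one quick status sweep. This script is not part of NanoClaw — it is an illustrative sketch that assumes the default proxy port (4001, per the setup notes) and that you run it from the project folder:

```shell
# Illustrative status sweep for the checklist above (not shipped with NanoClaw).
# Assumes the proxy's default port 4001 and that you run it from the project folder.

# 1. Is Docker up?
docker_status=$(docker info >/dev/null 2>&1 && echo "running" || echo "NOT running")
echo "docker: $docker_status"

# 2. Is anything listening on the proxy port? (/dev/tcp is a bash feature;
#    the proxy may not answer plain HTTP GETs, so we only probe the port.)
proxy_status=$( (exec 3<>/dev/tcp/localhost/4001) 2>/dev/null && echo "listening" || echo "NOT listening")
echo "proxy: $proxy_status"

# 3. Show the tail of the bot log if it exists.
if [ -f logs/nanoclaw.log ]; then
  echo "last log lines:"
  tail -n 5 logs/nanoclaw.log
else
  echo "log: logs/nanoclaw.log not found (run from the project folder)"
fi
```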
### Container build fails during setup

Make sure Docker Desktop is open and running. Wait 10 seconds for Docker to fully start, then type `continue` in the wizard to retry.

### WhatsApp disconnected
Your WhatsApp session can expire. To reconnect, scan the QR code with WhatsApp (Settings → Linked Devices → Link a Device), then restart the bot:

- macOS: `launchctl kickstart -k gui/$(id -u)/com.nanoclaw`
- Linux: `systemctl --user restart nanoclaw`
## Advanced

### Give the bot access to files on your computer

By default, the bot is completely walled off from your computer — it can only see its own memory and conversation history.

- During setup: when asked about directory access, choose “Yes”
- After setup: run `/customize` in Claude Code
### Manually start/stop the bot

NanoClaw runs two background services that start automatically on boot.

**macOS:**
| Action | Command |
|---|---|
| Start bot | launchctl load ~/Library/LaunchAgents/com.nanoclaw.plist |
| Stop bot | launchctl unload ~/Library/LaunchAgents/com.nanoclaw.plist |
| Restart bot | launchctl kickstart -k gui/$(id -u)/com.nanoclaw |
| Start proxy | launchctl load ~/Library/LaunchAgents/com.nanoclaw.venice-proxy.plist |
| Stop proxy | launchctl unload ~/Library/LaunchAgents/com.nanoclaw.venice-proxy.plist |
| Restart proxy | launchctl kickstart -k gui/$(id -u)/com.nanoclaw.venice-proxy |
**Linux:**

| Action | Command |
|---|---|
| Start bot | systemctl --user start nanoclaw |
| Stop bot | systemctl --user stop nanoclaw |
| Restart bot | systemctl --user restart nanoclaw |
| Start proxy | systemctl --user start nanoclaw-venice-proxy |
| Stop proxy | systemctl --user stop nanoclaw-venice-proxy |
| Restart proxy | systemctl --user restart nanoclaw-venice-proxy |
### Using Claude Code through Venice (no bot)

If you just want Claude Code with Venice and don’t need WhatsApp/Telegram, the proxy service needs to be running. If you’ve already run `/setup`, it’s already running as a background service.

Tip: add a shortcut to your `~/.zshrc` (or `~/.bashrc`) so you can quickly switch any terminal to Venice. Then just type `venice` in any terminal before running `claude` to use Venice, or `anthropic` to switch back.
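The snippet itself is not reproduced above; here is a minimal sketch of what it might look like, assuming the proxy listens at `localhost:4001` (as noted during setup) and that Claude Code reads the `ANTHROPIC_BASE_URL` environment variable. The names `venice` and `anthropic` match the tip; everything else is illustrative:

```shell
# Sketch of the ~/.zshrc shortcut described above (illustrative; adjust to your install).
# Assumes the proxy runs at localhost:4001 and Claude Code honors ANTHROPIC_BASE_URL.
venice() {
  export ANTHROPIC_BASE_URL="http://localhost:4001"
  echo "Claude Code will now route through the Venice proxy"
}
anthropic() {
  unset ANTHROPIC_BASE_URL
  echo "Claude Code will now talk to Anthropic directly"
}
```

After reloading your shell, run `venice` and then `claude` in the same terminal.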
### Running multiple bots
You can run multiple NanoClaw bots on the same machine (e.g., one for personal use and one for a team). Just clone the repo into a different folder and run setup again. Note: they share the same Docker image, so rebuilding one affects all of them.
### Developer commands
For people who want to modify NanoClaw’s code:
## Architecture

| File | Purpose |
|---|---|
| `proxy/venice-proxy.ts` | Translates Anthropic format to OpenAI format for Venice |
| `src/index.ts` | Main orchestrator — message loop, agent invocation |
| `src/channels/whatsapp.ts` | WhatsApp connection via baileys |
| `src/channels/telegram.ts` | Telegram bot via grammy |
| `src/container-runner.ts` | Spawns isolated agent containers |
## FAQ

### Why do I need a proxy?
The Claude Agent SDK speaks Anthropic’s message format. Venice speaks OpenAI’s format. The proxy translates between them so everything works without modifying the SDK.
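As a rough sketch of what that translation involves (illustrative only — the type and function names here are invented, and streaming, tool use, and structured content blocks are omitted):

```typescript
// Sketch of the request-shape translation the proxy performs.
// Anthropic-style requests carry the system prompt as a top-level field;
// OpenAI-style APIs expect it as the first entry in the messages array.
// Streaming, tools, and content-block handling are omitted.

interface AnthropicRequest {
  model: string;
  max_tokens: number;
  system?: string;
  messages: { role: "user" | "assistant"; content: string }[];
}

interface OpenAIChatRequest {
  model: string;
  max_tokens: number;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
}

function toOpenAIChat(req: AnthropicRequest): OpenAIChatRequest {
  const messages: OpenAIChatRequest["messages"] = [];
  if (req.system) {
    // Anthropic's top-level system prompt becomes the first OpenAI message.
    messages.push({ role: "system", content: req.system });
  }
  messages.push(...req.messages);
  return { model: req.model, max_tokens: req.max_tokens, messages };
}
```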
### Can I use open-source models?
Yes. Venice hosts many models. Tell the bot “switch to zai-org-glm-5” or any Venice model ID. See the model catalog.
### Is it secure?
Agents run in Docker containers with real OS-level isolation. The Venice API key is passed via stdin, never written to disk inside containers. Each group gets its own isolated environment.
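That stdin handoff can be sketched like this — an illustrative example, with `cat` standing in for the container entrypoint (the real invocation goes through Docker, and the helper name here is invented):

```typescript
// Sketch: handing a secret to a child process over stdin so it never
// appears on disk or in the process argument list (which `ps` can read).
// `cat` stands in for the container entrypoint; the actual code pipes
// into a Docker container the same way.
import { spawnSync } from "node:child_process";

function runWithSecret(command: string, args: string[], secret: string): string {
  const result = spawnSync(command, args, {
    input: secret, // the secret travels over stdin only
    encoding: "utf8",
  });
  return result.stdout;
}

// The child reads the key from stdin; argv and the filesystem stay clean.
const echoed = runWithSecret("cat", [], "venice-key-example");
```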
### Do I need an Anthropic subscription?
No. Everything runs through Venice AI. You only need a Venice API key.
### Can I use this on a server?
Yes. It works on any Linux machine with Docker. Use the systemd service for auto-start on boot.
## Resources

- NanoClaw Venice Repo: source code and full README
- Original NanoClaw: upstream project by qwibitai
- Venice Model Catalog: browse available models
- Venice Privacy: how Venice protects your data