NanoClaw is a lightweight, self-hosted AI assistant that runs on WhatsApp and Telegram. This fork adds Venice AI support, so everything runs privately without an Anthropic subscription.

## Documentation Index
Fetch the complete documentation index at: https://docs.venice.ai/llms.txt
Use this file to discover all available pages before exploring further.
- Pay Per Token
- Private Inference
- Docker Isolation
## Why Venice AI?
Venice is a privacy-first AI platform. They don’t store or log any prompts or responses on their servers — your conversations exist only on your device. Requests are encrypted end-to-end through their proxy to decentralized GPU providers, with zero data retention. This means your AI assistant conversations stay private, even from Venice themselves. Venice provides anonymized access to frontier models (Claude Opus, Claude Sonnet) and fully private access to open-source models (GLM, Qwen) through a single API — switch between them anytime.

| | Venice AI | Traditional AI providers |
|---|---|---|
| Data retention | None — zero logs | Yes |
| Prompt privacy | Encrypted, never stored | Stored on provider servers |
| Open-source models | Yes (GLM, Qwen, and others) | No |
| Frontier models | Claude, GPT, and others — anonymously | Only through direct subscriptions |
| Pricing | Pay-per-token, no subscription. Or stake DIEM for daily refreshing credits | $20–200/mo subscriptions or pay-per-token API |
| Uncensored inference | Yes (open-source models) | No |
## Why NanoClaw?
NanoClaw is a clean, minimal alternative to larger platforms like OpenClaw. It’s designed for one person running one bot.

| | NanoClaw (Venice) | OpenClaw |
|---|---|---|
| Codebase | ~2,000 lines, handful of files | ~500,000 lines, 53 config files |
| Dependencies | ~15 packages | 70+ packages |
| Security model | OS-level Docker container isolation | Application-level allowlists and pairing codes |
| Per-group isolation | Each group gets its own container, filesystem, and memory | Shared process, shared memory |
| Setup | One wizard (/setup), ~10 minutes | Manual multi-step configuration |
| AI provider | Venice AI (private, no subscription) | Anthropic (requires API key or subscription) |
| Customization | Edit the code directly — it’s small enough to read | Config files and plugins |
| Target user | One person, one bot | Multi-user platform |
## What You Get
- Personal AI assistant on Telegram and/or WhatsApp
- Powered by Venice AI — no Anthropic account needed
- Bot runs in an isolated Docker container (sandboxed, can’t access your system)
- Model switching — tell the bot “switch to zai-org-glm-5” or “use opus” anytime
- Scheduled tasks — set reminders, recurring tasks
- Web search and browsing built in
- Markdown formatting in Telegram messages
## Prerequisites

### Node.js 20+

Check with `node --version`.

### Docker

### Claude Code CLI

Check with `claude --version`.

### Venice API Key

### Telegram Bot Token (if using Telegram)

- Open Telegram and search for @BotFather
- Send `/newbot` and follow the prompts
- Save the token BotFather gives you (looks like `123456789:ABCdef...`)
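With the tools above installed, a quick sanity check (a convenience sketch, not part of the project) can confirm everything is on your PATH:

```shell
# Check that each prerequisite is installed; prints one line per tool
for cmd in node docker claude; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found ($("$cmd" --version 2>/dev/null | head -n 1))"
  else
    echo "$cmd: missing"
  fi
done
```

Any line that says `missing` points at a prerequisite you still need to install.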
## Setup

The setup takes about 10 minutes. You only need one Terminal window.

### Launch Claude Code with Venice

Replace `your-key` with your Venice API key and run the launch command. Setup starts Claude Code on an inexpensive open-source model (`zai-org-glm-5`) to keep setup costs low. After setup, type `/model` inside Claude Code to switch to `claude-sonnet-4-6` or `claude-opus-4-6` for best performance.
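The fork ships its own launch command; a typical pattern (hypothetical here, using Claude Code’s documented `ANTHROPIC_BASE_URL` override and the proxy port mentioned elsewhere in this README) looks like:

```shell
# Hypothetical launch: route Claude Code through the local Venice proxy.
# ANTHROPIC_BASE_URL is a documented Claude Code override; port 4001 is the
# proxy address described in the setup notes below.
export ANTHROPIC_BASE_URL="http://localhost:4001"
export ANTHROPIC_AUTH_TOKEN="your-key"   # your Venice API key
# then start Claude Code:
#   claude
```

Consult the fork’s own launch script for the exact invocation; the snippet above only illustrates the shape of the override.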
### Run the Setup Wizard

- Bootstrap — checks Node.js and dependencies
- Venice API key — validates and saves your key
- Channel choice — pick WhatsApp, Telegram, or both
- Container build — builds the Docker container (takes a few minutes first time)
- WhatsApp auth — scan QR code with your phone (if applicable)
- Telegram setup — send a message to your bot so it detects your chat
- Trigger word — prefix that activates the bot (default: `@Andy`)
- Mount directories — pick “No” for now (you can add file access later)
- Start services — NanoClaw and the Venice proxy both start as background services
  - NanoClaw — the bot itself
  - Venice proxy — a small local server (`localhost:4001`) that translates between Claude Code and Venice AI
## Start Chatting

Message your bot using the `@Andy` prefix. You can now close the terminal window. Everything runs as background services and starts automatically when your computer boots.

## How It Works
There are two layers to NanoClaw:

| Layer | What It Does |
|---|---|
| Claude Code CLI | Admin tool for setup, debugging, and customization |
| The Bot | AI in your chat, running inside an isolated Docker container |
Use Claude Code in the project folder whenever you want to run `/setup`, `/debug`, `/customize`, or make changes to the bot’s behavior.
## Models
| Context | Default Model | How to Switch |
|---|---|---|
| Bot (in chat) | claude-sonnet-4-6 | Tell the bot: “switch to opus” or “use zai-org-glm-5” |
| Claude Code CLI | zai-org-glm-5 (GLM 5) | Use /model in Claude Code or claude --model <name> |
## Troubleshooting
### The proxy isn't running
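A quick way to check whether the proxy is up (it listens on `localhost:4001` by default):

```shell
# Prints a status line whether or not the proxy answers on port 4001.
# curl exits 0 on any HTTP response, non-zero if the connection is refused.
if curl -s -o /dev/null http://localhost:4001/; then
  echo "proxy reachable on localhost:4001"
else
  echo "proxy NOT reachable on localhost:4001"
fi
```

If it is not reachable, restart it with the proxy commands under “Manually start/stop the bot” in the Advanced section.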
### Claude Code shows 403 error or 'Please run /login'

- Check the proxy is running. See the troubleshooting step above.
- Make sure you’re in the right folder. Always `cd nanoclaw-venice` first.
- Start fresh: close all terminals, open a new one, and relaunch Claude Code through the proxy (see “Launch Claude Code with Venice” above).
### Model errors ('model does not exist')
### Bot doesn't respond to messages

- Check your trigger word. Make sure you’re using the right prefix (e.g., `@Andy hello`).
- Check Docker is running. Run `docker info` — if it errors, open Docker Desktop.
- Check the proxy is running. See “The proxy isn’t running” above.
- Check logs: `tail -f logs/nanoclaw.log` in the project folder.
- Check container logs. Open the `nanoclaw-venice/groups/main/logs/` folder and open the most recent file that starts with `container-`.
- Restart everything: restart both proxy and bot (see above).
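The checks above can be bundled into a single diagnostic run (a convenience sketch using the paths listed above; run it from the project folder):

```shell
# One-shot diagnostics for an unresponsive bot
docker info >/dev/null 2>&1 && echo "docker: running" || echo "docker: NOT running"
curl -s -o /dev/null http://localhost:4001/ && echo "proxy: reachable" || echo "proxy: NOT reachable"
if [ -f logs/nanoclaw.log ]; then
  echo "--- last 5 log lines ---"
  tail -n 5 logs/nanoclaw.log
else
  echo "logs/nanoclaw.log not found (cd into nanoclaw-venice first)"
fi
```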
### Container build fails during setup

Type `continue` in the wizard to retry.

### WhatsApp disconnected
Restart the bot, then scan the QR code again:

- macOS: `launchctl kickstart -k gui/$(id -u)/com.nanoclaw`
- Linux: `systemctl --user restart nanoclaw`
## Advanced
### Give the bot access to files on your computer

- During setup: When asked about directory access, choose “Yes”
- After setup: Run `/customize` in Claude Code
### Manually start/stop the bot
macOS:

| Action | Command |
|---|---|
| Start bot | launchctl load ~/Library/LaunchAgents/com.nanoclaw.plist |
| Stop bot | launchctl unload ~/Library/LaunchAgents/com.nanoclaw.plist |
| Restart bot | launchctl kickstart -k gui/$(id -u)/com.nanoclaw |
| Start proxy | launchctl load ~/Library/LaunchAgents/com.nanoclaw.venice-proxy.plist |
| Stop proxy | launchctl unload ~/Library/LaunchAgents/com.nanoclaw.venice-proxy.plist |
| Restart proxy | launchctl kickstart -k gui/$(id -u)/com.nanoclaw.venice-proxy |
Linux:

| Action | Command |
|---|---|
| Start bot | systemctl --user start nanoclaw |
| Stop bot | systemctl --user stop nanoclaw |
| Restart bot | systemctl --user restart nanoclaw |
| Start proxy | systemctl --user start nanoclaw-venice-proxy |
| Stop proxy | systemctl --user stop nanoclaw-venice-proxy |
| Restart proxy | systemctl --user restart nanoclaw-venice-proxy |
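The per-OS commands in the two tables can be wrapped in one helper that picks whichever service manager exists (a convenience sketch, not shipped with the project):

```shell
# restart_nanoclaw: restart both bot and proxy, using the commands from the
# tables above (systemd on Linux, launchd on macOS)
restart_nanoclaw() {
  if command -v systemctl >/dev/null 2>&1; then
    systemctl --user restart nanoclaw nanoclaw-venice-proxy
  else
    launchctl kickstart -k "gui/$(id -u)/com.nanoclaw"
    launchctl kickstart -k "gui/$(id -u)/com.nanoclaw.venice-proxy"
  fi
}
```

Drop it in your shell profile and run `restart_nanoclaw` after config changes.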
### Using Claude Code through Venice (no bot)
You don’t need to start the proxy yourself: if you ran `/setup`, it’s already running as a background service. Add helper commands to `~/.zshrc` (or `~/.bashrc`) so you can quickly switch any terminal to Venice. Then type `venice` in any terminal before running `claude` to use Venice, or `anthropic` to switch back.
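A minimal sketch of such helpers (assuming Claude Code’s documented `ANTHROPIC_BASE_URL` override and the proxy’s default port 4001; the fork’s actual snippet may differ):

```shell
# Hypothetical ~/.zshrc helpers; adjust to the fork's real snippet.
venice() {
  export ANTHROPIC_BASE_URL="http://localhost:4001"   # local Venice proxy
  echo "claude will now route through the Venice proxy"
}
anthropic() {
  unset ANTHROPIC_BASE_URL                            # back to the Anthropic API
  echo "claude will now talk to Anthropic directly"
}
```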
### Running multiple bots
### Developer commands
## Architecture
| File | Purpose |
|---|---|
| `proxy/venice-proxy.ts` | Translates Anthropic format to OpenAI format for Venice |
| `src/index.ts` | Main orchestrator — message loop, agent invocation |
| `src/channels/whatsapp.ts` | WhatsApp connection via baileys |
| `src/channels/telegram.ts` | Telegram bot via grammy |
| `src/container-runner.ts` | Spawns isolated agent containers |
## FAQ
### Why do I need a proxy?

Claude Code speaks Anthropic’s API format, while Venice speaks OpenAI’s. The small local proxy (`localhost:4001`) translates between the two.

### Can I use open-source models?

Yes. Venice serves GLM, Qwen, and other open-source models. Tell the bot to switch anytime (e.g., “use zai-org-glm-5”).

### Is it secure?

The bot runs inside an isolated Docker container. It can’t touch your system beyond the directories you explicitly mount.

### Do I need an Anthropic subscription?

No. Everything runs through Venice AI, pay-per-token, with no Anthropic account needed.

### Can I use this on a server?