Pay Per Token
No subscription. Pay only for what you use.
Private Inference
Zero data retention on Venice servers
Docker Isolation
Each chat runs in its own secure container
Why NanoClaw?
NanoClaw is a clean, minimal alternative to larger platforms like OpenClaw. It’s designed for one person running one bot.

| Feature | NanoClaw | OpenClaw |
|---|---|---|
| Codebase | ~2,000 lines | ~500,000 lines |
| Dependencies | ~15 packages | 70+ packages |
| Security | Docker container isolation | Application-level allowlists |
| Setup | One wizard, ~10 minutes | Manual multi-step config |
| Target user | One person, one bot | Multi-user platform |
What You Get
- Personal AI assistant on Telegram and/or WhatsApp
- Powered by Venice AI — no Anthropic account needed
- Bot runs in an isolated Docker container
- Model switching — tell the bot “switch to zai-org-glm-5” or “use opus” anytime
- Scheduled tasks — set reminders, recurring tasks
- Web search and browsing built in
- Markdown formatting in Telegram messages
Prerequisites
- Node.js 20+ (check with `node --version`)
- Docker (install and open once so it’s running)
- Claude Code CLI (check with `claude --version`)
- Venice API key (generate from your Venice account)
- Telegram bot token (if using Telegram):
  - Open Telegram and search for @BotFather
  - Send `/newbot` and follow the prompts
  - Save the token BotFather gives you (looks like `123456789:ABCdef...`)
Setup
The setup takes about 10 minutes. You’ll need two Terminal windows open side by side.

Start the Venice Proxy
The proxy translates between Claude Code and Venice AI, and it needs to stay running. Replace `your-key` with your Venice API key and start the proxy. You should see:

`Venice API proxy listening on http://localhost:4001`

Launch Claude Code
Open a second Terminal window (Cmd+N on macOS, Ctrl+Shift+N on Linux). Navigate to the project folder and start Claude Code with Venice routing. If prompted “Do you want to use this API key?” — select Yes.
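Concretely, the two steps might look like the following hedged sketch. The proxy start command and the `VENICE_API_KEY` variable name are assumptions (check the project’s package scripts); `ANTHROPIC_BASE_URL` is Claude Code’s standard endpoint override.

```shell
# Terminal 1: start the proxy (start command and env var name assumed)
cd nanoclaw-venice
VENICE_API_KEY=your-key npx tsx proxy/venice-proxy.ts
# Should print: Venice API proxy listening on http://localhost:4001

# Terminal 2: point Claude Code at the local proxy
cd nanoclaw-venice
ANTHROPIC_BASE_URL=http://localhost:4001 claude
```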
Run the Setup Wizard
In your Claude Code terminal, start the setup wizard. It walks you through:
- Bootstrap — checks Node.js and dependencies
- Venice API key — validates and saves your key
- Channel choice — pick WhatsApp, Telegram, or both
- Container build — builds the Docker container (takes a few minutes first time)
- WhatsApp auth — scan QR code with your phone (if applicable)
- Telegram setup — send a message to your bot so it detects your chat
- Trigger word — prefix that activates the bot (default: `@Andy`)
- Mount directories — pick “No” for now (you can add file access later)
- Start service — NanoClaw starts running in the background
If the wizard stops between steps, type “continue” or “next step” to nudge it forward.
How It Works
There are two layers to NanoClaw:

| Layer | What It Does |
|---|---|
| Claude Code CLI | Admin tool for setup, debugging, and customization |
| The Bot | AI in your chat, running inside an isolated Docker container |
Models
| Context | Default Model | How to Switch |
|---|---|---|
| Bot (in chat) | claude-sonnet-4-6 | Tell the bot: “switch to opus” or “use zai-org-glm-5” |
| Claude Code CLI | claude-opus-4-6 | Use /model in Claude Code |
Troubleshooting
The proxy crashed
The proxy can occasionally crash on connection errors. To restart it, run the proxy start command again in its Terminal window. For auto-restart, install pm2: once the proxy runs under pm2, it is kept alive and restarted automatically if it crashes.
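A hedged sketch of the pm2 setup — the proxy start command (`npx tsx proxy/venice-proxy.ts`) is an assumption; substitute whatever you use to start the proxy:

```shell
# Install pm2 globally
npm install -g pm2

# Run the proxy under pm2 (start command assumed; adjust as needed)
cd nanoclaw-venice
pm2 start npx --name venice-proxy -- tsx proxy/venice-proxy.ts

# Optional: persist the process list and re-create it on boot
pm2 save && pm2 startup
```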
| Action | Command |
|---|---|
| Check status | pm2 status |
| View logs | pm2 logs venice-proxy |
| Restart | pm2 restart venice-proxy |
| Stop | pm2 stop venice-proxy |
Claude Code shows 403 error or 'Please run /login'
This means Claude Code can’t connect to the Venice proxy.
- Check the proxy is running. Look at the Terminal window — it should show `Venice API proxy listening on http://localhost:4001`.
- Make sure you’re in the right folder. Always `cd nanoclaw-venice` first.
- Start fresh: close all terminals and restart both the proxy and Claude Code.
Model errors ('model does not exist')
Check that the model ID you requested matches a valid Venice model ID exactly (e.g. zai-org-glm-5). See the model catalog for the full list.
Bot doesn’t respond to messages
- Check your trigger word. Make sure you’re using the right prefix (e.g., `@Andy hello`).
- Check Docker is running. Run `docker info` — if it errors, open Docker Desktop.
- Check the proxy is running. It should show `Venice API proxy listening...`
- Check logs: `tail -f logs/nanoclaw.log` in the project folder.
- Restart everything: restart both proxy and bot (see above).
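The checks above can be run in one pass; a hedged sketch (log path taken from the steps above, proxy port assumed to be 4001):

```shell
# Quick health check for the bot's dependencies
docker info > /dev/null 2>&1 && echo "Docker: running" || echo "Docker: NOT running"
curl -s -o /dev/null http://localhost:4001 && echo "Proxy: reachable" || echo "Proxy: NOT reachable"
tail -n 20 logs/nanoclaw.log   # recent bot activity
```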
Container build fails during setup
Make sure Docker Desktop is open and running. Wait 10 seconds for Docker to fully start, then type `continue` in the wizard to retry.

WhatsApp disconnected
Your WhatsApp session can expire. To reconnect, scan the QR code with WhatsApp (Settings → Linked Devices → Link a Device), then restart the bot.
Advanced
Give the bot access to files on your computer
By default, the bot is completely walled off from your computer. To share folders:
- During setup: when asked about directory access, choose “Yes”
- After setup: run `/customize` in Claude Code
Manually start/stop the bot
The bot runs as a background service. To control it manually:

macOS:
| Action | Command |
|---|---|
| Start | launchctl load ~/Library/LaunchAgents/com.nanoclaw.plist |
| Stop | launchctl unload ~/Library/LaunchAgents/com.nanoclaw.plist |
| Restart | launchctl kickstart -k gui/$(id -u)/com.nanoclaw |

Linux:
| Action | Command |
|---|---|
| Start | systemctl --user start nanoclaw |
| Stop | systemctl --user stop nanoclaw |
| Restart | systemctl --user restart nanoclaw |
Using Claude Code through Venice (no bot)
If you just want Claude Code with Venice and don’t need WhatsApp/Telegram, run the Venice proxy in Terminal 1 and launch Claude Code in Terminal 2.
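A hedged sketch of the two terminals — the proxy start command and `VENICE_API_KEY` name are assumptions; `ANTHROPIC_BASE_URL` is Claude Code’s endpoint override:

```shell
# Terminal 1: the Venice proxy (start command assumed)
cd nanoclaw-venice
VENICE_API_KEY=your-key npx tsx proxy/venice-proxy.ts

# Terminal 2: Claude Code pointed at the local proxy
cd nanoclaw-venice
ANTHROPIC_BASE_URL=http://localhost:4001 claude
```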
Architecture
| File | Purpose |
|---|---|
| proxy/venice-proxy.ts | Translates Anthropic format to OpenAI format for Venice |
| src/index.ts | Main orchestrator — message loop, agent invocation |
| src/channels/whatsapp.ts | WhatsApp connection via baileys |
| src/channels/telegram.ts | Telegram bot via grammy |
| src/container-runner.ts | Spawns isolated agent containers |
FAQ
Why do I need a proxy?
The Claude Agent SDK speaks Anthropic’s message format. Venice speaks OpenAI’s format. The proxy translates between them.
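The core of that translation can be sketched in a few lines. This is illustrative only, not the real `proxy/venice-proxy.ts` (which also handles streaming, tool calls, and errors); the interfaces below are simplified stand-ins for the two wire formats.

```typescript
// Anthropic-style request: the system prompt is a top-level field.
interface AnthropicRequest {
  model: string;
  system?: string;
  messages: { role: "user" | "assistant"; content: string }[];
}

// OpenAI-style request: the system prompt is just the first message.
interface OpenAIRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
}

// Fold the top-level system field into the message list.
function toOpenAI(req: AnthropicRequest): OpenAIRequest {
  const messages: OpenAIRequest["messages"] = [];
  if (req.system) messages.push({ role: "system", content: req.system });
  return { model: req.model, messages: [...messages, ...req.messages] };
}
```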
Can I use open-source models?
Yes. Tell the bot “switch to zai-org-glm-5” or any Venice model ID. See the model catalog.
Is it secure?
Agents run in Docker containers with OS-level isolation. The Venice API key is passed via stdin, never written to disk inside containers. Each group gets its own isolated environment.
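The stdin technique can be illustrated with a hedged sketch. The image name `nanoclaw-agent` and the in-container reader are hypothetical; the point is that the key never appears in the `docker run` arguments, the environment, or a file:

```shell
# Pass a secret to a container via stdin (image name hypothetical)
# The container reads one line from stdin and confirms receipt
# without ever echoing the key itself.
echo "$VENICE_API_KEY" | docker run -i --rm nanoclaw-agent \
  sh -c 'read KEY && echo "key received (${#KEY} chars)"'
```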
Do I need an Anthropic subscription?
No. Everything runs through Venice AI. You only need a Venice API key.
Can I use this on a server?
Yes. It works on any Linux machine with Docker. Use the systemd service for auto-start on boot.