NanoClaw is a lightweight, self-hosted AI assistant that runs on WhatsApp and Telegram. This fork adds Venice AI support so everything runs privately without an Anthropic subscription.

Pay Per Token

No subscription. Pay only for what you use

Private Inference

Zero data retention on Venice servers

Docker Isolation

Each chat runs in its own secure container

Why Venice AI?

Venice is a privacy-first AI platform. They don’t store or log any prompts or responses on their servers — your conversations exist only on your device. Requests are encrypted end-to-end through their proxy to decentralized GPU providers, with zero data retention. This means your AI assistant conversations stay private, even from Venice themselves. Venice provides anonymized access to frontier models (Claude Opus, Claude Sonnet) and fully private access to open-source models (GLM, Qwen) through a single API — switch between them anytime.
|                      | Venice AI | Traditional AI providers |
|----------------------|-----------|--------------------------|
| Data retention       | None — zero logs | Yes |
| Prompt privacy       | Encrypted, never stored | Stored on provider servers |
| Open-source models   | Yes (GLM, Qwen, and others) | No |
| Frontier models      | Claude, GPT, and others — anonymously | Only through direct subscriptions |
| Pricing              | Pay-per-token, no subscription. Or stake DIEM for daily refreshing credits | $20–200/mo subscriptions or pay-per-token API |
| Uncensored inference | Yes (open-source models) | No |

Why NanoClaw?

NanoClaw is a clean, minimal alternative to larger platforms like OpenClaw. It’s designed for one person running one bot.
|                     | NanoClaw (Venice) | OpenClaw |
|---------------------|-------------------|----------|
| Codebase            | ~2,000 lines, handful of files | ~500,000 lines, 53 config files |
| Dependencies        | ~15 packages | 70+ packages |
| Security model      | OS-level Docker container isolation | Application-level allowlists and pairing codes |
| Per-group isolation | Each group gets its own container, filesystem, and memory | Shared process, shared memory |
| Setup               | One wizard (/setup), ~10 minutes | Manual multi-step configuration |
| AI provider         | Venice AI (private, no subscription) | Anthropic (requires API key or subscription) |
| Customization       | Edit the code directly — it’s small enough to read | Config files and plugins |
| Target user         | One person, one bot | Multi-user platform |

What You Get

  • Personal AI assistant on Telegram and/or WhatsApp
  • Powered by Venice AI — no Anthropic account needed
  • Bot runs in an isolated Docker container (sandboxed, can’t access your system)
  • Model switching — tell the bot “switch to zai-org-glm-5” or “use opus” anytime
  • Scheduled tasks — set reminders, recurring tasks
  • Web search and browsing built in
  • Markdown formatting in Telegram messages

Prerequisites

Node.js 20+

Check with node --version

Docker

Install and open once so it’s running

Claude Code CLI

Check with claude --version

Venice API Key

Generate from your Venice account
For Telegram (recommended for first-time users):
  1. Open Telegram and search for @BotFather
  2. Send /newbot and follow the prompts
  3. Save the token BotFather gives you (looks like 123456789:ABCdef...)
For WhatsApp — use a virtual number, NOT your personal one:

NanoClaw connects as a linked device on your WhatsApp number. That means the agent can see every message coming in and going out — all your personal conversations, group chats, photos, everything. Your phone still works normally, but the bot has full visibility into your entire WhatsApp account.

Use a virtual phone number instead. These apps give you a second number that you can dedicate entirely to the bot:
| App | Price | Notes |
|-----|-------|-------|
| Hushed | ~$5/mo | Reliable, works well for WhatsApp verification |
| Burner | ~$5/mo | Similar to Hushed, disposable numbers |
| Google Voice | Free | US-only, may not work for WhatsApp verification in all cases |
How to set it up:
  1. Get a virtual number from one of the apps above
  2. Install WhatsApp on a second device (old phone, tablet, or emulator) using that virtual number
  3. During NanoClaw setup, scan the QR code with that second device — not your personal phone

Setup

The setup takes about 10 minutes. You only need one Terminal window.
1

Clone and Install

Open Terminal and run:
git clone https://github.com/lorenzovenice/nanoclaw-venice.git
cd nanoclaw-venice
npm install
Wait for npm install to finish with no errors.
2

Launch Claude Code with Venice

Replace your-key with your Venice API key and run:
VENICE_API_KEY=your-key npm run venice
This starts the Venice proxy and launches Claude Code through it in a single command.
Claude Code defaults to GLM 5 (zai-org-glm-5) to keep setup costs low. After setup, type /model inside Claude Code to switch to claude-sonnet-4-6 or claude-opus-4-6 for best performance.
If prompted “Do you want to use this API key?” — select Yes.
3

Run the Setup Wizard

In your Claude Code terminal, type:
/setup
The wizard walks you through:
  1. Bootstrap — checks Node.js and dependencies
  2. Venice API key — validates and saves your key
  3. Channel choice — pick WhatsApp, Telegram, or both
  4. Container build — builds the Docker container (takes a few minutes first time)
  5. WhatsApp auth — scan QR code with your phone (if applicable)
  6. Telegram setup — send a message to your bot so it detects your chat
  7. Trigger word — prefix that activates the bot (default: @Andy)
  8. Mount directories — pick “No” for now (you can add file access later)
  9. Start services — NanoClaw and the Venice proxy both start as background services
The setup wizard installs two background services:
  • NanoClaw — the bot itself
  • Venice proxy — a small local server (localhost:4001) that translates between Claude Code and Venice AI
Both start automatically on boot and restart themselves if they crash.
If the wizard stops between steps, type “continue” or “next step” to nudge it forward.
4

Start Chatting

Once setup is complete, open your chat (Telegram or WhatsApp) and send:
@Andy hello, are you there?
The bot should respond within seconds. In your main channel, you can type normally without the @Andy prefix.

You can now close the terminal window. Everything runs as background services and starts automatically when your computer boots.
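For the curious, trigger matching can be pictured roughly like this. This is an illustrative sketch, not NanoClaw's actual code; `stripTrigger` is a hypothetical name:

```typescript
// Hypothetical sketch of trigger-word matching: "@Andy hello" activates the
// bot and "hello" is what the agent actually sees.
function stripTrigger(message: string, trigger: string): string | null {
  const trimmed = message.trim();
  // Case-insensitive prefix match, e.g. "@andy hi" also works.
  if (trimmed.toLowerCase().startsWith(trigger.toLowerCase())) {
    return trimmed.slice(trigger.length).trim();
  }
  return null; // no trigger word: the bot stays silent in group chats
}
```

In your main channel the prefix is optional, so every message would be passed through as-is.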

How It Works

There are two layers to NanoClaw:
| Layer | What It Does |
|-------|--------------|
| Claude Code CLI | Admin tool for setup, debugging, and customization |
| The Bot | AI in your chat, running inside an isolated Docker container |
To open Claude Code anytime:
cd nanoclaw-venice
ANTHROPIC_BASE_URL=http://localhost:4001 ANTHROPIC_API_KEY=venice-proxy claude
Use it to run /setup, /debug, /customize, or make changes to the bot’s behavior.

Models

| Context | Default Model | How to Switch |
|---------|---------------|---------------|
| Bot (in chat) | claude-sonnet-4-6 | Tell the bot: “switch to opus” or “use zai-org-glm-5” |
| Claude Code CLI | zai-org-glm-5 (GLM 5) | Use /model in Claude Code or claude --model <name> |
The CLI defaults to GLM 5 to keep setup costs low. After setup, switch to claude-sonnet-4-6 or claude-opus-4-6 for best performance.
See the model catalog for all available Venice models.
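Under the hood, a phrase like “use opus” has to resolve to a concrete Venice model ID. A minimal sketch of that kind of mapping, with hypothetical names (the real parsing in NanoClaw may differ):

```typescript
// Hypothetical alias table: short names users say in chat -> Venice model IDs.
const MODEL_ALIASES: Record<string, string> = {
  opus: "claude-opus-4-6",
  sonnet: "claude-sonnet-4-6",
  glm: "zai-org-glm-5",
};

// Resolve a chat request like "switch to opus" to a model ID, or null if
// nothing matches (the caller could then treat the text as a literal ID).
function resolveModel(phrase: string): string | null {
  const lower = phrase.toLowerCase();
  for (const [alias, id] of Object.entries(MODEL_ALIASES)) {
    if (lower.includes(alias)) return id;
  }
  return null;
}
```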

Troubleshooting

The Venice proxy runs as a background service and restarts itself automatically. If it’s not working:

macOS:
# Check if it's running
launchctl list | grep venice-proxy

# Restart it
launchctl kickstart -k gui/$(id -u)/com.nanoclaw.venice-proxy

# Check logs
tail -f ~/nanoclaw-venice/logs/venice-proxy.log
Linux:
# Check if it's running
systemctl --user status nanoclaw-venice-proxy

# Restart it
systemctl --user restart nanoclaw-venice-proxy

# Check logs
tail -f ~/nanoclaw-venice/logs/venice-proxy.log
If Claude Code fails to start or reports API errors, it usually can’t connect to the Venice proxy.
  1. Check the proxy is running. See the troubleshooting step above.
  2. Make sure you’re in the right folder. Always cd nanoclaw-venice first.
  3. Start fresh: Close all terminals and run:
    cd nanoclaw-venice
    ANTHROPIC_BASE_URL=http://localhost:4001 ANTHROPIC_API_KEY=venice-proxy claude
    
Restart the proxy and the bot:

macOS:
# Restart proxy
launchctl kickstart -k gui/$(id -u)/com.nanoclaw.venice-proxy

# Restart bot
launchctl kickstart -k gui/$(id -u)/com.nanoclaw
Linux:
# Restart proxy
systemctl --user restart nanoclaw-venice-proxy

# Restart bot
systemctl --user restart nanoclaw
Check available models at the model catalog.
Work through these steps in order:
  1. Check your trigger word. Make sure you’re using the right prefix (e.g., @Andy hello).
  2. Check Docker is running. Run docker info — if it errors, open Docker Desktop.
  3. Check the proxy is running. See “The proxy isn’t running” above.
  4. Check logs: tail -f logs/nanoclaw.log in the project folder.
  5. Check container logs. Open the nanoclaw-venice/groups/main/logs/ folder. Open the most recent file that starts with container-.
  6. Restart everything: Restart both proxy and bot (see above).
Make sure Docker Desktop is open and running. Wait 10 seconds for Docker to fully start, then type continue in the wizard to retry.
Your WhatsApp session can expire. To reconnect:
cd nanoclaw-venice
npm run auth
Scan the QR code with WhatsApp (Settings → Linked Devices → Link a Device), then restart the bot:
  • macOS: launchctl kickstart -k gui/$(id -u)/com.nanoclaw
  • Linux: systemctl --user restart nanoclaw

Advanced

By default, the bot is completely walled off from your computer — it can only see its own memory and conversation history. To grant it access to specific directories:
  • During setup: When asked about directory access, choose “Yes”
  • After setup: Run /customize in Claude Code
NanoClaw runs two background services that start automatically on boot.

macOS:
| Action | Command |
|--------|---------|
| Start bot | launchctl load ~/Library/LaunchAgents/com.nanoclaw.plist |
| Stop bot | launchctl unload ~/Library/LaunchAgents/com.nanoclaw.plist |
| Restart bot | launchctl kickstart -k gui/$(id -u)/com.nanoclaw |
| Start proxy | launchctl load ~/Library/LaunchAgents/com.nanoclaw.venice-proxy.plist |
| Stop proxy | launchctl unload ~/Library/LaunchAgents/com.nanoclaw.venice-proxy.plist |
| Restart proxy | launchctl kickstart -k gui/$(id -u)/com.nanoclaw.venice-proxy |
Linux:
| Action | Command |
|--------|---------|
| Start bot | systemctl --user start nanoclaw |
| Stop bot | systemctl --user stop nanoclaw |
| Restart bot | systemctl --user restart nanoclaw |
| Start proxy | systemctl --user start nanoclaw-venice-proxy |
| Stop proxy | systemctl --user stop nanoclaw-venice-proxy |
| Restart proxy | systemctl --user restart nanoclaw-venice-proxy |
If you just want Claude Code with Venice and don’t need WhatsApp/Telegram, only the proxy service needs to be running (if you’ve already run /setup, it already is). Then launch Claude Code through it:
cd nanoclaw-venice
ANTHROPIC_BASE_URL=http://localhost:4001 ANTHROPIC_API_KEY=venice-proxy claude
Tip: Add this to your ~/.zshrc (or ~/.bashrc) so you can quickly switch any terminal to Venice:
alias venice='export ANTHROPIC_BASE_URL=http://localhost:4001 && export ANTHROPIC_API_KEY=venice-proxy && echo "Using Venice API"'
alias anthropic='unset ANTHROPIC_BASE_URL && unset ANTHROPIC_API_KEY && echo "Using Anthropic API"'
Then just type venice in any terminal before running claude to use Venice, or anthropic to switch back.
You can run multiple NanoClaw bots on the same machine (e.g., one for personal use and one for a team). Just clone the repo into a different folder and run setup again. Note: they share the same Docker image, so rebuilding one affects all of them.
For people who want to modify NanoClaw’s code:
npm run dev          # Start proxy + NanoClaw with hot reload
npm run proxy        # Start just the Venice proxy
npm run build        # Compile TypeScript
npm test             # Run tests
./container/build.sh # Rebuild agent container

Architecture

You (WhatsApp/Telegram)
        ↓
NanoClaw (Node.js)
        ↓
Docker Container (isolated sandbox)
        ↓
Venice Proxy (localhost:4001)
        ↓
api.venice.ai (private inference)
| File | Purpose |
|------|---------|
| proxy/venice-proxy.ts | Translates Anthropic format to OpenAI format for Venice |
| src/index.ts | Main orchestrator — message loop, agent invocation |
| src/channels/whatsapp.ts | WhatsApp connection via baileys |
| src/channels/telegram.ts | Telegram bot via grammy |
| src/container-runner.ts | Spawns isolated agent containers |

FAQ

Why does NanoClaw need a proxy?
The Claude Agent SDK speaks Anthropic’s message format. Venice speaks OpenAI’s format. The proxy translates between them so everything works without modifying the SDK.
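The core of that translation can be sketched in a few lines. This is a simplified illustration, not the actual proxy/venice-proxy.ts (which also has to handle streaming, tool use, and more); the interface shapes are trimmed to the essentials:

```typescript
// Minimal shapes for the two request formats (illustrative subset only).
interface AnthropicRequest {
  model: string;
  system?: string; // Anthropic: system prompt is a top-level field
  messages: { role: "user" | "assistant"; content: string }[];
  max_tokens: number;
}

interface OpenAIRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  max_tokens: number;
}

// Translate an Anthropic-style request into an OpenAI-style one:
// the system prompt moves into the message list as the first entry.
function toOpenAI(req: AnthropicRequest): OpenAIRequest {
  const messages: OpenAIRequest["messages"] = [];
  if (req.system) messages.push({ role: "system", content: req.system });
  for (const m of req.messages) messages.push(m);
  return { model: req.model, messages, max_tokens: req.max_tokens };
}
```

The response travels the opposite direction: the proxy reshapes Venice’s OpenAI-style completion back into the Anthropic format the SDK expects.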
Can I use models other than Claude?
Yes. Venice hosts many models. Tell the bot “switch to zai-org-glm-5” or any Venice model ID. See the model catalog.
How is the bot isolated from my system?
Agents run in Docker containers with real OS-level isolation. The Venice API key is passed via stdin, never written to disk inside containers. Each group gets its own isolated environment.
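Passing a secret over stdin matters because command-line arguments and environment variables can leak (e.g. via `ps` or /proc). A self-contained sketch of the pattern, using `cat` as a stand-in for the container process (the function name is hypothetical, not NanoClaw's actual API):

```typescript
import { spawn } from "node:child_process";

// Run a command and hand it a secret via stdin, never via argv or env.
// Resolves with whatever the child writes to stdout.
function passSecretViaStdin(cmd: string, args: string[], secret: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args, { stdio: ["pipe", "pipe", "inherit"] });
    let out = "";
    child.stdout.on("data", (chunk) => (out += chunk));
    child.on("error", reject);
    child.on("close", () => resolve(out));
    child.stdin.write(secret); // the secret exists only in the pipe
    child.stdin.end();
  });
}
```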
Do I need an Anthropic account?
No. Everything runs through Venice AI. You only need a Venice API key.
Can I run this on a Linux server?
Yes. It works on any Linux machine with Docker. Use the systemd service for auto-start on boot.
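For reference, the setup wizard's user-level systemd service looks roughly like the sketch below. The unit contents and paths here are assumptions for illustration; /setup generates the real file:

```ini
# ~/.config/systemd/user/nanoclaw.service (illustrative sketch, paths assumed)
[Unit]
Description=NanoClaw bot
After=network-online.target

[Service]
WorkingDirectory=%h/nanoclaw-venice
ExecStart=/usr/bin/npm start
Restart=on-failure

[Install]
WantedBy=default.target
```

Once installed, it responds to the `systemctl --user start/stop/restart nanoclaw` commands shown in the Advanced section.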

Resources

NanoClaw Venice Repo

Source code and full README

Original NanoClaw

Upstream project by qwibitai

Venice Model Catalog

Browse available models

Venice Privacy

How Venice protects your data