Clawdbot is a powerful open-source AI assistant that lets you interact with AI through your favorite messaging platforms. In this guide, we’ll show you how to deploy Clawdbot on an Azure Ubuntu VM and connect it to Azure OpenAI models using LiteLLM as a translation layer.
What You’ll Build
By the end of this guide, you’ll have:
- A Telegram bot powered by Azure OpenAI that responds intelligently to your messages
- Secure, private deployment on Azure with no public ports exposed
- GitHub integration for repository management and code tasks
- Auto-starting services that survive VM reboots
- Enterprise-grade security with Docker sandboxing and token authentication
Why This Approach?
Clawdbot doesn’t natively support Azure OpenAI’s API format, which differs from the standard OpenAI API in its URL scheme, auth header, and api-version parameter. We use LiteLLM as a proxy to:
- Translate Clawdbot’s requests to Azure OpenAI format
- Handle parameter compatibility automatically
- Provide a single abstraction layer
This approach gives you the cost benefits of Azure’s pricing while keeping Clawdbot’s full feature set.
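To make the translation concrete, here is a sketch of how the same request is addressed on each side of the proxy (the resource and deployment names are placeholders, and the Azure URL shape is the standard deployments endpoint):

```shell
# Placeholders: substitute your own Azure resource and deployment names.
RESOURCE="my-openai-resource"
DEPLOYMENT="gpt-4o-mini"
API_VERSION="2024-10-21"

# What Clawdbot sends: plain OpenAI chat-completions format, to the local proxy.
echo "POST http://localhost:4000/v1/chat/completions  (model: ${DEPLOYMENT})"

# What LiteLLM forwards: Azure's deployment-scoped URL with an api-version query.
echo "POST https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=${API_VERSION}"
```

Clawdbot only ever sees the first shape; LiteLLM rewrites it into the second.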
Prerequisites
Before you start, you’ll need:
From Azure
- Active Azure subscription with Azure OpenAI resource deployed
- Azure OpenAI endpoint URL
- Azure OpenAI API key
- Deployed model names (e.g., gpt-4o-mini, gpt-4o)
- API version (e.g., 2024-10-21)
From Telegram
- A Telegram account
- A bot token (we’ll create this during setup)
Optional: GitHub Integration
- GitHub Personal Access Token with repo scope
Local Machine
- SSH client (built-in on macOS/Linux, use WSL or PuTTY on Windows)
- Terminal/Command prompt
Architecture Overview
Here’s how the components interact:
┌─────────────┐
│ Telegram │
│ User │
└──────┬──────┘
│
└──────────────┐
│
▼
┌───────────────────────┐
│ Azure Firewall (NSG) │
│ (Only SSH inbound) │
└───────┬───────────────┘
│
▼
┌──────────────────────┐
│ Ubuntu 22.04 VM │
│ (Private Network) │
│ │
│ ┌────────────────┐ │
│ │ Clawdbot │ │
│ │ Gateway │ │
│ │ (localhost) │ │
│ └────────┬───────┘ │
│ │ │
│ ┌────────▼───────┐ │
│ │ LiteLLM │ │
│ │ Proxy │ │
│ │ (localhost) │ │
│ └────────┬───────┘ │
│ │ │
│ (outbound)│ │
└───────────┼──────────┘
│
▼
┌──────────────────────┐
│ Azure OpenAI API │
│ (HTTPS outbound) │
└──────────────────────┘
Key Security Points:
- ✅ No public ports exposed (only SSH on port 22)
- ✅ Gateway bound to localhost only
- ✅ All outbound connections use HTTPS
- ✅ Token authentication required
- ✅ Secure user approval for messaging
Step 1: Create Azure Ubuntu 22.04 VM
Option A: Azure Portal (GUI)
- Go to Azure Portal → Virtual Machines → Create
- Basics tab:
- Resource group: Create new or select existing
- VM name: clawdbot-vm
- Region: Same as your Azure OpenAI resource
- Image: Ubuntu Server 22.04 LTS
- Size: Standard_B2s (2 vCPUs, 4 GB RAM) – minimum recommended
- Authentication type: SSH public key
- Username: clawdubuntu
- Networking tab:
- Virtual network: Default is fine
- Public inbound ports: SSH (22) only
- Optional: Use private IP only for maximum security
- Management tab:
- Enable auto-shutdown: Yes (saves costs if forgotten)
- Review + Create → Create
Option B: Azure CLI
az vm create \
--resource-group your-resource-group \
--name clawdbot-vm \
--image Ubuntu2204 \
--size Standard_B2s \
--admin-username clawdubuntu \
--generate-ssh-keys \
--nsg-rule NONE
Access Your VM
ssh clawdubuntu@YOUR_VM_IP
Replace YOUR_VM_IP with your VM’s public or private IP address. If using private IP, ensure you’re accessing through Azure Bastion or VPN.
Step 2: Prepare the System
Update and Install Dependencies
sudo apt update && sudo apt upgrade -y
sudo apt install -y build-essential curl wget git software-properties-common screen
Add Swap Space
On a small VM like the 4 GB Standard_B2s, swap prevents out-of-memory errors during package installs and busy sessions:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Verify:
free -h
You should see approximately 2GB in the swap row.
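If you'd rather script the check than eyeball free -h, the same number can be read from /proc/meminfo (a small convenience sketch; SwapTotal is reported in kB):

```shell
# 2 GB = 2*1024*1024 kB; prints a warning if the swap total falls short.
awk '/SwapTotal/ {
  if ($2 >= 2*1024*1024) print "swap ok: " $2 " kB"
  else print "swap smaller than 2 GB: " $2 " kB"
}' /proc/meminfo
```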
Step 3: Install Node.js 22
Clawdbot requires Node.js 22 or higher.
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs
Verify installation:
node --version # Should show v22.x.x or higher
npm --version
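In provisioning scripts it helps to gate on the major version instead of reading it by eye; a sketch:

```shell
# Extract the major version from `node --version` (e.g. v22.11.0 -> 22).
MAJOR=$(node --version 2>/dev/null | sed 's/^v\([0-9]*\).*/\1/')
if [ "${MAJOR:-0}" -ge 22 ]; then
  echo "node ok (major version ${MAJOR})"
else
  echo "node missing or older than 22; rerun the install step above"
fi
```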
Step 4: Install Docker
Docker provides sandboxing for code execution, a critical security feature.
sudo apt update
sudo apt install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker $USER
Important: Log out and back in for Docker group permissions to take effect:
exit
Then SSH back in:
ssh clawdubuntu@YOUR_VM_IP
Verify Docker:
docker --version
docker run hello-world
Step 5: Install Clawdbot
sudo npm install -g clawdbot@latest
Update PATH if needed:
export PATH="$(npm prefix -g)/bin:$PATH"
echo 'export PATH="$(npm prefix -g)/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
Verify:
clawdbot --version
Create required directories:
mkdir -p ~/.clawdbot ~/clawd
chmod 700 ~/.clawdbot
Step 6: Set Up LiteLLM Proxy
LiteLLM acts as a translation layer between Clawdbot and Azure OpenAI.
Create Python Virtual Environment
python3 -m venv ~/litellm-venv
source ~/litellm-venv/bin/activate
Install LiteLLM
pip install 'litellm[proxy]'
Create LiteLLM Configuration
nano ~/litellm_config.yaml
Paste this configuration (replace placeholders with your actual values):
model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: azure/YOUR_DEPLOYMENT_NAME
      api_base: https://YOUR_RESOURCE_NAME.openai.azure.com
      api_key: your-azure-openai-api-key-here
      api_version: "2024-10-21"
  - model_name: gpt-4o
    litellm_params:
      model: azure/YOUR_GPT4O_DEPLOYMENT_NAME
      api_base: https://YOUR_RESOURCE_NAME.openai.azure.com
      api_key: your-azure-openai-api-key-here
      api_version: "2024-10-21"

general_settings:
  master_key: "generate-a-random-secret-string-here"

litellm_settings:
  drop_params: true
Sensitive Information Notes:
- api_key: Use the API key from the Azure OpenAI “Keys and Endpoint” section
- api_base: Use the endpoint URL from the same section
- master_key: Generate a secure random string with openssl rand -hex 32
- drop_params: Must be true to prevent Azure compatibility errors
Save (Ctrl+O, Enter, Ctrl+X).
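As a convenience, you can generate the key and print the ready-to-paste YAML line in one step (the sk- prefix is a common LiteLLM convention, not a requirement; treat that as an assumption):

```shell
# Generate a 64-hex-char secret with a conventional "sk-" prefix and print
# the line to paste under general_settings in litellm_config.yaml.
MASTER_KEY="sk-$(openssl rand -hex 32)"
echo "  master_key: \"${MASTER_KEY}\""
```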
Test LiteLLM
source ~/litellm-venv/bin/activate
litellm --config ~/litellm_config.yaml --port 4000
You should see:
INFO: Started server on http://0.0.0.0:4000
In a new SSH session, test the connection:
curl http://localhost:4000/health
Should return a JSON response. Press Ctrl+C to stop LiteLLM.
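If the health check passes, you can also push one real completion through the proxy in plain OpenAI format (a sketch: run it on the VM while LiteLLM is up, and substitute the master_key you set in litellm_config.yaml):

```shell
# One chat completion through LiteLLM; the Azure translation happens behind
# the proxy. Falls through with a message if nothing is listening on 4000.
MASTER_KEY="${MASTER_KEY:-sk-your-litellm-master-key}"
curl -s --max-time 10 http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer ${MASTER_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Say hello in one word"}]}' \
  || echo "proxy not reachable on localhost:4000"
```

A JSON response with a choices array means the full Clawdbot → LiteLLM → Azure path will work.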
Step 7: Configure Clawdbot
Create Configuration
nano ~/.clawdbot/clawdbot.json
Paste this configuration:
{
"messages": {
"ackReactionScope": "group-mentions"
},
"agents": {
"defaults": {
"maxConcurrent": 4,
"subagents": {
"maxConcurrent": 8
},
"compaction": {
"mode": "safeguard"
},
"workspace": "/home/clawdubuntu/clawd",
"model": {
"primary": "litellm/gpt-4o-mini"
},
"sandbox": {
"mode": "all",
"scope": "session",
"workspaceAccess": "ro"
}
}
},
"gateway": {
"mode": "local",
"auth": {
"mode": "token",
"token": "REPLACE_WITH_RANDOM_TOKEN"
},
"remote": {
"token": "REPLACE_WITH_RANDOM_TOKEN"
},
"port": 18789,
"bind": "loopback",
"tailscale": {
"mode": "off",
"resetOnExit": false
}
},
"auth": {
"profiles": {}
},
"models": {
"mode": "merge",
"providers": {
"litellm": {
"baseUrl": "http://localhost:4000/v1",
"apiKey": "YOUR_LITELLM_MASTER_KEY",
"api": "openai-completions",
"models": [
{
"id": "gpt-4o-mini",
"name": "GPT-4o Mini (Azure)",
"reasoning": false,
"input": ["text", "image"],
"contextWindow": 128000,
"maxTokens": 16384
},
{
"id": "gpt-4o",
"name": "GPT-4o (Azure)",
"reasoning": false,
"input": ["text", "image"],
"contextWindow": 128000,
"maxTokens": 16384
}
]
}
}
},
"plugins": {
"entries": {
"telegram": {
"enabled": true
}
}
},
"channels": {
"telegram": {
"enabled": true,
"botToken": "YOUR_TELEGRAM_BOT_TOKEN",
"dmPolicy": "pairing",
"groups": {
"*": {
"requireMention": true
}
}
}
},
"skills": {
"install": {
"nodeManager": "npm"
},
"entries": {}
},
"hooks": {
"internal": {
"enabled": true,
"entries": {
"boot-md": {
"enabled": true
},
"command-logger": {
"enabled": true
},
"session-memory": {
"enabled": true
}
}
}
}
}
Generate Security Tokens
Before saving, generate random tokens:
# Generate gateway token
openssl rand -hex 32
Copy the output and replace both instances of REPLACE_WITH_RANDOM_TOKEN in the config.
Then replace:
- YOUR_LITELLM_MASTER_KEY with the master_key from your LiteLLM config
- YOUR_TELEGRAM_BOT_TOKEN – we’ll get this next
Secure the Configuration
chmod 600 ~/.clawdbot/clawdbot.json
Step 8: Set Up Telegram Bot
Create a Telegram Bot
- Open Telegram and search for @BotFather
- Send /newbot
- Follow the prompts:
  - Bot name: My Clawdbot (or your preferred name)
  - Bot username: Must end with _bot (e.g., MyClawdbot_bot)
- Copy the bot token (format: 1234567890:ABCDEFGhijklmnopqrstuvwxyz-1234567890)
Add Telegram Token to Config
nano ~/.clawdbot/clawdbot.json
Find the line:
"botToken": "YOUR_TELEGRAM_BOT_TOKEN",
Replace with your actual token.
Save and exit.
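Hand-editing JSON makes it easy to leave a trailing comma behind; a quick parse check before restarting anything catches that (python3 ships with Ubuntu 22.04):

```shell
# Succeeds quietly if the file is valid JSON; reports otherwise.
python3 -m json.tool ~/.clawdbot/clawdbot.json > /dev/null \
  && echo "clawdbot.json parses cleanly" \
  || echo "clawdbot.json is missing or has a JSON syntax error"
```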
Step 9: Configure Security
Set Up Firewall
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw enable
Verify:
sudo ufw status verbose
Important: Never open port 18789. The gateway is intentionally localhost-only.
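To double-check that the gateway isn't reachable from outside, you can list listeners on port 18789 and flag any that aren't bound to loopback (ss ships with iproute2 on Ubuntu; this is a convenience sketch, not from the Clawdbot docs):

```shell
# Field 4 of `ss -tln` is the local address:port of each TCP listener.
ss -tln | awk '
  $4 ~ /:18789$/ && $4 !~ /^127\.0\.0\.1:/ && $4 !~ /^\[::1\]:/ {
    exposed = 1; print "WARNING: 18789 listening on", $4
  }
  END { if (!exposed) print "ok: port 18789 is loopback-only (or not up yet)" }'
```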
Security Features Enabled
Your configuration includes:
- ✅ Gateway on localhost only – Not accessible from network
- ✅ Token authentication – Requires valid token for access
- ✅ Telegram pairing mode – Manual approval for new users
- ✅ Group mention requirement – Bot only responds when mentioned in groups
- ✅ Docker sandboxing – Code runs in isolated containers
- ✅ Read-only workspace – Limited file system access
Step 10: Create Startup Scripts
Managing services manually is tedious. Let’s create scripts to start everything automatically.
Create Start Script
nano ~/start-clawdbot.sh
Paste:
#!/bin/bash
echo "Starting Clawdbot stack..."
# Start LiteLLM
echo "Starting LiteLLM proxy..."
source /home/clawdubuntu/litellm-venv/bin/activate
screen -dmS litellm litellm --config /home/clawdubuntu/litellm_config.yaml --port 4000
# Wait for LiteLLM to initialize
echo "Waiting for LiteLLM to initialize..."
sleep 5
# Start Clawdbot Gateway
echo "Starting Clawdbot gateway..."
screen -dmS clawdbot clawdbot gateway
echo "Done! Services started."
echo ""
echo "Check status: screen -ls"
echo "View LiteLLM logs: screen -r litellm"
echo "View Clawdbot logs: screen -r clawdbot"
echo "Detach: Ctrl+A then D"
Make executable:
chmod +x ~/start-clawdbot.sh
Create Stop Script
nano ~/stop-clawdbot.sh
Paste:
#!/bin/bash
echo "Stopping Clawdbot stack..."
screen -S clawdbot -X quit 2>/dev/null
echo "Clawdbot stopped."
screen -S litellm -X quit 2>/dev/null
echo "LiteLLM stopped."
echo "Done!"
Make executable:
chmod +x ~/stop-clawdbot.sh
Auto-Start on VM Reboot
crontab -e
Choose nano if prompted. Add this line at the bottom:
@reboot sleep 30 && /home/clawdubuntu/start-clawdbot.sh
Save and exit. Now services will auto-start after VM reboots.
Step 11: Test Your Setup
Start Services
~/start-clawdbot.sh
Wait 10 seconds for initialization.
Check Status
screen -ls
You should see both litellm and clawdbot sessions.
clawdbot status
Look for:
- Gateway: reachable
- Model: litellm/gpt-4o-mini
- Telegram: ON and OK
View Logs
screen -r clawdbot
Press Ctrl+A then D to detach and leave running.
Test Telegram Bot
- Open Telegram and find your bot
- Send a message: Hello
- You’ll receive a pairing code (security feature)
Approve Yourself
Back on the VM:
clawdbot pairing list telegram
clawdbot pairing approve telegram YOUR_CODE
Verify It Works
Send another message to your bot. It should respond with an AI-generated message powered by Azure OpenAI!
Try:
What is the capital of France?
The bot should respond correctly using your Azure GPT model.
Step 12: Remote Access to Control Panel
The Clawdbot control panel runs on localhost for security. Here are two ways to access it:
Option A: SSH Port Forwarding
From your local machine (not the VM):
ssh -L 18789:127.0.0.1:18789 clawdubuntu@YOUR_VM_IP
Keep this connection open, then browse to:
http://localhost:18789
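If you tunnel in often, a host entry in ~/.ssh/config saves retyping the forward (the clawdbot-vm alias is just a suggestion; substitute your real IP):

```
Host clawdbot-vm
    HostName YOUR_VM_IP
    User clawdubuntu
    LocalForward 18789 127.0.0.1:18789
```

Then ssh clawdbot-vm opens the tunnel and http://localhost:18789 works as above.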
Option B: Tailscale (Recommended for Production)
For persistent secure access:
# On the VM
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
# Edit config to enable Tailscale serve
nano ~/.clawdbot/clawdbot.json
Change:
"tailscale": {
"mode": "serve",
"resetOnExit": false
}
Restart:
~/stop-clawdbot.sh && ~/start-clawdbot.sh
Then get your Tailscale URL from:
clawdbot status
Step 13: Add GitHub Integration (Optional)
Create GitHub Token
- Go to: https://github.com/settings/tokens
- Generate new token (classic)
- Select scopes: repo, read:org, workflow, gist, read:user, user:email
- Copy the token
Configure in Clawdbot
clawdbot auth add github
Follow the prompts to enter your token. Then restart:
~/stop-clawdbot.sh && ~/start-clawdbot.sh
Now your bot can interact with GitHub repos, create issues, review code, and more!
Maintenance and Operations
Daily Checks
# Check everything is running
screen -ls
# Check Clawdbot status
clawdbot status
# View recent activity
clawdbot logs
Restart Services
~/stop-clawdbot.sh
~/start-clawdbot.sh
Update Clawdbot
sudo npm install -g clawdbot@latest
~/stop-clawdbot.sh && ~/start-clawdbot.sh
Update LiteLLM
source ~/litellm-venv/bin/activate
pip install --upgrade 'litellm[proxy]'
~/stop-clawdbot.sh && ~/start-clawdbot.sh
Backup Configuration
cp ~/.clawdbot/clawdbot.json ~/clawdbot-backup-$(date +%Y%m%d).json
cp ~/litellm_config.yaml ~/litellm-backup-$(date +%Y%m%d).yaml
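Dated copies accumulate; if you adopt the naming scheme above, a small prune (sketch) keeps only the last week's backups:

```shell
# Delete backup copies older than 7 days; -print shows what was removed.
find ~ -maxdepth 1 -name 'clawdbot-backup-*.json' -mtime +7 -print -delete
find ~ -maxdepth 1 -name 'litellm-backup-*.yaml' -mtime +7 -print -delete
```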
Troubleshooting
Services Won’t Start
Check logs:
screen -r clawdbot
screen -r litellm
Restart everything:
~/stop-clawdbot.sh
sleep 5
~/start-clawdbot.sh
Bot Not Responding
Verify services are running:
clawdbot status
Check if LiteLLM is accessible:
curl http://localhost:4000/health
Verify Azure credentials:
- Double-check API key is correct
- Verify deployment names match your Azure setup
- Check API version is current
Out of Memory Errors
Check available memory:
free -h
Check swap is active:
swapon --show
Increase swap if needed:
sudo fallocate -l 4G /swapfile-extra
sudo chmod 600 /swapfile-extra
sudo mkswap /swapfile-extra
sudo swapon /swapfile-extra
Add a matching /etc/fstab line (as in Step 2) if you want the extra swap to persist across reboots.
Docker Issues
Verify Docker is running:
sudo systemctl status docker
docker ps
Check user is in docker group:
groups
Should include docker. If not, log out and back in.
Security Best Practices
API Keys and Tokens
- ✅ Never commit secrets to git
- ✅ Rotate tokens every 90 days
- ✅ Use different tokens for different environments
- ✅ Keep ~/.clawdbot/ with chmod 700 permissions
- ✅ Keep clawdbot.json with chmod 600 permissions
Network Security
- ✅ Gateway bound to localhost only
- ✅ Firewall blocks all inbound except SSH
- ✅ Use SSH keys instead of passwords
- ✅ Disable SSH password authentication
- ✅ Monitor firewall logs regularly
Azure OpenAI
- ✅ Regenerate API keys if compromised
- ✅ Monitor Azure usage for unexpected charges
- ✅ Set spending limits on Azure subscription
- ✅ Use Azure role-based access control (RBAC)
Clawdbot Specific
- ✅ Use pairing mode for Telegram (enabled by default)
- ✅ Require mentions in group chats
- ✅ Docker sandboxing enabled (prevents system access)
- ✅ Read-only workspace access
- ✅ Regular security audits: clawdbot security audit
Cost Optimization
VM Sizing
- Standard_B2s (2 vCPUs, 4 GB RAM) is sufficient for light/moderate usage
- Standard_B4ms (4 vCPUs, 16 GB RAM) for heavy usage
- Consider spot instances for non-production workloads (saves 70%+)
Azure OpenAI
- gpt-4o-mini is much cheaper than gpt-4o
- Use gpt-4o only for complex tasks
- Monitor token usage in Azure Portal
- Set spending alerts
VM Costs
- Enable auto-shutdown to avoid forgotten instances running 24/7
- Use Azure Reserved Instances for permanent deployments
- Run during business hours only if not needed 24/7
What’s Next?
Enhance Your Bot
- Add Discord/Slack integration via clawdbot onboard
- Install skills for specialized tasks
- Create custom skills for your specific workflows
- Set up GitHub integration for code-related tasks
- Configure browser automation for web scraping
Scale Your Deployment
- Multiple bots for different teams/purposes
- Load balancing for high-traffic scenarios
- Container deployment on Azure Container Instances
- Kubernetes for enterprise-scale deployments
Monitor and Alert
- Azure Monitor for VM metrics
- Application Insights for Clawdbot analytics
- Log Analytics for centralized logging
- Alert Rules for anomalies
Conclusion
You now have a production-ready Clawdbot deployment on Azure with:
✅ Azure OpenAI integration
✅ Secure private network architecture
✅ Telegram bot with user approval workflow
✅ Docker sandboxing for code execution
✅ Auto-starting services
✅ Easy management scripts
✅ Optional GitHub integration
The architecture is secure by default, with no public endpoints and all API calls using HTTPS. Your secrets are protected, and the bot operates safely within Docker containers.
Next steps:
- Test the bot thoroughly with your team
- Configure additional integrations (Discord, Slack, GitHub)
- Set up monitoring and alerts
- Create custom skills for your specific use cases
- Document your workflows and bot capabilities
Threat model (and some honest gaps)
It’s easy to treat Clawdbot as “just a chat UI on a VM,” but in reality it’s a long‑running agent that can see your data and act on your behalf. If you deploy it the way I describe here without extra hardening, you should assume some important gaps still exist.
- What’s really at risk:
- All conversations that pass through Clawdbot, including sensitive prompts, source code, access tokens pasted into chat, and any files users upload.
- Any systems Clawdbot can reach via tools (file system, internal HTTP APIs, Slack/Teams, GitHub, webhooks). If the bot can talk to it, an attacker who gets control of the bot probably can too.
- Azure OpenAI keys and other secrets that may live in config files, environment variables, logs, or shell history.
- Where attackers come in:
- Publicly exposed ports for the Clawdbot UI or API, especially if you follow a “just open the port and try it out” approach without IP restrictions or a reverse proxy.
- Weak or reused tokens/passwords for the dashboard, stored in plain text or copied into chats and screenshots.
- Prompt‑injection and “curious” users in your own workspace who can trick the model into leaking data or calling tools in ways I did not fully lock down in this walkthrough.
- A compromised or unpatched VM: if someone gets shell access, they effectively own the bot and everything it can access.
- Realistic failure modes with this setup:
- Logs or conversation history quietly accumulate sensitive data on the VM; if the disk or machine is compromised, that data is gone.
- Someone discovers or guesses your bot URL/token and starts abusing tools or burning your Azure OpenAI quota.
- A prompt‑injection attack convinces the model to exfiltrate snippets of secrets, internal URLs, or file contents via chat.
- Config files or scripts with embedded API keys end up in a Git repo, backup, or screenshot and get reused in other environments.
- What this guide does not fully solve:
- It does not turn Clawdbot into a locked‑down, enterprise‑grade, multi‑tenant service by itself.
- It does not guarantee safe tool usage; if you wire powerful tools (filesystem, shell, broad webhooks) into the assistant, you are expanding the blast radius beyond what I show here.
- It does not replace proper security reviews, pen‑testing, or adherence to your organization’s cloud/security standards.
Container privilege escalation (pointed out by a friend)
In this walkthrough I’m running Clawdbot in Docker without doing much container hardening, which means the container still has more privileges than strictly necessary. In a production setup you should assume that if an attacker breaks out of the app, they might try to escalate to the host via the Docker runtime or mounted volumes. To reduce that risk, run the container as a non‑root user, drop all unnecessary Linux capabilities, avoid --privileged, use a read‑only filesystem where possible, and add --security-opt=no-new-privileges so the process cannot gain extra rights at runtime. Even with those safeguards, containers share the host kernel, so they are a mitigation, not a perfect isolation boundary. Thanks Sydney Muzoka for pointing it out!
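For reference, those hardening flags look like this on a docker run invocation (sketched against a generic Alpine image rather than Clawdbot's own sandbox containers, which Clawdbot manages itself):

```shell
if command -v docker >/dev/null 2>&1; then
  # Non-root user, no capabilities, no privilege escalation, read-only rootfs.
  docker run --rm \
    --user 1000:1000 \
    --cap-drop=ALL \
    --security-opt=no-new-privileges \
    --read-only \
    --tmpfs /tmp \
    alpine:3 id \
    || echo "docker run failed (daemon, permissions, or network unavailable)"
else
  echo "docker not available here; flags shown for reference"
fi
```

With --cap-drop=ALL and no-new-privileges, a process that escapes the application still lands in a container with no capabilities and no way to regain them via setuid binaries.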
The goal of calling this out explicitly is to avoid giving a false sense of security. Use this guide as a starting point to get Clawdbot working, but treat the running instance like any other critical app: lock down the network, protect secrets, limit tools, and assume that anything the bot can access might eventually be exposed if you don’t harden it further.
