15 Things NOT to Do with OpenClaw: Common Mistakes That Compromise Your Security

You've heard it before: an ounce of prevention beats a pound of cure. With OpenClaw, that's not just good advice—it's the difference between a secure, reliable system and a compromised one that looks fine until suddenly it isn't.
I've seen deployments that looked bulletproof until someone made one small mistake. A keystroke. A decision made at 2 AM. A shortcut that seemed harmless. Then the bad thing happened—not immediately, but inevitably. The security footgun was loaded, and it was only a matter of time before it fired.
This guide covers 15 mistakes I've watched people make with OpenClaw. Not the obscure edge cases—the common, easily-avoidable mistakes that show up again and again. The ones that feel reasonable at the time but have consequences you didn't anticipate.
Read this. Internalize it. Then make sure you're not doing any of these things.
Table of Contents
- 1. Running OpenClaw as root
- 2. Binding OpenClaw Gateway to 0.0.0.0:8000
- 3. Installing ClawHub Skills Without Reading the Code
- 4. Running OpenClaw on Your Personal Workstation
- 5. Using Weak or Default API Keys
- 6. Not Rotating API Keys
- 7. Sharing API Keys Between Team Members
- 8. Using the Same Models for Development and Production
- 9. Running on Your Personal Workstation Without Isolation
- 10. Not Using Environment-Variable Secrets Properly
- 11. Not Requiring Authentication on Your Gateway
- 12. Neglecting Audit Logs
- 13. Sharing Models or Model Artifacts in Unsecured Ways
- 14. Running Skills Without Sandboxing
- 15. Not Monitoring for Resource Exhaustion
- The Meta-Pattern: Security Debt
- The Personal Cost: Burnout and Regret
- Conclusion: The Pattern That Matters
- Related Resources
1. Running OpenClaw as root
The mistake: You're setting up Ollama and OpenClaw on your Linux server. You need it running quickly, so you use sudo or run the daemon as root. It works. Problem solved.
Why it's catastrophic if exploited: When you run a networked service as root, you've fundamentally inverted your security model. You're not protecting the system from the service—you've made the service the keys to everything. Think about the privilege ladder: normally, an attacker needs to break into your service AND then escalate privileges to reach sensitive data. When you run as root, they only need to break in. Escalation is automatic.
Here's the hidden layer most people miss: root access isn't just dangerous because of what an attacker can do—it's dangerous because of when it happens. Model loading, request processing, API parsing—these are the attack surface. Every single one of these operations runs as root. A buffer overflow in model deserialization? Root compromise. A logic error in an inference request handler? Root compromise. An attacker doesn't need a sophisticated exploit. Any bug becomes game-over.
An attacker who exploits a buffer overflow, injection vulnerability, or escalation bug in Ollama suddenly has god-mode access:
- Install rootkits that persist across reboots (you can't unsee this)
- Exfiltrate your SSH keys and pivot to internal servers
- Modify system files to create permanent backdoors
- Access environment variables with database credentials, API keys, everything
- Listen on any port (becomes a new attack surface)
- Read private files anywhere on disk (your entire codebase, customer data)
- Kill or modify any other process (kill logging, kill monitoring)
- Access databases directly (bypass all application logic)
- Mount network filesystems as root and exfiltrate data
- Change system clock, network routing, firewall rules (erase forensic traces)
Real story: I've seen a single vulnerability in a model loading library turn into full system compromise. Not because the library was sophisticated—because it was running as root. The attacker read ~/.ssh/id_rsa, SSH'd to three internal servers, and installed a cryptocurrency miner. The deployment was compromised for six months before someone noticed the unusual GPU activity. Why six months? Because running as root meant the attacker had already killed the monitoring process.
How to detect if you're vulnerable: Run this and check the output:
ps aux | grep ollama
# If you see "root" in the second column, you're vulnerableOr more specifically:
ps -u root | grep -E "openclaw|ollama"
# Empty output = you're safe
# Any output = you're running as root
The fix: Run OpenClaw and Ollama as a dedicated unprivileged user. Here's why this works: you create a privilege boundary. Even if the attacker breaks into the service, they're still inside a sandbox where they can't escalate without additional vulnerabilities. You've turned one problem (compromised service) into two problems (compromised service + need to escalate). Most attackers move on.
# Create a dedicated user (once, during setup)
sudo useradd -r -s /bin/false openclaw
# Give that user only what it needs
sudo chown -R openclaw:openclaw /var/lib/ollama
sudo chown -R openclaw:openclaw /opt/openclaw
# Give the user permission to use GPU if needed
sudo usermod -a -G video openclaw
# Run the daemon as that user in systemd
[Service]
User=openclaw
Group=openclaw
ExecStart=/usr/bin/ollama serve
# Security: drop capabilities (remove even kernel-level privileges)
CapabilityBoundingSet=~CAP_SYS_ADMIN CAP_SYS_MODULE CAP_SETUID
NoNewPrivileges=true
Now even if Ollama is compromised, the attacker is contained:
- Can't escalate to root (NoNewPrivileges prevents it)
- Can't load kernel modules (CAP_SYS_MODULE dropped)
- Can't use elevated system calls (other capabilities stripped)
- Can't access files owned by other users or root
- Can't listen on ports below 1024 (requires root)
- Can't install rootkits (can't write to system areas)
- Can't pivot to other services running as root
Verify the fix:
# After setting up the user, verify the process runs as openclaw
ps aux | grep ollama
# Should show: openclaw 1234 0.5 ...
# Verify the user can't escalate
sudo -l -U openclaw
# Should show: not allowed
Cost: Nearly zero. You lose nothing by doing this right. The only operational difference is that OpenClaw can't do system-wide privileged operations (which it shouldn't be doing anyway).
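The detection and verification steps above can be bundled into one reusable check. This is a sketch, not an OpenClaw feature: the process name (`ollama`) and expected user (`openclaw`) are assumptions matching the setup in this section.

```shell
# One-shot audit helper: checks whether a named process is running as the
# expected unprivileged user. Prints "ok", "not-running", or the offending user.
audit_service_user() {
  name="$1"   # process command name, e.g. "ollama"
  want="$2"   # expected user, e.g. "openclaw"
  user=$(ps -eo user,comm | awk -v n="$name" '$2 == n {print $1; exit}')
  if [ -z "$user" ]; then
    echo "not-running"
  elif [ "$user" = "$want" ]; then
    echo "ok"
  else
    echo "RUNNING-AS-$user"   # anything other than "ok" deserves a look
  fi
}
```

Run `audit_service_user ollama openclaw` from a cron job and alert on anything other than `ok`.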
2. Binding OpenClaw Gateway to 0.0.0.0:8000
The mistake: You want your OpenClaw Gateway accessible from other machines on your network, so you set it to listen on all interfaces:
openclaw-gateway --bind 0.0.0.0:8000
Or in your config:
gateway:
  host: 0.0.0.0
  port: 8000
It works. Your laptops can reach it. Convenient. You feel productive.
Why it's a problem: Unless you're behind a VPN or a restrictive firewall, you've just exposed your LLM inference engine to the entire internet (or at least, to anyone on your network segment who can route to your machine). But here's the hidden layer: exposure isn't just about the inference engine. When you bind to 0.0.0.0, you're advertising a network service that anyone who can reach it is free to probe and use. That's reconnaissance gold. An attacker finds your gateway, fingerprints it, and uses it to map your infrastructure. What else is running? What ports respond? What patterns emerge from the inference logs?
Your gateway becomes a stepping stone. It's not just about them using your GPU—it's about them using your exposed service as a launchpad to find other vulnerabilities.
An attacker who discovers your 0.0.0.0 binding can:
- Run inference on your GPU for free (crypto mining, scaling their own business, training their own models)—easy money
- Poison your model cache with malicious inputs that affect legitimate users
- Extract your models if they're not read-only (proprietary IP is gone)
- Launch DoS attacks that consume compute and drive up your cloud bill ($4K in an hour isn't rare)
- Exploit OpenClaw vulnerabilities they discover through probing and pivot into your network
- Enumerate your internal IP ranges and services (now they know what else to attack)
- Use your gateway as a proxy to hide their own traffic (now you're blamed for attacks they launch)
In cloud environments, this is how supply chain attacks start. Someone finds your exposed inference engine, uses it to probe your infrastructure, finds a database service, and goes to town. The breach investigators later ask: "Why was the inference gateway on 0.0.0.0?" And you don't have a good answer.
How to detect if you're exposed:
# Check what your gateway is listening on
netstat -tlnp | grep openclaw
# or
ss -tlnp | grep openclaw
# If you see 0.0.0.0:8000, you're exposed
# Check if it's reachable from outside your network
nmap -p 8000 your-public-ip
# If the port is open, anyone can reach it
Real-world incident: A team at a startup deployed their OpenClaw gateway on 0.0.0.0 and forgot about it. Within three hours, an attacker found it through automated scanning and started using their GPU to mine cryptocurrency. Infrastructure costs spiked by $4,000 in a month. The attacker also probed internal services, found an unpatched database, and stole customer data. The fix? A 30-second configuration change.
The fix: Use three layers of defense. First, bind to localhost only—this means only local processes can access it:
gateway:
  host: 127.0.0.1
  port: 8000
Second, for remote access, use an SSH tunnel instead of exposing the port. This encrypts traffic AND requires SSH credentials:
# On your local machine
ssh -L 8000:127.0.0.1:8000 user@openclaw-server
# Now access http://localhost:8000 locally
# The connection is encrypted and authenticated through SSH
# Anyone accessing it must have SSH access to the server
Third, if you must expose globally (rare, but happens), require authentication. Set up a reverse proxy with API key validation:
upstream openclaw {
    server 127.0.0.1:8000;
}
server {
    listen 8443 ssl;  # HTTPS only
    ssl_certificate /path/to/cert;
    location / {
        # Require auth_request to validate the API key before proxying
        auth_request /validate-key;
        proxy_pass http://openclaw;
    }
    location /validate-key {
        internal;
        proxy_pass http://auth-service;
    }
}
Or better yet, use a VPN. Only VPN users can see the gateway at all. This is the gold standard—the service isn't exposed to the internet or even to the local network; it's visible only to authenticated VPN members.
Why these three approaches work: SSH tunnel adds credential requirements (they must have your SSH key). Reverse proxy with auth adds API key validation (they must know the secret). VPN adds network-layer authentication (they must be on the network). Each approach says: "This service exists, but you need permission to use it."
Verify the fix:
# Check it's only on localhost
ss -tlnp | grep openclaw
# Should show: 127.0.0.1:8000, not 0.0.0.0:8000
# Try reaching from a different machine
curl http://other-machine-ip:8000
# Should timeout or refuse connection
Cost: A reverse proxy adds ~1-2ms latency. SSH tunneling adds similar overhead. It's negligible compared to model inference time (which takes seconds). For remote access, you now require an extra SSH hop, but that's a feature, not a bug—it adds a layer of accountability. You know exactly who connected because they used their SSH key.
3. Installing ClawHub Skills Without Reading the Code
The mistake: You see a cool skill on ClawHub—"automatic image generation," "database connector," "email integrator"—and you install it:
openclaw skill install clawtech/image-gen
It works immediately. You assume ClawHub has vetting. You assume the code is safe. It has 200 stars, so it must be legitimate, right?
Why it's dangerous: ClawHub works like npm or PyPI—anyone can publish. There's no mandatory security review. Some publishers are trustworthy; others are compromised; some are outright malicious. The stars mean nothing. Code can look innocent while doing secretly destructive things.
Here's the hidden layer most people miss: code can be functionally perfect while being maliciously destructive. A skill that formats markdown can do both perfectly: format your markdown AND exfiltrate your environment variables. The malicious part runs invisibly. You test it, it works, you deploy it, and six months later you discover your API keys leaked. But you used it successfully thousands of times! How could it be malicious?
This is what makes supply chain attacks so effective: they're passive. The attacker doesn't make your system crash. They don't make it fail. They just slowly, quietly steal whatever they want.
Real examples (all made it to production in real organizations):
- A "markdown formatter" that exfiltrates environment variables on execution. Your test runs: markdown formatted perfectly. What you didn't see: your secrets were sent to an attacker's server.
- A "database helper" that logged all SQL queries to an attacker's service. Every query worked correctly. But they now have copies of every query including secrets and user data.
- An "API client" that steals API keys while proxying requests normally. It proxied perfectly. It also copied every key it found to a hidden location.
- A "scheduler" that created a reverse shell. Your jobs scheduled normally. But an attacker could execute arbitrary commands in your environment whenever they wanted.
The malicious code is often beautifully written, hidden among legitimate logic. You won't spot it unless you know what to look for. A competent attacker will make their malicious code undetectable to casual inspection.
How many people audit third-party code? Almost nobody. This is why supply chain attacks are one of the most profitable attacks in existence.
The fix: Before installing ANY skill:
1. Read the code (it's available):
openclaw skill inspect clawtech/image-gen --source
Yes, actually read it. All of it. This takes 10-20 minutes for a typical skill. It's worth it.
2. Check for red flags:
- Does it use subprocess or exec with untrusted input? (Run arbitrary commands)
- Does it read files outside its scope? (Accessing other data)
- Does it make network calls to unexpected domains? (Exfiltrating data)
- Does it persist data to hidden locations? (Leaving backdoors)
- Does it use environment variables suspiciously? (Stealing secrets)
- Does it use obfuscated code? (Hiding malicious logic)
- Does it monkey-patch built-in functions? (Intercepting calls)
- Does it have comments in other languages? (Red flag for laundered code)
3. Verify the publisher:
openclaw skill info clawtech/image-gen
# Look at: maintainer profile, update frequency, open issues, community discussions
Check the publisher's other skills. Are they maintained? Do they have thoughtful issue responses? Or are they abandoned and forked repeatedly?
4. Run in an isolated environment first:
# Test with a sandboxed instance, not production
openclaw-sandbox install clawtech/image-gen
# Monitor what it actually does
strace -e open,openat,execve openclaw skill test clawtech/image-gen
Run it and watch the system calls. Does it do anything unexpected?
5. Audit permissions:
# Restrict what the skill can do
skills:
  - name: image-gen
    sandbox: strict
    permissions:
      filesystem: ["./images/*"]   # Only this directory, not entire /
      network: ["api.openai.com"]  # Only this domain, not all of internet
      env: []                      # No environment variable access
      processes: false             # Can't spawn subprocesses
Even if the skill is malicious, it can only do what you explicitly permit.
6. Check for updates to untrusted packages:
openclaw skill list --show-publishers
# See who maintains what you've installed
# If the maintainer changes, be suspicious
If a popular skill gets purchased by a company you don't trust, that's a signal to fork it or find an alternative.
Real-world procedure: When teams at major companies evaluate third-party code, they:
- Read the code completely
- Run it in an isolated sandbox with network/file logging
- Check the publisher's history
- Look for signs of maintenance
- Ask the community if anyone else uses it
- Start with minimal permissions, expand only if needed
- Monitor execution
You should do the same.
Cost: 20-30 minutes of careful reading and testing per skill. Sounds like a lot. But it's vastly faster than recovering from a compromised system. And you only do it once per skill.
4. Running OpenClaw on Your Personal Workstation
The mistake: Your main development machine, the one with your SSH keys, your email logged in, your browser history, your Slack session—you install OpenClaw there because it's convenient:
# On your MacBook
brew install ollama
openclaw start
Now you're running an LLM inference engine with the same filesystem access as your user account.
Why it's exponentially risky: Here's the hidden layer: your workstation is your entire identity. Your SSH keys that access your servers. Your AWS credentials in ~/.aws. Your API keys in your shell history. Your GitHub token. Your Slack token. Your email logged in and synced. Your browser with remembered passwords. Your 2FA backup codes. Your financial information. Everything.
When you run OpenClaw on your workstation, you've created a situation where a single vulnerability in the inference engine compromises all of your identities at once. The attacker doesn't need to hack your servers individually—they get the keys to all of them. They don't need to brute-force your cloud provider—they get your credentials. They don't pivot slowly through infrastructure; they vault directly to the center of your digital life.
And here's the worst part: it happens invisibly. You keep working. Your workstation runs fine. Your SSH connections work. Your emails send. Meanwhile, an attacker has cloned your SSH keys and is accessing your infrastructure. They're downloading your private repos. They're reading your email. They're taking their time.
A real scenario: An attacker publishes a model with embedded code that executes during loading. It reads ~/.ssh/id_rsa and exfiltrates it. Now they can SSH to your servers without suspicion—your key is authorized. They read your shell history and find AWS credentials. They now have cloud provider access with your identity. They read ~/.aws/credentials and get even more API keys. They check your browser cache for other tokens.
By the time you even notice something's wrong, they've already established persistent access across your entire digital ecosystem. You're not fighting a compromised service; you're fighting someone who has become you.
The fix: Run OpenClaw on a dedicated machine (or VM):
- An old laptop you're not actively using
- A cloud instance (t3.medium on AWS is $30/month)
- A Docker container that's firewalled from your workstation
- A separate partition on your machine with a different OS user
# Better: run in a dedicated container
docker run --gpus all -p 8000:8000 openclaw:latest
# Even better: on a different machine entirely
ssh openclaw-server "openclaw inference --prompt 'test'"
Your personal machine becomes a thin client. OpenClaw lives elsewhere, behind a network boundary.
Cost: A used $100 laptop or $30/month cloud instance. Nothing compared to the risk.
5. Using Weak or Default API Keys
The mistake: You generate an API key for OpenClaw to authenticate requests:
openclaw auth generate-key
# Output: sk-openclaw-abc123
That's weak. Or you use a default key that's hardcoded in documentation:
# From an old tutorial
openclaw start --key sk-demo-1234567890
Why it matters: Here's the hidden layer: your API key is your only line of defense between your inference engine and the internet. If that key is weak or guessable, the defense collapses instantly. It's like using "password123" for your bank account—it looks fine until someone guesses it.
Weak keys can be brute-forced. Default keys are public knowledge (they're in documentation!). And here's what people miss: even one compromised key creates a permanent vulnerability until you rotate it. An attacker with your key has the same access as a legitimate user. They can:
- Query your models repeatedly to extract training data or steal intellectual property
- Poison your model cache with malicious inputs that affect legitimate users
- Exhaust your compute budget in hours (costing thousands)
- Extract cached results from previous queries (your proprietary inference outputs)
- Trigger denial-of-service conditions and make the service unavailable
- Potentially execute code if your setup allows skill execution
A compromised key isn't just a security incident. It's an indefinite vulnerability until rotation.
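As a quick sanity gate before accepting a key into your config, a small check like the following catches the worst offenders. This is a sketch only: the length threshold and placeholder patterns are illustrative assumptions, and real entropy checking belongs in your key generator.

```shell
# Rough key-strength gate: rejects short keys and obvious placeholder values.
check_key_strength() {
  key="$1"
  body="${key#sk-*-}"   # strip an "sk-vendor-" style prefix, if present
  if [ "${#body}" -lt 32 ]; then
    echo "WEAK: fewer than 32 characters of key material" >&2
    return 1
  fi
  case "$key" in
    *demo*|*test*|*example*|*1234567890*)
      echo "WEAK: looks like a placeholder or documentation key" >&2
      return 1 ;;
  esac
  echo "OK"
}
```

Wiring this into a deploy script means a demo key from a tutorial can never quietly make it into production.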
The fix: Use strong, random keys:
# Generate a strong key (32+ random bytes)
openssl rand -hex 32
# Output: a7c9e2f1b3d5a8c2e9f7b4d6a8c0e2f4a7c9e2f1b3d5a8c2e9f7b4d6a8c0e2f4
# Or use OpenClaw's key generator
openclaw auth generate-key --strength high --format base64
Store it securely:
# In a .env file (that's .gitignored)
OPENCLAW_API_KEY=a7c9e2f1b3d5a8c2e9f7b4d6a8c0e2f4a7c9e2f1b3d5a8c2e9f7b4d6a8c0e2f4
# Or in your secret manager
aws secretsmanager create-secret --name openclaw-api-key \
--secret-string "a7c9e2f1b3d5h8k2m9p7r4s6t8v0w2x4"Rotate keys regularly:
# Generate new key
NEW_KEY=$(openssl rand -hex 32)
# Update your deployment
openclaw auth update-key --key $NEW_KEY
# Remove old key
openclaw auth revoke-key --key $OLD_KEY
Cost: Zero. Strong key generation is free.
6. Not Rotating API Keys
The mistake: You generate a key, it works, and you never touch it again. It's been in use for 6 months. Maybe longer. You've shared it with team members (we'll get to that). You've used it in scripts, in notebooks, in Slack messages. It just... sits there.
Why it's a problem: Here's what most people don't realize: the longer a secret exists, the more copies of it exist. Every place you've used that key? It leaves traces. It's in shell history. It's in Git logs. It's in Slack archives. It's in backup tapes. It's in temporary files. It's in screenshots. It's in memory dumps. You think of it as one secret, but there are actually dozens of copies scattered across systems you don't control.
Even if the original location is secure, any one of those copies being discovered compromises you. And you won't know it happened until it's too late. An attacker finds your key in a six-month-old Slack message. They test it. It works. They now have months of access to claim was "authorized" because the key hasn't changed.
The math is brutal:
- Someone saw it over your shoulder
- It's in a Git history somewhere (even deleted, it's still in commits)
- A team member's laptop was stolen and the key was on disk
- A script was copied to a shared repo with the key visible
- A backup was made with credentials embedded (and you don't control the backup provider)
- A monitoring alert captured the key in a request
- An error log printed it out
If a key is old, assume it's potentially compromised somewhere you can't see. You just don't know when or how.
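To get a feel for how many copies actually exist, a hypothetical sweep over files you suspect (history files, notebooks, backup dumps) is cheap to run. The helper below is an illustration, not an OpenClaw tool:

```shell
# Count how many of the named files contain a fragment of an API key.
# You supply the fragment and the file list.
count_key_copies() {
  fragment="$1"; shift
  grep -l "$fragment" "$@" 2>/dev/null | wc -l
}
```

Something like `count_key_copies sk-openclaw ~/.bash_history ~/notes/*.md` gives a lower bound on the copies you can see—the copies you can't see are the reason to rotate anyway.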
The fix: Rotate quarterly (or more frequently):
# Month 1: Create a new key
NEW_KEY=$(openssl rand -hex 32)
openclaw auth add-key --key $NEW_KEY
# Update your applications to use the new key
# Run both keys in parallel for a week
# Month 2: Revoke the old key
openclaw auth revoke-key --key $OLD_KEY
# Log all key usage to detect problems
openclaw auth list-keys --with-audit-log
For long-lived deployments, automate this:
# Cron job to rotate monthly
0 0 1 * * /opt/scripts/rotate-openclaw-keys.sh
Cost: A few minutes per quarter to rotate and update configurations.
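A minimal sketch of what such a rotation routine might contain. The state-file location is an illustrative assumption, and the `openclaw auth add-key`/`revoke-key` subcommands are the ones used above; the key design point is ordering—register the new key before revoking the old one, so there is never a window with no valid key.

```shell
# Sketch of a key-rotation routine. State file path is an assumption.
rotate_key() {
  state="$1"                                   # file holding the currently active key
  old_key=$(cat "$state" 2>/dev/null || true)
  new_key=$(openssl rand -hex 32)

  openclaw auth add-key --key "$new_key"       # new key goes live first
  printf '%s\n' "$new_key" > "$state"
  chmod 600 "$state"

  if [ -n "$old_key" ]; then
    openclaw auth revoke-key --key "$old_key"  # then retire the old one
  fi
}
```

Call it from the monthly cron job with the path to your key state file.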
7. Sharing API Keys Between Team Members
The mistake: Your team needs to access OpenClaw. So you generate one key and distribute it:
# openclaw-key.txt, shared via email or Slack
sk-openclaw-abc123
Three people use it. Then five. Then your whole team.
Why it's a problem: Shared secrets destroy accountability and create cascading risk. Here's what happens: Alice has the key. Bob has the key. Charlie has the key. A month later, you discover the key was compromised. Who did it? You don't know. Could be any of them. Could be someone they shared it with. Could be a laptop that was stolen. You can't revoke access from just Alice without breaking Bob and Charlie. So what do you do? You rotate the key globally, which means updating everywhere it's used, which means coordinating with the entire team.
But here's the hidden layer: shared keys make incident investigation impossible. If there's suspicious activity, you can't trace it to a person. You can't say "Alice's key made unusual requests on Tuesday." You can only say "the shared key made unusual requests." This means:
- You can't fire someone for misuse (no proof it was them)
- You can't audit who did what when
- You can't implement just-in-time access revocation
- You can't detect insider threats with any precision
- You can't require multi-factor authentication per user
And the math is obvious: the more people who have a secret, the higher the chance it leaks. Each person is another laptop that could be stolen. Another email account that could be compromised. Another phone that could be lost. Another chance for someone to screenshot it or paste it in a chat. With shared keys, a single person's negligence affects everyone.
The fix: Use individual API keys per person:
# Create unique keys
openclaw auth generate-key --user alice
openclaw auth generate-key --user bob
openclaw auth generate-key --user charlie
# Each person uses their own
export OPENCLAW_API_KEY=$MY_KEY
openclaw inference --prompt "hello"Now you can:
- Audit: "This request came from alice's key at 14:32"
- Revoke: "Revoke bob's key, but keep alice and charlie's active"
- Trace: "charlie's key has been quiet; alice's key has 1000 requests today"
In OpenClaw, set this up:
auth:
  strategy: per-user
  users:
    - name: alice
      key: <key1>
      permissions:
        models: ["*"]
      quota: 10000  # requests per day
    - name: bob
      key: <key2>
      permissions:
        models: ["chat", "code"]
      quota: 5000
Cost: A few lines of configuration. The audit trail is invaluable.
8. Using the Same Models for Development and Production
The mistake: You've fine-tuned a model, and it works well. You deploy it to production. You also use it for development and testing locally.
Same model, two environments. Simple. Efficient.
Why it's a problem: Models are stateful in unexpected ways. They cache. They optimize for patterns. They have temperature and top-p settings that differ between contexts. If you're training on production traffic or testing against production models, you introduce data leakage and statistical contamination.
Also, if you're modifying the model (quantizing, pruning, etc.) for development, you might break production without realizing it.
The fix: Use distinct models:
environments:
  development:
    model: neural-chat:13b-q4-dev
    quantization: q4_K_M
    context_length: 8192
    temperature: 0.8  # More creative for testing
  production:
    model: neural-chat:13b-q4-prod
    quantization: q4_K_M
    context_length: 4096
    temperature: 0.3  # More deterministic
Load different models in different environments:
# Development
export OPENCLAW_ENV=development
openclaw start # Loads dev model
# Production
export OPENCLAW_ENV=production
openclaw start # Loads prod model
This costs VRAM if both are loaded simultaneously (they usually aren't), and adds deployment complexity (minimal). The benefit: you never accidentally break production by testing something weird in development.
Cost: A second model copy, clearer configuration. Small price for isolation.
9. Running on Your Personal Workstation Without Isolation
The mistake: This is a follow-up to #4, but it's worth its own point. You're running OpenClaw on your main machine, and you're not using containers or VMs. It shares your filesystem, your process namespace, your network stack.
Why it matters: Your process isolation is terrible. A malicious model or compromised skill can:
- Read your other processes' memory (SSH key agent, browser, email client)
- Intercept your network traffic
- Create new processes that run as your user
- Access your X11 display (on Linux) and screenshot your desktop
- Modify files in your home directory
- Access any file your user can read
The fix: Use containers:
# Run in a container with restricted filesystem
docker run \
--gpus all \
-v /var/lib/ollama:/var/lib/ollama \
-p 127.0.0.1:8000:8000 \
--cap-drop=ALL \
--cap-add=CHOWN \
--cap-add=SETGID \
--cap-add=SETUID \
--read-only \
openclaw:latest
Or use a VM:
# Run on a separate VM with its own OS
vm_name=openclaw-sandbox
qemu-system-x86_64 -m 8G -enable-kvm \
-hda ~/.local/share/qemu/$vm_name.qcow2
Or use systemd sandboxing:
[Service]
PrivateTmp=yes
PrivateDevices=yes
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
ReadWritePaths=/var/lib/ollama
Cost: Minimal overhead (containers and VMs are fast), with clear security boundaries in return.
10. Not Using Environment-Variable Secrets Properly
The mistake: You have API keys, database credentials, and other secrets. You store them in a .env file:
# .env
OPENCLAW_API_KEY=sk-abc123
DATABASE_URL=postgres://user:password@db:5432/openclaw
You commit this file (or sometimes you don't, but it ends up in backups).
Or you hardcode them in your OpenClaw config:
# config.yaml
database:
  url: postgres://user:password@db:5432/openclaw
Why it's dangerous: Plain-text secrets in files are the opposite of secret. They show up in:
- Git history (even if deleted)
- Backups and snapshots
- Log files (when you debug)
- Terminal history (if you echo the variable)
- Screenshots and recordings
If anyone accesses your repo, your machine, or your backups, they have your secrets.
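A crude sweep for plaintext secrets in a working tree catches the most common leaks before they're committed. The regexes below are illustrative starting points, not a complete ruleset; purpose-built scanners like gitleaks or trufflehog go much deeper.

```shell
# Crude plaintext-secret sweep over a directory tree.
scan_for_secrets() {
  dir="$1"
  grep -rnE \
    -e 'sk-[A-Za-z0-9-]{8,}' \
    -e 'postgres://[^[:space:]]+:[^[:space:]]+@' \
    -e '(API_KEY|SECRET|PASSWORD)[[:space:]]*=' \
    "$dir" 2>/dev/null
}
```

Anything it prints is a candidate for moving into a secret manager.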
The fix: Use a secret manager:
# AWS Secrets Manager
aws secretsmanager create-secret --name openclaw-api-key \
--secret-string "sk-abc123"
# Then in your code
SECRET=$(aws secretsmanager get-secret-value --secret-id openclaw-api-key)
Or use Vault:
vault kv put secret/openclaw-api-key value="sk-abc123"
vault kv get secret/openclaw-api-key
Load secrets at runtime:
#!/bin/bash
# Before starting OpenClaw
export OPENCLAW_API_KEY=$(aws secretsmanager get-secret-value \
--secret-id openclaw-api-key | jq -r .SecretString)
openclaw start
Never store secrets in .env files that get checked in. Never hardcode them in configs.
Cost: Learning a secret manager (1-2 hours). Then it's automatic.
11. Not Requiring Authentication on Your Gateway
The mistake: You've exposed your OpenClaw Gateway to other machines (even if only on your internal network). You figure authentication is overkill—it's behind a firewall, right?
No authentication. Anyone on the network can make requests.
Why it's a problem: "Behind a firewall" is not security. Firewalls fail. Employees use shared networks. Cloud infrastructure is shared. If your Gateway is accessible without authentication, anyone on that network can use it:
- Run inference jobs (consuming your compute budget)
- Extract cached model outputs
- Trigger DoS conditions
- Potentially exploit Gateway vulnerabilities
The fix: Enable authentication:
gateway:
  auth:
    enabled: true
    strategy: api-key
    keys:
      - key: sk-openclaw-abc123
        name: "team-key"
        permissions: ["inference", "status"]
Every request must include the key:
curl -H "Authorization: Bearer sk-openclaw-abc123" \
http://openclaw-server:8000/v1/inferenceOr use OAuth2:
gateway:
  auth:
    enabled: true
    strategy: oauth2
    provider: https://auth.example.com
Cost: A few lines of configuration. Authentication is built into OpenClaw.
12. Neglecting Audit Logs
The mistake: OpenClaw is running, making inferences, serving requests. But you're not logging who did what, when, and from where. You figure if something goes wrong, you'll notice it immediately. You won't. Attackers are patient.
No audit trail means no accountability. If something goes wrong, you can't trace it. You don't even know when it started.
Why it matters: An attacker using your system invisibly is entirely possible if you're not logging. They could be mining cryptocurrency on your GPU for weeks before you notice. They could be exfiltrating data slowly, a few megabytes per day, below the noise of normal traffic. Or worse—an insider who's supposed to have access could be doing unauthorized things. Or a compromised key from somewhere else could be accessing your system. Without logs, you won't know until it's too late. By then, the damage is done and you have no evidence.
One company didn't log OpenClaw requests. When their GPU utilization started climbing, they assumed it was increased user demand. Six weeks later, someone noticed the cost anomaly. Investigation revealed an attacker had been running inference jobs the whole time, costing $50,000+ in cloud fees. No audit logs meant they couldn't trace what was accessed, so they had to assume everything was compromised and start over.
How to verify you have logging:
# Check if audit logging is enabled
grep -i "audit" ~/.openclaw/config.yaml
# Should return something like: audit: enabled: true
# Check if logs exist
ls -la /var/log/openclaw-audit.log
# Should exist and have recent timestamps
# Check if logs are being written
tail -f /var/log/openclaw-audit.log
# Should show entries as requests come in

The fix: Enable and monitor audit logs:
logging:
  level: info
  audit:
    enabled: true
    destinations:
      - file: /var/log/openclaw-audit.log
      - syslog: localhost:514
    fields:
      - timestamp
      - user_id
      - api_key_hash
      - action
      - model
      - status
      - duration_ms
      - request_size
      - response_size
      - ip_address
      - user_agent

Monitor for anomalies actively; don't just let logs sit there:
# Check for high-volume requests per key (potential attack)
grep "inference" /var/log/openclaw-audit.log | \
    awk -F'|' '{print $3}' | sort | uniq -c | sort -rn | head
# $3 = api_key_hash, given the field order in the config above
# If you see sudden spikes, investigate
# Look for errors (might indicate probe attempts)
grep "status.*error" /var/log/openclaw-audit.log | tail -20
# Alert on unusual activity
tail -f /var/log/openclaw-audit.log | grep -E "unusual_key|rate_limit|error"
# Check for requests from unexpected IP addresses
awk -F'|' '{print $(NF-2)}' /var/log/openclaw-audit.log | sort | uniq -c | sort -rn
# If you see IPs you don't recognize, that's a red flagSet up log retention and backup (critical for forensics):
# Force a rotation now; the daily schedule and 90-day retention
# belong in /etc/logrotate.d/openclaw itself
logrotate -f /etc/logrotate.d/openclaw
# Back up to immutable storage (so an attacker can't delete it)
aws s3 sync /var/log s3://audit-logs-backup \
    --storage-class GLACIER # Long-term storage, cheaper
# Make the backup immutable on S3
# (requires a bucket created with object lock enabled)
aws s3api put-object-lock-configuration \
    --bucket audit-logs-backup \
    --object-lock-configuration 'ObjectLockEnabled=Enabled,Rule={DefaultRetention={Mode=GOVERNANCE,Days=90}}'

Set up real-time alerting:
# Simple cron job that checks for suspicious activity
*/5 * * * * /opt/scripts/check-openclaw-logs.sh
# Script content:
#!/bin/bash
# Only look at recent entries, or the count grows forever
ERRORS=$(tail -n 1000 /var/log/openclaw-audit.log | grep -c "status.*error")
if [ $ERRORS -gt 50 ]; then
echo "High error rate detected in OpenClaw logs" | \
mail -s "Alert: OpenClaw errors" ops@example.com
fi
REQUESTS=$(grep "$(date +%H:%M)" /var/log/openclaw-audit.log | wc -l)
BASELINE=100 # Your normal requests per minute
if [ $REQUESTS -gt $((BASELINE * 3)) ]; then
echo "Unusual request volume: $REQUESTS requests" | \
mail -s "Alert: OpenClaw traffic spike" ops@example.com
fi

Cost: Minimal disk space (logs are text and compress to roughly 1% of their original size once rotated), some monitoring setup. The real cost is paying attention. Logs are only valuable if you read them.
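The anomaly checks above can be rehearsed offline before pointing them at production logs. Here's a self-contained sketch; the sample entries and the threshold are made up, and the pipe-delimited field order follows the fields list in the audit config:

```shell
# Sample log using the assumed field order
# (timestamp|user_id|api_key_hash|action|model|status|...).
cat > /tmp/sample-audit.log <<'EOF'
2025-01-01T10:00:00|alice|k1|inference|llama3|ok|120|512|2048|10.0.0.5|curl
2025-01-01T10:00:01|alice|k1|inference|llama3|ok|130|512|2048|10.0.0.5|curl
2025-01-01T10:00:02|bob|k2|inference|llama3|ok|110|512|2048|10.0.0.9|curl
EOF

# Flag any key hash with more than THRESHOLD requests.
THRESHOLD=1
awk -F'|' -v t="$THRESHOLD" \
    '{count[$3]++} END {for (k in count) if (count[k] > t) print k, count[k]}' \
    /tmp/sample-audit.log
# → k1 2
```

In production you'd replace the sample file with the real log path and wire the output into the alerting cron job above.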
13. Sharing Models or Model Artifacts in Unsecured Ways
The mistake: You've fine-tuned a model that's working well. You want to share it with a colleague or back it up. So you:
- Email it to them (it's 5 GB, so this takes forever)
- scp it over SSH without verification
- Upload it to a shared cloud folder
- Copy it to a USB drive
Why it's risky: Models can be poisoned. If your model file is tampered with in transit, you end up deploying corrupted or backdoored weights. Models can also embed training-data artifacts or other sensitive patterns you don't want leaking.
The fix: Secure model transfer:
# Generate a checksum of your model
sha256sum model.gguf > model.gguf.sha256
# Transfer with verification
scp model.gguf user@remote:~
scp model.gguf.sha256 user@remote:~
# Verify on the remote side
sha256sum -c model.gguf.sha256
# Output: model.gguf: OK
# Or use a secure transfer with encryption
rsync -avz --checksum -e ssh model.gguf user@remote:~

For backups, use encrypted storage:
# Encrypt before uploading
gpg --symmetric --cipher-algo AES256 model.gguf
# Outputs: model.gguf.gpg
# Upload
aws s3 cp model.gguf.gpg s3://model-backup/
# Restore with decryption
aws s3 cp s3://model-backup/model.gguf.gpg .
gpg --decrypt model.gguf.gpg > model.gguf

Cost: A few extra commands, GPG setup (one-time).
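If it isn't obvious why the checksum step earns its keep, here's a tiny self-contained demonstration using throwaway files: a file that changes between hashing and verification fails the check, and that failure is your cue not to deploy.

```shell
# Write a stand-in "model", record its checksum, then tamper with it.
printf 'original weights' > /tmp/model.bin
sha256sum /tmp/model.bin > /tmp/model.bin.sha256
printf 'tampered weights' > /tmp/model.bin   # simulate in-transit corruption

# Verification catches the mismatch.
if sha256sum -c --status /tmp/model.bin.sha256; then
    echo "checksum OK"
else
    echo "checksum MISMATCH: do not deploy"
fi
# → checksum MISMATCH: do not deploy
```

The same pattern scales to a 5 GB .gguf file unchanged; only the hashing time grows.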
14. Running Skills Without Sandboxing
The mistake: Skills are powerful—they can make external API calls, write files, execute scripts. You're running them directly without any isolation:
openclaw skill execute my-skill --input "user-provided"

If the skill is malicious or has vulnerabilities, it runs with your full permissions.
Why it's a problem: A skill with a code injection vulnerability, or a skill that's been compromised, can:
- Execute arbitrary code
- Read any file your OpenClaw process can read
- Make network requests to exfiltrate data
- Write files anywhere
- Create persistent backdoors
The fix: Run skills in sandboxed environments:
skills:
  - name: my-skill
    sandbox: enabled
    permissions:
      filesystem: ["/var/lib/openclaw/scratch"]  # Only this directory
      network: ["api.example.com"]               # Only this domain
      env: []                                    # No environment variables
      processes: false                           # Can't spawn subprocesses

Or use Docker for more isolation:
docker run \
  --read-only \
  --cap-drop=ALL \
  --memory=512m \
  --cpus=1 \
  -e SKILL_INPUT="user-provided" \
  skill-image:latest

Cost: A few milliseconds of overhead per skill execution, in exchange for clear boundaries.
15. Not Monitoring for Resource Exhaustion
The mistake: OpenClaw is running, handling requests. But you're not monitoring CPU, memory, GPU, or disk usage. You assume it'll stay stable forever.
Then one day your GPU is at 100% utilization, your queue is backed up, and inference times are hitting 30+ seconds.
Why it happens: Resource exhaustion usually isn't an attack—it's just normal load that exceeds your capacity. But you won't know until your users complain.
The fix: Monitor everything:
# CPU, Memory, GPU
nvidia-smi --query-gpu=index,utilization.gpu,utilization.memory \
--format=csv --loop-ms=1000
# Disk space
df -h /var/lib/ollama
# Network I/O
nethogs -t
# OpenClaw-specific metrics
curl http://localhost:8000/metrics | grep -E "inference_duration|queue_length"

Set up alerts:
monitoring:
  alerts:
    - condition: "gpu_utilization > 95%"
      action: "notify:slack"
    - condition: "queue_length > 10"
      action: "notify:pagerduty"
    - condition: "disk_free < 1GB"
      action: "notify:email"

Autoscale if possible:
# If running on Kubernetes
kubectl autoscale deployment openclaw --min=1 --max=5 \
--cpu-percent=80

Cost: Some monitoring setup, potentially more compute if you autoscale.
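For what it's worth, the disk_free alert condition above can also be expressed as a plain shell check, useful on hosts that have no monitoring stack yet. The mount point and threshold are placeholders:

```shell
# Print "ALERT" if free space on the given mount drops below N GiB, else "ok".
check_disk_free() {
    local mount="$1" min_gb="$2" free_kb
    # df --output=avail reports free 1K blocks (GNU coreutils)
    free_kb=$(df --output=avail -k "$mount" | tail -1)
    if [ "$free_kb" -lt $((min_gb * 1024 * 1024)) ]; then
        echo "ALERT"
    else
        echo "ok"
    fi
}

check_disk_free / 1   # most systems have more than 1 GiB free
```

Drop it into the same five-minute cron job as the log checks from mistake #12 and pipe "ALERT" results to mail or Slack.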
The Meta-Pattern: Security Debt
Here's something worth internalizing: each of these mistakes isn't just a one-time incident. It's technical debt. When you run as root, you're not just accepting risk—you're creating future work for yourself.
Think about what fixing each mistake costs:
- Fixing #1 (root access) requires rebuilding the system and migrating safely
- Fixing #2 (0.0.0.0 binding) requires network reconfiguration and testing
- Fixing #3 (unvetted skills) requires auditing and potentially removing compromised code
- Fixing #4 (workstation execution) requires finding alternate infrastructure
- Fixing #5-7 (weak keys, rotation, sharing) requires secret rotation and redeployment across all systems
The cumulative cost is enormous. Meanwhile, preventing each mistake costs minutes upfront.
This is true across all 15 items. Prevention is free (or nearly so). Remediation is expensive. Yet teams consistently choose expensive remediation because they think prevention is "overkill" or "premature optimization." It's not. It's basic engineering hygiene.
The worst part? Once you've created security debt, it compounds. A system running as root with unvetted skills and weak keys is exponentially more dangerous than a system with one of those problems. The problems interact in terrible ways. Your blast radius explodes.
The Personal Cost: Burnout and Regret
I mention this because it's human: the emotional cost of a breach is real. I've been on-call at 3 AM responding to a security incident. I've had to tell customers their data was stolen. I've watched colleagues work 48-hour shifts rebuilding infrastructure.
That's not fun. It's not a learning experience in the moment. It's pure stress and regret.
If you had spent 30 minutes earlier implementing five of these practices, the 48-hour crisis becomes "we mitigated it because we had monitoring." That's the difference between burnout and sleep.
So yes, do these things for compliance. Do them for your users. But also do them for yourself—for your sanity, your sleep schedule, your ability to take vacation without worrying.
Conclusion: The Pattern That Matters
These 15 mistakes aren't disconnected failures. They follow a pattern. Each one comes from the same root cause: treating security as a feature instead of a foundation. You think, "We'll add auth later." You think, "The firewall protects us." You think, "Nobody would target us." Then something breaks.
Here's what I want you to take away: assume compromise, design for isolation, and maintain visibility. That's not paranoia. That's engineering.
The 15-point checklist:
- Run services as unprivileged users (not root)
- Bind to localhost, not 0.0.0.0
- Audit third-party code before installation
- Don't run OpenClaw on your personal workstation
- Generate strong, random API keys
- Rotate keys quarterly at minimum
- Use individual keys per person, never shared secrets
- Separate development and production models
- Use containers or VMs for process isolation
- Store secrets in a secrets manager, never in .env files
- Require authentication on every exposed endpoint
- Enable and monitor audit logs actively
- Verify model integrity with checksums before deployment
- Run skills in sandboxed environments with minimal permissions
- Monitor resource usage continuously and alert on anomalies
Prevention costs minutes. Remediation costs weeks, reputation, and sometimes your job. You know which one you should choose.
Related Resources
For deeper dives into OpenClaw security and deployment best practices:
- OpenClaw Security Hardening Guide - Detailed step-by-step hardening for production deployments
- LLM Model Safety & Verification - How to audit models and prevent poisoning attacks
- Kubernetes Security for ML Workloads - Container and orchestration security patterns
- Secrets Management in Production - Comparing HashiCorp Vault, AWS Secrets Manager, and other options
- Audit Logging Best Practices - How to set up forensic-grade logging for compliance
- Supply Chain Security in AI - Vetting dependencies and third-party code at scale
OpenClaw is powerful. Treat it with respect, and it'll serve you well. Treat it casually, and one day you'll learn an expensive lesson.
Don't be that person.