How to Migrate OpenClaw to a New Server (2026) — Step-by-Step Guide
Complete guide to migrate OpenClaw to a new server or platform. Transfer memory, cron jobs, secrets, and skills to IronClaw, Nanobot, Pi, or Goose without losing data.
🔴 Update (Feb 16, 2026): The speculation is now confirmed. Peter Steinberger has joined OpenAI and OpenClaw will move to a foundation with OpenAI’s continued support. Both Meta and OpenAI had submitted billion-dollar bids. The good news: OpenClaw stays open-source. The foundation model means forks remain viable and self-hosted instances keep working. But this guide is now more relevant than ever — knowing your exit options before you need them is the whole point.
With the OpenClaw acquisition now confirmed, the question I raised here has gone from hypothetical to urgent: what happens if I need to migrate my agent setup to a different platform?
I run a multi-agent OpenClaw instance on a DigitalOcean droplet. It handles everything from scheduled trading automation to security monitoring to daily research tasks. It’s deeply integrated into my workflow — Telegram notifications, email processing, cron-scheduled jobs, GPG-encrypted secrets. The thought of rebuilding all of this from scratch on a different platform is… not appealing.
But here’s the thing: the ecosystem is fragmenting fast. IronClaw (NEAR AI’s Rust rewrite), Nanobot (HKU’s research framework), Goose (Block’s developer agent), and even lightweight alternatives like Pi (the minimal engine inside OpenClaw itself) are all viable options now.
So I did the responsible thing: I spent a weekend figuring out what’s actually portable and what breaks when you migrate. This post is the field guide I wish I’d had.
OpenClaw Architecture: What Are We Actually Moving?
Before we talk migration, let’s map what a typical OpenClaw setup looks like. Mine breaks down into these key components:
- Gateway — the orchestration layer (not portable)
- Agents — main + specialized sub-agents (architecture not portable)
- Memory — Markdown files (highly portable ✅)
- Tools — Bash/Python scripts (portable ✅)
- Channels — platform-specific integrations (not portable)
- Cron — scheduled task definitions (partially portable)
- Secrets — GPG-encrypted password store (portable ✅)
When you migrate, memory and tools are your lifeline. Everything else needs to be rebuilt.
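Before packing anything, take inventory. Here’s a quick Python sketch that buckets a workspace’s top-level entries by the portability rules above (the path and the classification sets mirror my setup; adjust them for yours):

```python
#!/usr/bin/env python3
"""Bucket a workspace's top-level entries by portability.
The path and classification rules follow the component list above;
adjust for your own layout."""
from pathlib import Path

NON_PORTABLE = {"sessions"}   # JSONL transcripts stay behind
PARTIAL = {"skills"}          # scripts move, SKILL.md metadata doesn't

def classify(workspace: Path) -> dict:
    """Return {'portable': [...], 'partial': [...], 'non_portable': [...]}."""
    report = {"portable": [], "partial": [], "non_portable": []}
    for entry in sorted(workspace.iterdir()):
        if entry.name in NON_PORTABLE:
            report["non_portable"].append(entry.name)
        elif entry.name in PARTIAL:
            report["partial"].append(entry.name)
        else:
            report["portable"].append(entry.name)
    return report

if __name__ == "__main__":
    ws = Path.home() / ".openclaw" / "workspace"
    if ws.exists():
        for bucket, names in classify(ws).items():
            print(f"{bucket}: {', '.join(names) or '(none)'}")
```

Run it before and after a migration; anything in the `non_portable` bucket is work you’ll be redoing by hand.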
Memory Architecture: The Portable Core
This is the most important insight I learned: Markdown-first memory beats vector databases for portability.
Here’s how OpenClaw’s memory system breaks down by portability:
What’s portable:
- SOUL.md — Identity, values, security principles, coding standards
- MEMORY.md — Long-term knowledge, lessons learned, curated facts
- Daily logs — Timestamped event records (memory/2026-02-13.md)
- Feedback files — Shared decisions, pattern recognition
- Security logs — Audit trails, threat analysis
What’s not portable:
- Session transcripts — Platform-specific JSONL format
- Skills metadata — OpenClaw-specific SKILL.md format (but the underlying scripts are portable)
Here’s what my memory directory looks like:
~/.openclaw/workspace/
├── SOUL.md # Identity (portable ✅)
├── MEMORY.md # Long-term knowledge (portable ✅)
├── memory/
│ ├── 2026-02-13.md # Daily logs (portable ✅)
│ ├── 2026-02-14.md
│ ├── feedback/ # Shared decisions (portable ✅)
│ │ ├── security-patterns.md
│ │ └── automation-patterns.md
│ └── security/ # Audit logs (portable ✅)
│ └── threat-analysis.md
├── scripts/ # Bash/Python tools (portable ✅)
│ ├── trading/
│ ├── security/
│ └── automation/
├── skills/ # Platform-specific (partially portable)
│ ├── gitleaks/
│ ├── harvester/
│ └── leonardo-gen/
└── sessions/ # JSONL transcripts (NOT portable ❌)
└── 2026-02-13.jsonl
The critical insight: Knowledge flows upward from ephemeral sessions into durable Markdown files. By the time a session ends, everything important has already been extracted. Session transcripts are just raw tape — useful for debugging, but not essential for continuity.
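That upward flow can even be automated at session end. A sketch of the idea, with the caveat that the JSONL schema and the `REMEMBER:` marker here are illustrative conventions, not OpenClaw’s actual transcript format:

```python
#!/usr/bin/env python3
"""Promote durable facts from an ephemeral transcript into the daily log.
Assumes a simple {"role": ..., "content": ...} JSONL schema and a
'REMEMBER:' marker convention; both are illustrative, not OpenClaw's format."""
import json
from pathlib import Path

MARKER = "REMEMBER:"

def extract_durable(transcript: Path) -> list:
    """Collect every MARKER-tagged fact from a JSONL session transcript."""
    facts = []
    for line in transcript.read_text().splitlines():
        if not line.strip():
            continue
        content = json.loads(line).get("content", "")
        if MARKER in content:
            facts.append(content.split(MARKER, 1)[1].strip())
    return facts

def append_to_daily(facts: list, daily_log: Path) -> None:
    """Append the extracted facts to the daily Markdown log as bullets."""
    with daily_log.open("a") as f:
        for fact in facts:
            f.write(f"- {fact}\n")
```

Whatever the mechanism, the principle holds: the transcript is disposable once the facts have been promoted.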
Example from my MEMORY.md:
## Trading System
### Automated Trading Strategy
- Deployed 2026-02-10
- EUR/USD grid with 10-pip spacing
- Risk per trade: 0.5% account balance
- Stop loss: 30 pips
- Take profit: 15 pips
- Performance: 2.3% weekly return (3 weeks data)
### Lessons Learned
- Don't run grid trades during NFP releases
- Always check spread before placing orders
- Log all API errors to security/ for review
This survives any migration. It’s plain text. No proprietary format, no vendor lock-in.
Migration Scenario 1: OpenClaw → Pi
Pi is the minimal agent engine that powers OpenClaw itself. It’s a single-binary coding agent with no bells and whistles. Moving to standalone Pi is like stripping a car down to the engine.
What Works Immediately
Memory migration:
# Export memory to Pi
mkdir -p ~/pi-migration
cp ~/.openclaw/workspace/SOUL.md ~/pi-migration/system-prompt.md
cp ~/.openclaw/workspace/MEMORY.md ~/pi-migration/
cp -r ~/.openclaw/workspace/memory/ ~/pi-migration/
# Pi reads system prompt from file
pi --system-prompt ~/pi-migration/system-prompt.md \
--context ~/pi-migration/MEMORY.md \
--context ~/pi-migration/memory/2026-02-14.md \
"Review today's memory and continue work"
Scripts migrate cleanly:
# All Bash/Python scripts work unchanged
cp -r ~/.openclaw/workspace/scripts/ ~/pi-migration/
Secrets work:
# pass (GPG password store) is platform-agnostic
pass show trading-platform/api-key # Works in Pi exactly as it did in OpenClaw
What Breaks
Multi-agent coordination:
- OpenClaw: Main agent delegates to specialized sub-agents
- Pi: Single agent only — everything runs sequentially
Workaround: Spawn multiple Pi instances with different system prompts:
#!/bin/bash
# security-pi.sh
pi --system-prompt security-prompt.md "Scan repos for secrets"
# trading-pi.sh
pi --system-prompt trading-prompt.md "Execute grid strategy"
Cron scheduling:
- OpenClaw: Built-in cron scheduler with Telegram announcements
- Pi: No scheduler — use systemd timers
Workaround:
# /etc/systemd/system/pi-security-scan.timer
[Unit]
Description=Daily security scan via Pi
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
# /etc/systemd/system/pi-security-scan.service
[Unit]
Description=Pi security scan
[Service]
Type=oneshot
User=youruser
ExecStart=/home/youruser/bin/security-pi.sh
Telegram integration:
- OpenClaw: Native channel support
- Pi: CLI-only
Workaround: Telegram Bot API wrapper:
# telegram-pi-bridge.py
import os
import subprocess
import telebot
bot = telebot.TeleBot(os.getenv('TELEGRAM_TOKEN'))
@bot.message_handler(func=lambda m: True)
def handle_message(message):
result = subprocess.run(
['pi', '--system-prompt', 'system.md', message.text],
capture_output=True, text=True
)
bot.reply_to(message, result.stdout)
bot.polling()
Session persistence:
- OpenClaw: Continuous conversation context
- Pi: Each invocation starts fresh
Workaround: Manually append to memory:
#!/bin/bash
# pi-with-memory.sh — persist each Pi session into the daily log
MEM=~/pi-migration/memory/daily.md
echo "## $(date +%Y-%m-%d) Session" >> "$MEM"
pi --system-prompt ~/pi-migration/system-prompt.md --context "$MEM" "$@" | tee -a "$MEM"
Migration Effort
- Core functionality: 1 day
- Full automation parity: 1 week
- Memory persistence: 95% — reads perfectly, writes require wrapper
Verdict: Pi is a solid escape hatch if you need minimal dependencies and are willing to rebuild automation glue.
Migration Scenario 2: OpenClaw → Nanobot
Nanobot from Hong Kong University is a 4,000-line Python framework that replicates the core OpenClaw loop: memory, tools, conversation management.
Setup
# Install Nanobot
git clone https://github.com/HKUDS/nanobot
cd nanobot
pip install -e .
# Import memory
cp ~/.openclaw/workspace/SOUL.md nanobot/config/system_prompt.md
cp ~/.openclaw/workspace/MEMORY.md nanobot/memory/knowledge_base.md
cp -r ~/.openclaw/workspace/memory/ nanobot/memory/
Memory Integration
Nanobot has its own memory persistence system, but it can import Markdown:
# Import OpenClaw memory into Nanobot
from nanobot.memory import MemoryStore
memory = MemoryStore()
# Load SOUL.md as system identity
with open('config/system_prompt.md') as f:
memory.set_identity(f.read())
# Load MEMORY.md as knowledge base
with open('memory/knowledge_base.md') as f:
for section in parse_markdown_sections(f.read()):
memory.add_knowledge(section['title'], section['content'])
# Load daily logs
import glob
for log_file in sorted(glob.glob('memory/2026-*.md')):
with open(log_file) as f:
memory.add_session_history(log_file, f.read())
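One caveat on the snippet above: `parse_markdown_sections` is my placeholder, not something Nanobot necessarily ships. A minimal version that splits on `##` headings would do:

```python
import re

def parse_markdown_sections(text: str) -> list:
    """Split Markdown into [{'title': ..., 'content': ...}] on '## ' headings.
    Sub-headings ('###' and deeper) stay inside their parent's content."""
    sections, current = [], None
    for line in text.splitlines():
        m = re.match(r"^##\s+(.+)", line)
        if m:
            if current:
                sections.append(current)
            current = {"title": m.group(1).strip(), "content": ""}
        elif current:
            current["content"] += line + "\n"
    if current:
        sections.append(current)
    return sections
```

Run against the MEMORY.md example earlier, this yields one "Trading System" section with the strategy and lessons nested inside its content.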
Tool Migration
OpenClaw skills are Bash/Python scripts with SKILL.md metadata. Nanobot uses Python decorators.
OpenClaw skill:
# ~/.openclaw/workspace/skills/gitleaks/gitleaks.sh
#!/bin/bash
gitleaks detect --source "$1" --report-format json
Nanobot tool:
# nanobot/tools/security.py
from nanobot.tools import tool
import subprocess
import json
@tool
def scan_repository_secrets(repo_path: str) -> dict:
"""Scan a git repository for leaked secrets using gitleaks."""
result = subprocess.run(
['gitleaks', 'detect', '--source', repo_path, '--report-format', 'json'],
capture_output=True, text=True
)
return json.loads(result.stdout) if result.stdout else {"findings": []}
What Breaks
Multi-agent: Nanobot is single-agent. No sub-agent delegation.
Channel integrations: No built-in Telegram/email. Same bridge approach as Pi.
Cron: Use systemd timers or Python APScheduler:
# nanobot-scheduler.py
from apscheduler.schedulers.blocking import BlockingScheduler
from nanobot import Agent
agent = Agent()
scheduler = BlockingScheduler()
@scheduler.scheduled_job('cron', hour=9)
def morning_security_scan():
agent.run("Scan all repositories for secrets and report findings")
@scheduler.scheduled_job('cron', hour='*/4')
def trading_check():
agent.run("Check portfolio positions and rebalance if needed")
scheduler.start()
Migration Effort
- Core functionality: 2-3 days
- Full parity: 2 weeks
- Memory persistence: 90% — requires format conversion but well-documented
Advantage: Python-native means deeper integration. Trading bots, security tools, API clients can become first-class Nanobot tools instead of subprocesses.
Verdict: Best option if you want to stay in the Python ecosystem and don’t mind academic-style documentation.
Migration Scenario 3: OpenClaw → IronClaw
IronClaw is NEAR AI’s Rust rewrite with WASM sandboxing. Still in development, but architecturally interesting.
Current Status (as of Feb 2026)
IronClaw is not production-ready yet, but the architecture is promising:
- Rust core — better performance, memory safety
- WASM sandboxing — tools run in isolated environments
- Security-first — reduces supply chain attack surface
What We Know
Memory: No public documentation yet on memory persistence format. Likely Markdown-compatible given NEAR AI’s focus on interoperability.
Tools: Must be compiled to WASM or wrapped in sandboxed runners. Your Bash/Python scripts won’t run directly — they need WASM bridges.
MCP support: IronClaw is designed around Model Context Protocol, which means MCP-compatible tools should work.
Migration Strategy (Hypothetical)
# 1. Export memory (same as before)
mkdir -p ~/ironclaw-migration
cp -r ~/.openclaw/workspace/SOUL.md ~/ironclaw-migration/
cp -r ~/.openclaw/workspace/MEMORY.md ~/ironclaw-migration/
cp -r ~/.openclaw/workspace/memory/ ~/ironclaw-migration/
# 2. Convert tools to WASM (significant work)
# Option A: Rewrite in Rust
# Option B: Use WASI runtime for Python/Bash
# Option C: Wait for IronClaw's tooling to mature
# 3. Configure IronClaw (hypothetical)
ironclaw init --memory ~/ironclaw-migration/MEMORY.md \
--identity ~/ironclaw-migration/SOUL.md
Migration Effort
- Experimental: 4+ weeks
- Production-ready: Wait 6-12 months for ecosystem maturity
Verdict: Interesting for future-proofing, but not viable for migration today. Keep an eye on it.
The Portability Checklist
Here’s my practical checklist for making any agent setup portable:
✅ Do This Now
1. Keep memory in Markdown
# Good: Plain text, version-controlled
SOUL.md
MEMORY.md
memory/2026-02-13.md
# Bad: Proprietary formats, binary stores
agent.db
embeddings.faiss
2. Version control everything
cd ~/.openclaw/workspace
git init
git add SOUL.md MEMORY.md memory/ scripts/
git commit -m "Initial memory export"
git remote add origin git@github.com:youruser/agent-memory.git
git push
3. Store secrets in pass
# Install pass (GPG-encrypted password store)
sudo apt install pass
gpg --gen-key
pass init your-gpg-key-id
# Store credentials
pass insert trading-platform/api-key
pass insert telegram/bot-token
# Export GPG key for migration
gpg --export-secret-keys your-gpg-key-id > ~/private.key
4. Document automation in code
# Instead of just configuring cron through UI:
# Document it in scripts/
cat > scripts/cron-jobs.sh << 'EOF'
#!/bin/bash
# Cron schedule documentation (for migration reference)
# 0 9 * * * - Daily security scan
# 0 */4 * * * - Trading position check
# 0 0 * * 0 - Weekly backup
# Example: Daily security scan
if [ "$(date +%H)" -eq 9 ]; then
gitleaks detect --source ~/projects/
fi
EOF
5. Write portable tools
# Good: Standard Bash/Python, minimal dependencies
#!/bin/bash
curl -s "https://api.example.com/data" | jq '.results'
# Bad: Platform-specific APIs
const openclaw = require('openclaw-sdk');
openclaw.memory.write('key', 'value'); // Only works in OpenClaw
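If your glue is Python rather than Bash, the same rule applies. Here’s a stdlib-only equivalent of the `curl | jq` one-liner, with the endpoint URL as a placeholder:

```python
#!/usr/bin/env python3
"""Stdlib-only equivalent of `curl -s ... | jq '.results'`: no SDK,
no platform API, so it runs anywhere Python does. The URL is a placeholder."""
import json
import urllib.request

def extract_results(payload: str) -> list:
    """Pull the 'results' array out of a raw JSON payload."""
    return json.loads(payload).get("results", [])

def fetch_results(url: str) -> list:
    """GET a JSON endpoint and return its 'results' field."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return extract_results(resp.read().decode())
```

`fetch_results("https://api.example.com/data")` mirrors the Bash version exactly, and survives any platform change.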
📦 Export Script
Save this as agent-export.sh:
#!/bin/bash
# agent-export.sh — Portable agent backup
set -e
BACKUP_DIR="agent-backup-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BACKUP_DIR"
echo "Exporting memory..."
cp SOUL.md "$BACKUP_DIR/"
cp MEMORY.md "$BACKUP_DIR/"
cp -r memory/ "$BACKUP_DIR/"
echo "Exporting scripts..."
cp -r scripts/ "$BACKUP_DIR/"
echo "Exporting secrets..."
mkdir -p "$BACKUP_DIR/secrets"
pass git pull # Sync password store
tar -czf "$BACKUP_DIR/secrets/password-store.tar.gz" -C ~ .password-store
echo "Exporting GPG keys..."
# Warning: this writes your private key unencrypted; protect the archive
gpg --export-secret-keys > "$BACKUP_DIR/secrets/gpg-private.key"
echo "Documenting cron jobs..."
crontab -l > "$BACKUP_DIR/crontab.txt" 2>/dev/null || echo "No crontab"
echo "Creating archive..."
tar -czf "${BACKUP_DIR}.tar.gz" "$BACKUP_DIR"
rm -rf "$BACKUP_DIR"
echo "✅ Backup complete: ${BACKUP_DIR}.tar.gz"
echo " To restore: tar -xzf ${BACKUP_DIR}.tar.gz"
Usage:
chmod +x agent-export.sh
./agent-export.sh
# Produces: agent-backup-20260215-143022.tar.gz
🧪 Test Your Migration
Periodically verify portability:
# Spin up a clean container with the backup mounted
docker run -it --rm -v "$PWD:/backup" -w /backup ubuntu:24.04 bash
# Inside container: restore from backup
apt update && apt install -y git pass curl jq python3
tar -xzf agent-backup-20260215-143022.tar.gz
cd agent-backup-20260215-143022
# Test memory loading
cat SOUL.md MEMORY.md memory/$(date +%Y-%m-%d).md
# Should contain your full agent context
# Test secrets
tar -xzf secrets/password-store.tar.gz -C ~/
gpg --import secrets/gpg-private.key
pass show trading-platform/api-key # Should decrypt successfully
# Test scripts
bash scripts/trading/check-positions.sh
python3 scripts/security/scan-repo.py ~/test-repo
If this works, your migration will work.
Proposed: Agent Portability Specification
What the ecosystem needs (and doesn’t have) is a standard agent portability format. Here’s what I’d propose:
agent.json Manifest
{
"spec_version": "1.0",
"agent": {
"name": "my-agent",
"created": "2026-01-30T00:00:00Z",
"identity_file": "SOUL.md",
"memory_files": [
"MEMORY.md",
"memory/*.md"
],
"preferred_model": "anthropic/claude-sonnet-4-5"
},
"capabilities": {
"tools": [
{
"name": "gitleaks",
"type": "executable",
"path": "scripts/security/gitleaks.sh",
"description": "Scan repositories for secrets"
},
{
"name": "trading-platform_api",
"type": "mcp",
"server": "scripts/trading/trading-platform-mcp-server.py",
"description": "trading-platform trading API client"
}
],
"channels": [
{
"type": "telegram",
"config_file": "config/telegram.json"
}
],
"schedules": [
{
"cron": "0 9 * * *",
"task": "Daily security scan",
"command": "scripts/security/daily-scan.sh"
}
]
},
"secrets": {
"manager": "pass",
"gpg_key_id": "ABC123",
"keys": [
"trading-platform/api-key",
"telegram/bot-token"
]
},
"dependencies": {
"system": ["bash", "python3", "jq", "curl"],
"python": ["requests", "pandas"],
"binaries": ["gitleaks", "pass"]
}
}
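A spec is only as good as its conformance checks. Here’s a sketch of a validator for this manifest (the required keys and file checks follow the example above; nothing here is a shipping tool):

```python
#!/usr/bin/env python3
"""Check an agent.json against the proposed spec: required top-level keys
present, and referenced identity/tool files actually on disk."""
import json
from pathlib import Path

REQUIRED_KEYS = {"spec_version", "agent", "capabilities", "secrets", "dependencies"}

def validate_manifest(manifest_path: Path) -> list:
    """Return a list of problems; an empty list means the manifest passes."""
    data = json.loads(Path(manifest_path).read_text())
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        return [f"missing top-level keys: {sorted(missing)}"]
    problems = []
    root = Path(manifest_path).parent
    # Identity file must exist next to the manifest.
    if not (root / data["agent"]["identity_file"]).exists():
        problems.append(f"identity file not found: {data['agent']['identity_file']}")
    # Every executable tool path must resolve.
    for tool in data["capabilities"].get("tools", []):
        if tool.get("type") == "executable" and not (root / tool["path"]).exists():
            problems.append(f"tool script missing: {tool['path']}")
    return problems
```

Running this on import would catch the most common migration failure: a manifest that references scripts which never made it into the archive.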
Standard Memory Interface
All platforms should support:
# Read identity
identity = agent.memory.read_identity() # Loads SOUL.md or equivalent
# Read long-term memory
knowledge = agent.memory.read_knowledge() # Loads MEMORY.md or equivalent
# Read recent context
recent = agent.memory.read_daily(date='2026-02-14')
# Write new memory
agent.memory.write_knowledge("Trading", "Automated Trading Strategy performs best in ranging markets")
agent.memory.write_daily("Deployed new security scan automation")
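Backing that interface with plain files is trivial, which is the point. A minimal Markdown-backed implementation (the class name and on-disk layout follow this guide’s conventions, not any existing API):

```python
from datetime import date
from pathlib import Path

class MarkdownMemory:
    """File-backed version of the standard memory interface above.
    Everything lives in SOUL.md, MEMORY.md, and memory/<date>.md."""

    def __init__(self, root):
        self.root = Path(root)

    def read_identity(self) -> str:
        return (self.root / "SOUL.md").read_text()

    def read_knowledge(self) -> str:
        return (self.root / "MEMORY.md").read_text()

    def read_daily(self, day: str) -> str:
        return (self.root / "memory" / f"{day}.md").read_text()

    def write_knowledge(self, section: str, fact: str) -> None:
        # Append under a fresh '## <section>' heading; a fuller version
        # would merge into an existing section instead.
        with (self.root / "MEMORY.md").open("a") as f:
            f.write(f"\n## {section}\n- {fact}\n")

    def write_daily(self, entry: str) -> None:
        path = self.root / "memory" / f"{date.today().isoformat()}.md"
        path.parent.mkdir(parents=True, exist_ok=True)
        with path.open("a") as f:
            f.write(f"- {entry}\n")
```

Any platform that can open a file can implement this; that is exactly the bar a portability spec should set.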
MCP Tool Compatibility
Adopt Model Context Protocol as the standard:
# tools/trading-platform-mcp.py
from mcp.server.fastmcp import FastMCP

server = FastMCP("trading-platform")

@server.tool()
def get_positions(account_id: str) -> list:
    """Retrieve current open positions from trading-platform."""
    # Implementation
    return positions

if __name__ == '__main__':
    server.run()
Any MCP-compliant tool works across OpenClaw, IronClaw, Nanobot, etc.
Import/Export CLI
# Export from any platform
agent-cli export --format portable-agent-1.0 --output my-agent.tar.gz
# Import to any platform
openclaw import my-agent.tar.gz
ironclaw import my-agent.tar.gz
nanobot import my-agent.tar.gz
Current status: This doesn’t exist yet. But the ecosystem is young enough that we could standardize now before everyone builds incompatible systems.
Lessons Learned
After this deep dive, here’s what I know:
1. Markdown-First Memory Is Non-Negotiable
Vector databases, binary formats, proprietary stores — they all create lock-in. Markdown is:
- Human-readable
- Version-controllable
- Platform-agnostic
- Greppable
- Future-proof
Action: If your agent stores memory anywhere other than plain text files, fix it now.
2. Platform-Specific Features Are Borrowed Time
Multi-agent coordination, built-in channels, native cron — these are conveniences, not dependencies. Design your automation so it uses platform features but doesn’t require them.
Action: Document every platform-specific feature you use and sketch the fallback plan.
3. Test Migration Before You Need It
“We’ll figure it out when the time comes” is how you end up locked in. Spin up a test environment, try loading your memory into a different platform, see what breaks.
Action: Quarterly migration test. Set a calendar reminder.
4. Secrets Are Harder Than Memory
GPG-encrypted pass stores work, but key management is fragile. Migrating secrets is where most people will hit pain.
Action: Document your GPG key recovery process. Test it. Store the recovery doc somewhere that survives your current platform.
5. The Ecosystem Will Converge (Eventually)
Right now every framework does things differently. But MCP is gaining traction, and there’s momentum toward standardization. In 12-24 months we might have real interoperability.
Action: Design for the world we want. Use MCP-compatible tools where possible. Push for portability specs.
Conclusion
I still run OpenClaw. I like it. The acquisition might be great, might be fine, might be nothing.
But I sleep better knowing that if I wake up tomorrow and need to migrate, I can do it in a weekend. My agent’s memory lives in Markdown files in a Git repo. My tools are Bash and Python scripts. My secrets are in pass. My automation is documented in code.
Portability isn’t about expecting the worst. It’s about removing vendor leverage.
The AI agent ecosystem is going to fragment before it consolidates. We’ll have acquisitions, pivots, shutdowns, and forks. The agents that survive won’t be the ones on the “best” platform — they’ll be the ones that can move.
So keep your memory in Markdown. Write portable tools. Test your migration. Document your automation.
And know what you’d pack.
References & Further Reading
- OpenClaw: github.com/openclaw/openclaw
- Pi (minimal agent): OpenClaw Pi documentation
- Nanobot: github.com/HKUDS/nanobot
- IronClaw: github.com/nearai/ironclaw
- Goose (Block’s agent): github.com/block/goose
- Model Context Protocol: modelcontextprotocol.io
- OpenClaw acquisition rumors: TrendingTopics article
- Lex Fridman on AI agents: YouTube interview
Related
- Agent-Friendly Skills: A Portable SKILL.md Standard — how to write portable skills that work across Pi and Nanobot, with complete examples and migration patterns.
Written by Surya | setiyaputra.me | GitHub