How to Make AI Agent Skills Portable and Reusable
Build AI agent skills once, use everywhere. Learn the SKILL.md standard that makes capabilities portable across platforms with real migration examples.
Enter SKILL.md: a portable, human-readable specification format that lets you write once and adapt anywhere.
## Why Portable Skills Matter
When you invest time building a useful agent capability — say, fetching weather data, taking screenshots, or compiling a daily digest — you want it to work across every agent you run. Not just today’s platform, but next month’s upgrade or next year’s migration.
Without a standard:
- Skills are tightly coupled to platform-specific APIs
- Moving between agents means rewriting everything
- Sharing skills with the community is fragile (works on my machine!)
- Testing and debugging require the full agent runtime
With a portable standard:
- Skills are self-contained and self-documenting
- Platform adapters translate the spec into local bindings
- You can test skills independently (no agent needed)
- Migration is a matter of running adapters, not rewriting code
## Anatomy of SKILL.md
A complete SKILL.md file has seven sections:
````markdown
# Skill Name

## Description
What this skill does, in plain language. One or two sentences.

## Triggers
When should the agent invoke this skill?
- User asks "what's the weather"
- Scheduled daily at 6am
- Detected keyword in email subject

## Dependencies
- bash (any version)
- curl (for HTTP requests)
- jq (JSON parsing)
- Optional: API key stored in environment variable WEATHER_API_KEY

## Usage
How the agent should call this skill. Arguments, expected outputs, exit codes.
```bash
./skill.sh [arguments]
```

## Scripts
The actual implementation. Can be bash, Python, Node.js — whatever works. Store the script alongside SKILL.md or reference external files.

## Platform Notes
### Pi
Call via bash alias; store in ~/skills/

### Nanobot
Use a @tool decorator wrapper in Python; call skill.sh via subprocess.

## Examples
Real-world usage examples with expected inputs and outputs.
````
That's it. No JSON schema, no YAML frontmatter, just structured markdown. Human-readable, git-friendly, and easy to parse.
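Because the format is plain markdown with predictable `##` headings, a parser can be a few lines of shell. A minimal sketch (the `extract_section` helper is illustrative, not part of the spec):

```shell
# extract_section FILE SECTION — print the body under the "## SECTION" heading
# of a SKILL.md file, stopping at the next "## " heading. Illustrative only.
extract_section() {
  awk -v heading="## $2" '
    $0 == heading { found = 1; next }  # start printing after the heading
    /^## / && found { exit }           # stop at the next H2 section
    found { print }
  ' "$1"
}
```

For example, `extract_section SKILL.md Dependencies` prints the dependency list so an installer can check it before copying anything.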
## Four Complete Skill Examples
Let's walk through four real skills with full implementations.
### 1. Weather Lookup
**SKILL.md:**
````markdown
# Weather Lookup

## Description
Fetches current weather data for a given city using the wttr.in API.

## Triggers
- User asks "what's the weather in [city]"
- Morning briefing automation

## Dependencies
- curl (any version)
- jq (for JSON parsing)

## Usage
```bash
./weather.sh [city]
```
Returns JSON: {"temp": "18°C", "condition": "Clear", "humidity": "65%"}
Exit code 0 on success, 1 on error.

## Scripts
See weather.sh

## Platform Notes
### Pi
Save to ~/skills/weather/ and create alias:
```bash
alias weather="bash ~/skills/weather/weather.sh"
```

### Nanobot
Wrap in a Python tool decorator:
```python
import json
import subprocess

@tool
def weather(city: str) -> dict:
    result = subprocess.run(["./weather.sh", city], capture_output=True, text=True)
    return json.loads(result.stdout)
```

## Examples
Input: ./weather.sh Sydney
Output: {"temp": "22°C", "condition": "Partly cloudy", "humidity": "70%"}
````
**weather.sh:**
```bash
#!/usr/bin/env bash
set -euo pipefail
CITY="${1:-Sydney}"
# Fetch weather from wttr.in API (no auth required)
RESPONSE=$(curl -sf "https://wttr.in/${CITY}?format=j1")
# Parse JSON response
TEMP=$(echo "$RESPONSE" | jq -r '.current_condition[0].temp_C')
CONDITION=$(echo "$RESPONSE" | jq -r '.current_condition[0].weatherDesc[0].value')
HUMIDITY=$(echo "$RESPONSE" | jq -r '.current_condition[0].humidity')
# Output as JSON
cat <<EOF
{"temp": "${TEMP}°C", "condition": "${CONDITION}", "humidity": "${HUMIDITY}%"}
EOF
```

### 2. Memory Writer
**SKILL.md:**
````markdown
# Memory Writer

## Description
Extracts key facts from conversation context and appends them to a structured markdown knowledge base.

## Triggers
- User says "remember this"
- Agent detects a factual statement worth persisting
- End of a significant conversation thread

## Dependencies
- bash (any version)
- A writable memory directory (set via MEMORY_DIR environment variable)

## Usage
```bash
./memory_write.sh "[topic]" "[fact]"
```
Creates or appends to $MEMORY_DIR/[topic].md with a timestamped entry.
Exit code 0 on success, 1 if the memory directory is not writable.

## Scripts
See memory_write.sh

## Platform Notes
### Pi
Set MEMORY_DIR in ~/.bashrc:
```bash
export MEMORY_DIR=~/pi-memory
```

### Nanobot
Pass MEMORY_DIR as an environment variable to the subprocess call.

## Examples
Input: ./memory_write.sh "skills" "SKILL.md format enables portability"
Effect: appends to memory/skills.md:

## 2026-02-15 01:30 UTC
SKILL.md format enables portability
````
**memory_write.sh:**
```bash
#!/usr/bin/env bash
set -euo pipefail
TOPIC="$1"
FACT="$2"
MEMORY_DIR="${MEMORY_DIR:-./memory}"
# Ensure memory directory exists
mkdir -p "$MEMORY_DIR"
MEMORY_FILE="$MEMORY_DIR/${TOPIC}.md"
TIMESTAMP=$(date -u +"%Y-%m-%d %H:%M UTC")
# Append to memory file
cat >> "$MEMORY_FILE" <<EOF
## $TIMESTAMP
$FACT
EOF
echo "✓ Saved to $MEMORY_FILE"
```

### 3. Daily Digest
**SKILL.md:**
````markdown
# Daily Digest

## Description
Compiles a summary of key sources (RSS feeds, saved links, project updates) into a markdown digest.

## Triggers
- Scheduled daily at 6:00 AM local time
- User manually requests "give me the digest"

## Dependencies
- bash, curl, jq
- xmllint (for RSS parsing)
- A readable sources.txt file with RSS/JSON feed URLs

## Usage
```bash
./daily_digest.sh > digest-$(date +%Y-%m-%d).md
```
Outputs markdown with timestamped sections for each source. Exit code 0 on success, 1 if sources.txt is missing or on network error.

## Scripts
See daily_digest.sh

## Platform Notes
### Pi
Pi doesn't support scheduled tasks natively. Run manually or use system cron with Pi's workspace path.

### Nanobot
Use a Python scheduler (APScheduler) to call the skill:
```python
import subprocess

from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.add_job(lambda: subprocess.run(["./daily_digest.sh"]), "cron", hour=6)
scheduler.start()
```

## Examples
Output format:

# Daily Digest — 2026-02-15

## Hacker News Top Stories
- [Article Title](https://example.com) — 234 points
- [Another Article](https://example.com) — 189 points

## Project Updates
- repo-name: 3 commits since yesterday
````
**daily_digest.sh:**
```bash
#!/usr/bin/env bash
set -euo pipefail
SOURCES_FILE="${SOURCES_FILE:-./sources.txt}"
DATE=$(date +%Y-%m-%d)
echo "# Daily Digest — $DATE"
echo ""
# Read sources.txt line by line
while IFS= read -r SOURCE; do
[[ "$SOURCE" =~ ^# ]] && continue # Skip comments
[[ -z "$SOURCE" ]] && continue # Skip blank lines
# Detect source type and parse
if [[ "$SOURCE" =~ rss ]]; then
# RSS feed
FEED_DATA=$(curl -sf "$SOURCE")
FEED_TITLE=$(echo "$FEED_DATA" | xmllint --xpath '//channel/title/text()' - 2>/dev/null || echo "RSS Feed")
echo "## $FEED_TITLE"
echo "$FEED_DATA" | xmllint --xpath '//item/title/text()' - 2>/dev/null | head -5
echo ""
elif [[ "$SOURCE" =~ json ]]; then
# JSON API
JSON_DATA=$(curl -sf "$SOURCE")
echo "## JSON Source"
echo "$JSON_DATA" | jq -r '.items[0:5] | .[] | "- \(.title)"' 2>/dev/null || echo "Parse error"
echo ""
fi
done < "$SOURCES_FILE"
echo "---"
echo "Generated at $(date -u)"
```

### 4. Screenshot & Analyze
**SKILL.md:**
````markdown
# Screenshot & Analyze

## Description
Takes a screenshot of a given URL and runs it through a vision model for analysis.

## Triggers
- User says "screenshot [URL] and tell me what you see"
- Automated monitoring of web pages for visual changes

## Dependencies
- playwright (npx playwright install chromium)
- curl (for API calls to the vision model)
- Environment variable: VISION_API_KEY (e.g., an OpenAI API key)

## Usage
```bash
./screenshot_analyze.sh [URL] "[question]"
```
Outputs: screenshot saved to /tmp/, vision model analysis printed to stdout. Exit code 0 on success, 1 on browser error, 2 on API error.

## Scripts
See screenshot_analyze.sh

## Platform Notes
### Pi
Requires Playwright installed via npx. Store screenshots in ~/screenshots/

### Nanobot
Use the Playwright Python library instead of bash + npx:
```python
from playwright.sync_api import sync_playwright

def screenshot(url):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        page.screenshot(path="screenshot.png")
        browser.close()
```

## Examples
Input: ./screenshot_analyze.sh "https://example.com" "What's the main headline?"
Output:

Screenshot saved: /tmp/screenshot-example-com.png
Analysis: The main headline reads "Example Domain" with descriptive text below.
````
**screenshot_analyze.sh:**
```bash
#!/usr/bin/env bash
set -euo pipefail
URL="$1"
QUESTION="${2:-Describe this page}"
OUTPUT_FILE="/tmp/screenshot-$(echo "$URL" | sed 's|https\?://||g' | tr '/' '-').png"
# Take screenshot with Playwright
npx -y playwright screenshot "$URL" "$OUTPUT_FILE" --viewport-size=1280,720
# Encode image as base64 (GNU coreutils; on macOS use: base64 < "$OUTPUT_FILE" | tr -d '\n')
IMAGE_B64=$(base64 -w 0 "$OUTPUT_FILE")
# Call vision API (OpenAI example)
RESPONSE=$(curl -sf https://api.openai.com/v1/chat/completions \
-H "Authorization: Bearer $VISION_API_KEY" \
-H "Content-Type: application/json" \
-d @- <<EOF
{
"model": "gpt-4o",
"messages": [{
"role": "user",
"content": [
{"type": "text", "text": "$QUESTION"},
{"type": "image_url", "image_url": {"url": "data:image/png;base64,$IMAGE_B64"}}
]
}]
}
EOF
)
# Extract answer
ANALYSIS=$(echo "$RESPONSE" | jq -r '.choices[0].message.content')
echo "Screenshot saved: $OUTPUT_FILE"
echo "Analysis: $ANALYSIS"
```

## Platform Adaptation
The beauty of SKILL.md is that the spec is platform-agnostic, but the execution can be tailored.
### Pi Adapter
Pi has no formal skill system, so skills are just bash scripts called via aliases or direct paths.
```bash
# Adapter: install skill for Pi
SKILL_NAME="weather"
mkdir -p ~/skills/$SKILL_NAME
cp ./$SKILL_NAME/* ~/skills/$SKILL_NAME/
chmod +x ~/skills/$SKILL_NAME/*.sh

# Add alias to .bashrc
echo "alias weather='bash ~/skills/weather/weather.sh'" >> ~/.bashrc
source ~/.bashrc
```
Now Pi can call `weather Sydney` directly in conversation.
### Nanobot Adapter
Nanobot uses Python decorators to define tools. Wrap bash skills with subprocess calls.
**adapters/nanobot_adapter.py:**

```python
import json
import subprocess
from typing import Any

def load_skill(skill_name: str, script_name: str):
    """Generic adapter to call bash skills from Nanobot."""
    def tool_fn(*args, **kwargs) -> Any:
        # Convert kwargs to positional args
        cmd_args = list(args) + [str(v) for v in kwargs.values()]
        result = subprocess.run(
            [f"./skills/{skill_name}/{script_name}"] + cmd_args,
            capture_output=True,
            text=True,
            check=True,
        )
        # Try to parse JSON output
        try:
            return json.loads(result.stdout)
        except json.JSONDecodeError:
            return {"output": result.stdout.strip()}
    return tool_fn

# Register skills
from nanobot import tool

weather = tool(load_skill("weather", "weather.sh"), name="weather", description="Fetch weather for a city")
memory_write = tool(load_skill("memory", "memory_write.sh"), name="memory_write", description="Save a fact to memory")
```
Now Nanobot can invoke `weather(city="Sydney")` and get structured JSON back.
## Portable Skills Repo Pattern
To maximize portability, organize skills in a dedicated git repository:
```
portable-agent-skills/
├── README.md
├── skills/
│   ├── weather/
│   │   ├── SKILL.md
│   │   └── weather.sh
│   ├── memory/
│   │   ├── SKILL.md
│   │   └── memory_write.sh
│   ├── digest/
│   │   ├── SKILL.md
│   │   ├── daily_digest.sh
│   │   └── sources.txt
│   └── screenshot/
│       ├── SKILL.md
│       └── screenshot_analyze.sh
├── adapters/
│   ├── pi_adapter.sh
│   └── nanobot_adapter.py
└── tests/
    └── test_weather.sh
```
Each adapter script reads SKILL.md, parses platform notes, and installs the skill accordingly.
```bash
#!/usr/bin/env bash
set -euo pipefail

# Install every skill into the target directory (default: ~/skills)
SKILLS_DIR="${SKILLS_DIR:-$HOME/skills}"
mkdir -p "$SKILLS_DIR"

for SKILL in skills/*/; do
  SKILL_NAME=$(basename "$SKILL")
  cp -r "$SKILL" "$SKILLS_DIR/"
  chmod +x "$SKILLS_DIR/$SKILL_NAME"/*.sh
done
echo "✓ All skills installed to $SKILLS_DIR"
```
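The tests/ directory keeps skills honest. A sketch of what tests/test_weather.sh might check, assuming the weather.sh contract from example 1 (the `check_weather_json` helper name is hypothetical; grep is used so the check is dependency-free):

```shell
# check_weather_json CMD [ARGS...] — run a weather-style skill and verify its
# stdout contains the temp/condition/humidity keys promised in SKILL.md.
# Hypothetical helper; a stricter version could validate the JSON with jq.
check_weather_json() {
  local output
  output="$("$@")" || return 1
  echo "$output" | grep -q '"temp"' &&
  echo "$output" | grep -q '"condition"' &&
  echo "$output" | grep -q '"humidity"'
}
```

A test script would then run something like `check_weather_json ./skills/weather/weather.sh Sydney` and fail with a nonzero exit code if the contract is broken.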
## Migration Checklist
1. **Audit existing skills**
   - List all custom tools, scripts, and automations currently used by your agent
   - Identify dependencies (APIs, environment variables, file paths)
2. **Extract skill logic**
   - Pull out the core script or function that performs the skill
   - Make scripts executable and self-contained
3. **Write SKILL.md for each skill**
   - Document what it does, when to trigger, dependencies, usage
   - Add platform notes for each target system
4. **Test independently**
   - Run each skill script directly from the command line
   - Verify exit codes, error handling, and output format
   - Don't rely on the agent runtime for testing
5. **Create adapters**
   - Write or reuse an adapter (like pi_adapter.sh or nanobot_adapter.py) for each target platform
   - Test adapters in isolated environments (fresh VM or container)
6. **Migrate incrementally**
   - Don't switch all skills at once — move one at a time
   - Keep old and new versions running in parallel during the transition
   - Validate each migrated skill before decommissioning the old version
7. **Document the migration**
   - Update agent config to point to new skill paths
   - Record any breaking changes or updated environment variables
   - Share lessons learned with your team or community
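Step 4 can itself be automated. A smoke-test sketch, assuming the repo layout above (one or more .sh scripts per skill directory; the `smoke_test_skills` helper is illustrative):

```shell
# smoke_test_skills DIR — run every skill script under DIR with its default
# arguments and report pass/fail per script. Returns nonzero if any fail.
smoke_test_skills() {
  local script status=0
  for script in "$1"/*/*.sh; do
    if bash "$script" > /dev/null 2>&1; then
      echo "✓ $script"
    else
      # $? here is the exit status of the failed script invocation
      echo "✗ $script (exit $?)"
      status=1
    fi
  done
  return $status
}
```

Running `smoke_test_skills skills` before and after migration gives a quick diff of which skills still work on the new platform.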
## Conclusion
By standardizing skill definitions in human-readable markdown and decoupling implementation from platform bindings, you get:
- Portability: Write once, adapt anywhere
- Testability: Skills run independently of agent runtimes
- Shareability: Git-friendly, self-documenting skill libraries
- Future-proofing: When the next agent platform launches, you’ve got adapters ready
Start small: pick one useful skill, write a SKILL.md, and test it across two platforms. Then build from there.
Your agents will thank you. Or at least, they’ll work reliably across every runtime you throw at them.
**Related Posts:**
- Building Skills for Claude: A Practical Guide — Anthropic’s official patterns for skill design
- How to Monetize AI Agent Skills — Turn your skills into revenue
- Experimenting with OpenClaw — Multi-agent system architecture
**Further Reading:**
- Nanobot on GitHub — Python-native agent framework with tool decorators
Written by Surya, February 2026