🛠️ OpenClaw Skills (Terminal Extensions)
What it does: Turns your terminal into a Google Workspace command center. The agent can read emails, create calendar events, search Drive, and update Sheets, all from natural language commands.
Why professionals search this: The #1 most downloaded skill of 2026. It transforms Gemini/Claude into a “Chief of Staff” that manages your entire digital life.
$ # GOG Skill Installer for OpenClaw
$ # Search Intent: "OpenClaw GOG skill install for email automation"

$ # Step 1: Clone the skill repository
$ git clone https://github.com/openclaw/gog-skill ~/.openclaw/skills/gog

$ # Step 2: Install Python dependencies
$ pip install google-auth google-auth-oauthlib google-auth-httplib2 google-api-python-client

$ # Step 3: Set up OAuth credentials
$ echo "Visit: https://console.cloud.google.com/apis/credentials"
$ echo "Create OAuth 2.0 Client ID for Desktop App"
$ echo "Download credentials.json to ~/.openclaw/skills/gog/"

$ # Step 4: Run initial authentication
$ python ~/.openclaw/skills/gog/auth.py
$ # Follow browser flow to authorize access

$ # Step 5: Test the skill
$ openclaw skill run gog --command "list unread emails from today"

$ # Step 6: Enable in OpenClaw config
$ echo 'skills: ["gog"]' >> ~/.openclaw/config.yaml
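Under the hood, a skill like this drives the Gmail API directly. Here is a minimal sketch of the "list unread emails from today" command, assuming the OAuth flow above has produced a `token.json`; `build_gmail_query` and `list_unread_today` are illustrative names, not part of the gog skill itself:

```python
# Sketch: list unread Gmail messages from the last day.
# build_gmail_query is a hypothetical helper that composes
# Gmail's standard search operators (is:unread, newer_than:, from:).

def build_gmail_query(unread=True, newer_than_days=None, sender=None):
    """Compose a Gmail search query string from simple filters."""
    parts = []
    if unread:
        parts.append("is:unread")
    if newer_than_days:
        parts.append(f"newer_than:{newer_than_days}d")
    if sender:
        parts.append(f"from:{sender}")
    return " ".join(parts)

def list_unread_today():
    # Imports kept local so the query builder works without the Google SDK.
    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build

    creds = Credentials.from_authorized_user_file("token.json")
    service = build("gmail", "v1", credentials=creds)
    resp = service.users().messages().list(
        userId="me", q=build_gmail_query(unread=True, newer_than_days=1)
    ).execute()
    return resp.get("messages", [])
```

`build_gmail_query(unread=True, newer_than_days=1)` yields `is:unread newer_than:1d`, the same query you would type into Gmail's search box.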
What it does: Turns any website into clean Markdown that LLMs can read. Bypasses bot blockers, handles JavaScript rendering, and extracts structured data.
Why professionals search this: The gold standard for agentic scraping in 2026. Used by 10,000+ developers to feed real-time web data into their agents.
firecrawl crawl https://docs.example.com --limit 100 --output markdown
$ # Firecrawl CLI Skill Installer
$ # Search Intent: "Firecrawl sandbox script for parallel web extraction"

$ # Step 1: Install Firecrawl CLI via npm
$ npm install -g @mendable/firecrawl-cli

$ # Step 2: Set up OpenClaw skill wrapper
$ mkdir -p ~/.openclaw/skills/firecrawl
$ cat > ~/.openclaw/skills/firecrawl/skill.yaml << 'EOF'
name: firecrawl
description: Extract web content as clean Markdown
commands:
  - name: scrape
    description: Scrape a single URL
    args: ["url"]
  - name: crawl
    description: Crawl entire website
    args: ["url", "--limit"]
  - name: search
    description: Search and extract
    args: ["query"]
EOF

$ # Step 3: Get your API key
$ echo "Get API key at: https://firecrawl.dev"
$ export FIRECRAWL_API_KEY="your-key-here"

$ # Step 4: Test the skill
$ openclaw skill run firecrawl --scrape "https://news.ycombinator.com"

$ # Step 5: Parallel extraction example
$ firecrawl crawl "https://docs.example.com" --limit 50 --parallel 10 --output markdown
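If you are driving the CLI from your own scripts rather than through OpenClaw, parallel extraction is just a thread pool fanning URLs out to per-URL scrape calls. A sketch, assuming the `firecrawl scrape <url> --output markdown` invocation shown above; `batched`, `scrape_one`, and `scrape_all` are our own illustrative helpers:

```python
# Sketch: run many firecrawl scrapes in parallel from Python.
# The exact CLI flags mirror the commands above and are an assumption.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def batched(items, size):
    """Yield successive chunks of `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def scrape_one(url):
    # Hypothetical per-URL call mirroring Step 4 above.
    return subprocess.run(
        ["firecrawl", "scrape", url, "--output", "markdown"],
        capture_output=True, text=True,
    ).stdout

def scrape_all(urls, workers=10):
    """Scrape all URLs with up to `workers` concurrent CLI processes."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(scrape_one, urls))
```

`batched` is also handy for rate-limit-friendly crawling: scrape one chunk, sleep, then scrape the next.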
What it does: One skill to connect your agent to 800+ apps. Handles OAuth, rate limiting, and webhook management automatically.
Why professionals search this: Managing authentication for multiple tools is a nightmare. Composio solves it with a single API. The most searched “multi-tool auth” skill of 2026.
$ # Composio Skill Installer
$ # Search Intent: "Composio multi-tool auth script"

$ # Step 1: Install Composio SDK
$ pip install composio-core

$ # Step 2: Create skill directory
$ mkdir -p ~/.openclaw/skills/composio

$ # Step 3: Set up OpenClaw wrapper
$ cat > ~/.openclaw/skills/composio/skill.py << 'EOF'
import os
from composio import ComposioToolSet

toolset = ComposioToolSet(api_key=os.getenv("COMPOSIO_API_KEY"))

def execute(app: str, action: str, params: dict):
    """Execute action on any connected app."""
    tools = toolset.get_tools(apps=[app])
    result = toolset.execute_action(action, params)
    return result
EOF

$ # Step 4: Authenticate your apps
$ composio add slack
$ composio add jira
$ composio add salesforce
$ composio add github

$ # Step 5: List connected apps
$ composio list

$ # Step 6: Test the skill
$ openclaw skill run composio --app slack --action "send_message" --params '{"channel":"general","text":"Hello from AI"}'
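Composio handles rate limiting on its side, but it is still good practice to wrap any remote action call in client-side retries with exponential backoff. A minimal, library-agnostic sketch; `with_retries` and `RateLimitError` are illustrative names, not Composio APIs:

```python
# Sketch: exponential backoff around any remote action call.
# RateLimitError stands in for whatever exception your client raises on 429s.
import time

class RateLimitError(Exception):
    pass

def with_retries(fn, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call fn(); on RateLimitError, wait and retry, doubling the delay each time."""
    for attempt in range(attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            sleep(base_delay * (2 ** attempt))
```

Usage: `with_retries(lambda: execute("slack", "send_message", params))`. Injecting `sleep` as a parameter keeps the backoff schedule unit-testable without real waits.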
What it does: Every time you correct the AI, this skill logs the correction. Next time, the agent remembers: it never makes the same mistake twice.
Why professionals search this: The #1 complaint about AI is “it forgets.” Self-Improver gives agents persistent memory across sessions. The most downloaded meta-script of 2026.
$ # Self-Improver Skill Installer
$ # Search Intent: "Self-improving agent memory script"

$ # Step 1: Create skill directory
$ mkdir -p ~/.openclaw/skills/self_improver
$ cd ~/.openclaw/skills/self_improver

$ # Step 2: Create memory database
$ cat > memory.py << 'EOF'
import os
import sqlite3
from datetime import datetime

class AgentMemory:
    def __init__(self, db_path="~/.openclaw/memory.db"):
        self.conn = sqlite3.connect(os.path.expanduser(db_path))
        self._init_db()

    def _init_db(self):
        self.conn.execute("""
            CREATE TABLE IF NOT EXISTS corrections (
                id INTEGER PRIMARY KEY,
                user_input TEXT,
                wrong_response TEXT,
                correct_response TEXT,
                correction_reason TEXT,
                created_at TIMESTAMP
            )
        """)
        self.conn.execute("""
            CREATE TABLE IF NOT EXISTS preferences (
                key TEXT PRIMARY KEY,
                value TEXT,
                updated_at TIMESTAMP
            )
        """)
        self.conn.commit()

    def learn_correction(self, user_input, wrong, correct, reason):
        """Store a correction so the agent never repeats the mistake."""
        self.conn.execute(
            "INSERT INTO corrections (user_input, wrong_response, correct_response, correction_reason, created_at) VALUES (?, ?, ?, ?, ?)",
            (user_input, wrong, correct, reason, datetime.now())
        )
        self.conn.commit()

    def recall(self, user_input):
        """Retrieve relevant corrections for this input."""
        cursor = self.conn.execute(
            "SELECT wrong_response, correct_response FROM corrections WHERE user_input LIKE ? LIMIT 5",
            (f"%{user_input}%",)
        )
        return [{"wrong": row[0], "correct": row[1]} for row in cursor.fetchall()]

    def set_preference(self, key, value):
        """Store a user preference."""
        self.conn.execute(
            "REPLACE INTO preferences (key, value, updated_at) VALUES (?, ?, ?)",
            (key, value, datetime.now())
        )
        self.conn.commit()

    def get_preference(self, key):
        cursor = self.conn.execute("SELECT value FROM preferences WHERE key = ?", (key,))
        row = cursor.fetchone()
        return row[0] if row else None
EOF

$ # Step 3: Test memory
$ python -c "from memory import AgentMemory; m=AgentMemory(); m.learn_correction('write email', 'Dear Sir', 'Hi [Name]', 'Use first name for casual emails'); print(m.recall('email'))"
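Stored corrections only help if they reach the model, so the recalled entries need to be folded into the agent's system prompt on every turn. A sketch of that step, assuming the `{"wrong": ..., "correct": ...}` dicts that `recall()` returns above; `format_corrections` is a hypothetical helper, not part of the skill:

```python
# Sketch: render recalled corrections as a system-prompt preamble.
# The input dict shape matches what AgentMemory.recall() returns above.

def format_corrections(corrections):
    """Turn past corrections into instructions the agent sees every turn."""
    if not corrections:
        return ""
    lines = ["Learned from past corrections:"]
    for c in corrections:
        lines.append(f'- Do NOT say "{c["wrong"]}"; say "{c["correct"]}" instead.')
    return "\n".join(lines)

preamble = format_corrections([
    {"wrong": "Dear Sir", "correct": "Hi [Name]"},
])
```

Prepending this preamble to the system prompt before each model call is what turns the SQLite log into behavior the agent actually follows.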
What it does: Transcribes audio to text entirely on your machine. No cloud, no API keys, no data leaving your computer.
Why professionals search this: Privacy concerns drive 850% growth in local AI searches. European businesses use this for GDPR-compliant meeting transcription. The #1 offline skill of 2026.
Use --model base for fast transcription or --model large for the highest accuracy. Both run offline.
$ # Whisper Local Skill Installer
$ # Search Intent: "Private offline meeting transcription skill"

$ # Step 1: Install Whisper and dependencies
$ pip install openai-whisper
$ pip install torch torchaudio  # CPU or CUDA version

$ # Step 2: Create skill directory
$ mkdir -p ~/.openclaw/skills/whisper
$ cd ~/.openclaw/skills/whisper

$ # Step 3: Create skill wrapper
$ cat > skill.py << 'EOF'
import whisper
import os
import sys

model_name = os.getenv("WHISPER_MODEL", "base")
model = whisper.load_model(model_name)

def transcribe(audio_path: str, language: str = "en") -> dict:
    """Transcribe audio file to text (100% local)."""
    result = model.transcribe(audio_path, language=language)
    return {
        "text": result["text"],
        "segments": result["segments"],
        "language": result["language"]
    }

if __name__ == "__main__":
    if len(sys.argv) > 1:
        audio_file = sys.argv[1]
        output = transcribe(audio_file)
        print(output["text"])
        # Optionally save to file
        with open(audio_file + ".txt", "w") as f:
            f.write(output["text"])
EOF

$ # Step 4: Test with a sample audio file
$ python skill.py ~/Downloads/meeting_recording.mp3

$ # Step 5: Download larger model for better accuracy (first run only)
$ export WHISPER_MODEL=medium
$ python -c "import whisper; whisper.load_model('medium')"
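For meeting notes you usually want timestamped captions, not just a text dump. Whisper's `segments` (each with `start`, `end`, and `text`) map directly onto the SubRip (.srt) format; a small self-contained sketch, with `srt_timestamp` and `to_srt` as our own helper names:

```python
# Sketch: convert Whisper's segment list into SubRip (.srt) captions.
# Segment dicts with "start", "end", "text" keys match what
# model.transcribe() returns in the wrapper above.

def srt_timestamp(seconds):
    """Format seconds as HH:MM:SS,mmm per the SRT convention."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """Render numbered, timestamped caption blocks."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)
```

Pipe `to_srt(output["segments"])` into a `.srt` file next to the `.txt` transcript and any video player can show the captions, still with nothing leaving your machine.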
⚡ One-Line Install (All Skills)
$ curl -sSL https://yourdomain.com/ai-agent-toolkit/skills/install-all.sh | bash
This installs all 5 skills with default configurations. As with any curl-to-bash install, review the script first if you prefer, then customize API keys and paths after installation.
