Crawlio Docs

Resources, Prompts, and Skills

Resources

MCP resources provide direct, read-only access to Crawlio state without calling tools. Clients can subscribe to resource updates for live data.

Static resources

| URI | MIME type | Description |
| --- | --- | --- |
| crawlio://status | application/json | Engine state, progress counters, seed URL |
| crawlio://settings | application/json | Current download settings and crawl policy |
| crawlio://site-tree | text/plain | Downloaded file paths as an ASCII directory tree |
| crawlio://enrichment | application/json | All browser enrichment data across all URLs |

crawlio://status tries the HTTP API first, then falls back to reading the state file on disk. crawlio://site-tree reads from file-based state. All other resources use HTTP only.
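The HTTP-first, file-fallback pattern used by crawlio://status can be sketched as below. The API endpoint and state-file path are illustrative assumptions, not the server's actual configuration:

```python
import json
import os
import urllib.request

def read_status(api_url="http://localhost:8080/status",
                state_file="~/.crawlio/state.json"):
    """Return engine status, preferring the HTTP API and falling back
    to the on-disk state file. Both locations are hypothetical."""
    try:
        with urllib.request.urlopen(api_url, timeout=2) as resp:
            return json.load(resp)
    except OSError:
        # API unreachable (e.g. engine not running): read file-based state.
        with open(os.path.expanduser(state_file)) as f:
            return json.load(f)
```

The fallback keeps the resource readable even when the crawl engine's HTTP API is down, at the cost of possibly returning slightly stale data.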

Resource template

| URI template | MIME type | Description |
| --- | --- | --- |
| crawlio://enrichment/{url} | application/json | Enrichment data for a specific URL |

Example: crawlio://enrichment/https%3A%2F%2Fexample.com returns framework detection, network requests, console logs, and DOM snapshot for that URL.
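As the example shows, the target URL is percent-encoded before being embedded in the template. A client could build the URI with the standard library:

```python
from urllib.parse import quote

def enrichment_uri(url: str) -> str:
    # Percent-encode the target URL (safe="" also encodes "/" and ":")
    # so it can occupy the {url} slot of the resource template.
    return "crawlio://enrichment/" + quote(url, safe="")

print(enrichment_uri("https://example.com"))
# → crawlio://enrichment/https%3A%2F%2Fexample.com
```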

When to use resources vs tools

Use resources when you want a live data subscription or a simple read without parameters. Use tools when you need filtering, pagination, or write operations.

| Need | Use |
| --- | --- |
| Subscribe to crawl progress updates | crawlio://status resource |
| Get enrichment for a specific URL | crawlio://enrichment/{url} template |
| Filter logs by category and level | get_crawl_logs tool |
| Get only failed URLs | get_failed_urls tool |
| Start or stop a crawl | start_crawl / stop_crawl tools |
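At the wire level, MCP clients read or subscribe to these resources with JSON-RPC requests. A minimal sketch of the envelope, following the standard MCP resources/read and resources/subscribe methods:

```python
import json

def resource_request(method: str, uri: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 envelope for MCP resource methods
    such as resources/read and resources/subscribe."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": {"uri": uri},
    })

# One-off read of the status resource:
resource_request("resources/read", "crawlio://status")
# Subscribe so the server pushes updates as the crawl progresses:
resource_request("resources/subscribe", "crawlio://status")
```

In practice your MCP client library handles this framing for you; the sketch only shows what crosses the transport.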

Prompts

MCP prompts are pre-built workflow templates that AI clients can invoke. They guide the AI through multi-step operations with the right tool sequence.

| Prompt | Arguments | Description |
| --- | --- | --- |
| crawl-and-analyze | url (required), maxDepth (optional) | Crawl a site, then analyze framework, structure, and errors |
| export-site | url (required), format (required), destination (optional) | Crawl a site and export in a specified format |
| compare-sites | url1 (required), url2 (required) | Crawl two sites and compare structure, frameworks, and content |
| fix-failed-urls | (none) | Diagnose failed URLs from the current crawl and retry them |

Using prompts

In clients that support MCP prompts (such as Claude Desktop), prompts appear in the prompt picker. Select one, fill in the arguments, and the AI receives a structured instruction to execute the workflow.

Example with crawl-and-analyze:

Prompt: crawl-and-analyze
Arguments:
  url: "https://docs.stripe.com"
  maxDepth: 3
 
The AI receives instructions to:
  1. Call start_crawl
  2. Poll get_crawl_status until completed
  3. Call trigger_capture on the seed URL
  4. Call get_enrichment to read framework data
  5. Call get_errors to check for failures
  6. Summarize findings
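Step 2's polling loop might look like the sketch below. Here call_tool stands in for however your MCP client invokes a tool, and the state values are illustrative, not guaranteed field names:

```python
import time

def wait_for_completion(call_tool, interval=5.0, timeout=600.0):
    """Poll get_crawl_status until the crawl reaches a terminal state.

    `call_tool(name)` is a hypothetical wrapper around the MCP client's
    tool-call mechanism; it is assumed to return the status as a dict.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call_tool("get_crawl_status")
        if status.get("state") in ("completed", "failed", "stopped"):
            return status
        time.sleep(interval)
    raise TimeoutError("crawl did not finish within the timeout")
```

A deadline based on time.monotonic() keeps the loop bounded even if the crawl stalls, which matters when the AI is driving the workflow unattended.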

Skills

Skills are multi-step workflow prompts installed as files in ~/.claude/skills/. They provide guided orchestration patterns for Claude Code.

Installed skills

The crawlio-mcp init wizard installs 5 skills:

| Skill | What it does |
| --- | --- |
| crawlio-mcp | Full tool reference (49 tools with parameters and examples) |
| crawl-site | Intelligent crawl workflow: start, monitor, adjust, export |
| audit-site | Multi-phase site audit: crawl, capture, enrich, analyze, report |
| observe | Query the observation log with filters |
| finding | Create evidence-backed findings from observations |

One agent template is also installed:

| Agent | What it does |
| --- | --- |
| site-auditor | Multi-pass analysis agent that crawls, captures, and reports |

Using skills in Claude Code

Invoke a skill with a slash command:

/crawl-site https://docs.stripe.com
/audit-site https://example.com
/observe host:example.com since:1h
/finding "React hydration errors on /products"

Skill distribution

Skills come from the crawlio-plugin repository. Three install paths:

  1. crawlio-mcp init (primary): downloads from GitHub, falls back to bundled copies
  2. npx skills add Crawlio-app/crawlio-plugin: skills.sh registry
  3. Project-level: skills in .claude/skills/ are available when working inside the Crawlio repo

Skill installation is idempotent: running init again overwrites existing skills with the latest versions.


© 2026 Crawlio. All rights reserved.