# Code Mode
## The problem
In full mode, Crawlio App (Pillar 3) exposes 49 tools. Each tool definition includes a name, description, and JSON Schema for parameters. Before your AI does any work, it loads all 49 schemas into its context window, consuming ~5,500 tokens on schema alone.
## The solution
Code mode compresses 49 tools into 6. Your AI searches a catalog of 52 entries, then calls endpoints directly.
| Mode | Tools | Schema tokens | Use case |
|---|---|---|---|
| Code (default) | 6 | ~1,200 | CLI pipelines, token-constrained agents |
| Full (`--full`) | 49 | ~5,500 | Interactive use, GUI clients, debugging |
## The 6 code-mode tools
| Tool | What it does |
|---|---|
| `search_api` | Search the catalog of 52 endpoints by keyword. Returns names, descriptions, HTTP methods, paths, and parameters. |
| `execute_api` | Call any endpoint directly by HTTP method and path. |
| `trigger_capture` | WebKit runtime capture (kept as a shortcut because it is called often). |
| `analyze_page` | Composite: trigger capture, poll enrichment, return evidence. |
| `compare_pages` | Composite: run `analyze_page` on two URLs, return a structured comparison. |
| `extract_text_from_image` | Vision OCR on local images. No running app needed. |
## How it works
### Step 1: Search
Your AI discovers available endpoints by keyword:
```
search_api(query: "crawl")
--> [
  { name: "start_crawl", httpMethod: "POST", httpPath: "/start", ... },
  { name: "get_crawl_status", httpMethod: "GET", httpPath: "/status", ... },
  { name: "stop_crawl", httpMethod: "POST", httpPath: "/stop", ... }
]
```

The catalog indexes 52 entries. Results include parameters, so your AI knows exactly how to call each endpoint.
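The keyword lookup behind `search_api` can be pictured as a simple filter over the catalog. The sketch below is illustrative only: the catalog entries and the substring-matching rule are invented, and the real server may tokenize and rank results differently.

```python
# Sketch of a search_api-style keyword lookup over an endpoint catalog.
# The entries and the matching rule are invented for illustration.
CATALOG = [
    {"name": "start_crawl", "httpMethod": "POST", "httpPath": "/start",
     "description": "Start a crawl of one or more URLs"},
    {"name": "get_crawl_status", "httpMethod": "GET", "httpPath": "/status",
     "description": "Get progress of the current crawl"},
    {"name": "export_archive", "httpMethod": "POST", "httpPath": "/export",
     "description": "Write archived pages to a WARC file"},
]

def search_api(query: str) -> list[dict]:
    """Return catalog entries whose name or description mentions the query."""
    q = query.lower()
    return [e for e in CATALOG
            if q in e["name"].lower() or q in e["description"].lower()]

print([e["name"] for e in search_api("crawl")])
# ['start_crawl', 'get_crawl_status']
```

Because only the matching entries come back, the agent never pays the context cost of the full catalog.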
### Step 2: Execute
Your AI calls the endpoint directly:
```
execute_api(method: "POST", path: "/start", body: { url: "https://example.com" })
--> { "status": "started", "urlCount": 1 }

execute_api(method: "GET", path: "/status")
--> { "engineState": "crawling", "progress": { "downloaded": 85, "queued": 63 } }
```

### Step 3: Use shortcuts for common patterns
The composite tools handle multi-step workflows in one call:
```
analyze_page(url: "https://example.com")
--> { url, timestamp, enrichment, enrichmentStatus, crawlStatus }

compare_pages(urlA: "https://a.com", urlB: "https://b.com")
--> { siteA, siteB, comparisonSummary }
```

## Examples
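Conceptually, `compare_pages` is just two `analyze_page` runs plus a diff. The sketch below stubs `analyze_page` (the real tool drives capture and enrichment against the live app) and invents a single comparison field, to show the shape of the composite rather than its actual logic.

```python
# Illustrative sketch of a compare_pages-style composite.
# analyze_page is stubbed; the comparison field is invented.
def analyze_page(url: str) -> dict:
    # Stub: the real tool triggers a capture, polls enrichment,
    # and returns evidence for the page.
    return {"url": url, "enrichment": {"framework": "unknown"}}

def compare_pages(url_a: str, url_b: str) -> dict:
    site_a, site_b = analyze_page(url_a), analyze_page(url_b)
    summary = {
        "sameFramework": (site_a["enrichment"]["framework"]
                          == site_b["enrichment"]["framework"]),
    }
    return {"siteA": site_a, "siteB": site_b, "comparisonSummary": summary}
```

The point of the composite is that the agent spends one tool call, not six, on a pattern it would otherwise reassemble by hand every time.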
### Start crawl, poll, export
```
search_api(query: "export")
execute_api(method: "POST", path: "/start", body: { url: "https://example.com" })
execute_api(method: "GET", path: "/status") // poll until completed
execute_api(method: "POST", path: "/export", body: {
  format: "warc",
  destinationPath: "/tmp/site.warc.gz"
})
```

### Update settings before crawling
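The start → poll → export sequence can be wrapped in a small client loop. In this sketch, `execute_api` is a stand-in for however your client invokes the tool; the paths and response fields mirror the examples above, but the terminal `engineState` value of `"completed"` is an assumption.

```python
import time

def run_crawl_and_export(execute_api, url: str, dest: str,
                         poll_seconds: float = 2.0) -> dict:
    """Start a crawl, poll /status until done, then export a WARC.

    `execute_api` is whatever callable your client uses to invoke the
    execute_api tool. "completed" as the terminal state is assumed.
    """
    execute_api(method="POST", path="/start", body={"url": url})
    while True:
        status = execute_api(method="GET", path="/status")
        if status.get("engineState") == "completed":
            break
        time.sleep(poll_seconds)  # avoid hammering the status endpoint
    return execute_api(method="POST", path="/export",
                       body={"format": "warc", "destinationPath": dest})
```

A fixed sleep keeps the example short; a production loop would add a timeout and backoff.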
```
search_api(query: "settings")
execute_api(method: "PATCH", path: "/settings", body: {
  settings: { maxConcurrent: 20 },
  policy: { maxDepth: 5 }
})
execute_api(method: "POST", path: "/start", body: { url: "https://docs.stripe.com" })
```

### Check enrichment data
```
search_api(query: "enrichment")
execute_api(method: "GET", path: "/enrichment?url=https://example.com")
--> { framework: { name: "Next.js", version: "14.2.0" }, networkRequests: [...] }
```

## Token savings
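A client consuming that response might gate on the detected framework. The response shape is taken from the example above; the version threshold is an arbitrary illustration.

```python
# Parse the enrichment response from the example above.
# The ">= 13" check is an arbitrary illustrative threshold.
enrichment = {"framework": {"name": "Next.js", "version": "14.2.0"},
              "networkRequests": []}

fw = enrichment.get("framework") or {}
major = int(fw.get("version", "0").split(".")[0])
if fw.get("name") == "Next.js" and major >= 13:
    print("recent Next.js detected")
# prints: recent Next.js detected
```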
Code mode saves ~78% on schema tokens compared to full mode. For context-constrained environments (long conversations, multiple MCP servers), this matters.
The savings come from replacing 49 individual tool schemas with 6 compact ones. The search_api tool returns only the entries matching your query, not the full catalog.
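Using the approximate figures from the table above, the savings work out as:

```python
# Approximate schema sizes from the mode comparison table.
full_tokens, code_tokens = 5_500, 1_200
savings = (full_tokens - code_tokens) / full_tokens
print(f"{savings:.0%}")  # 78%
```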
## When to use full mode
Switch to full mode when:
- You use a GUI client (like Claude Desktop) where tool picker UIs work better with individual tools
- You are debugging tool behavior and want to see exact parameters
- You want tool annotations (readOnlyHint, destructiveHint) visible to the client
To enable full mode:
```json
{
  "mcpServers": {
    "crawlio": {
      "command": "npx",
      "args": ["-y", "crawlio-mcp", "--full"]
    }
  }
}
```

## Code mode is the default
You do not need any flag to use code mode. It is on by default:
```json
{
  "mcpServers": {
    "crawlio": {
      "command": "npx",
      "args": ["-y", "crawlio-mcp"]
    }
  }
}
```

## Beyond Pillar 3
Code mode applies to Pillar 3 (Crawlio App) only. If you also use the aggregator, your AI sees 5 meta-tools that route across all 3 pillars. The aggregator handles its own context loading through JIT discovery.
For browser automation (Pillars 1 and 2), the `crawlio-browser` server has its own code mode with `search`, `execute`, and `connect_tab` tools. See JIT Context for details.
## Next steps
- Method Mode: composite tools that build on code mode
- Evidence Mode: typed findings and confidence propagation
- Tool Reference: all 49 full-mode tools with parameters
- JIT Context: how the aggregator loads context on demand