Intelligence Commands
Intelligence commands render a page via WebKit and extract structured analytical data. Each command uses WebKitRuntimeCapture for full JavaScript execution, then runs specialized extractors against the rendered DOM, computed styles, or network activity.
/stack, /fonts, and /colors are Core tier. /docs is Pro tier.
/stack <url>
/stack <url>
Detect the technology stack powering a page. The command combines two detection passes: a runtime pass via WebKit's enrichment framework, which catches JS frameworks that only manifest after execution, and a static pass via StaticFrameworkDetector against the DOM snapshot, backed by a 62-signature database covering 9 categories.
No multi-page crawl is performed. The page is loaded once, both detectors run, and the results are merged with deduplication.
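Beyond "merged with deduplication" the merge step isn't specified; a minimal TypeScript sketch of one plausible approach, assuming entries are deduplicated by name, the higher-confidence detection wins, and evidence signals are unioned (the DetectedTechnology shape is taken from the tables below):

```typescript
interface DetectedTechnology {
  name: string;
  category?: string;
  confidence: number; // 0.0–1.0
  version?: string;
  signals: string[];
}

// Hypothetical merge: dedupe by technology name, keep the higher-confidence
// entry, and union the evidence signals from both passes.
function mergeDetections(
  runtime: DetectedTechnology[],
  staticResults: DetectedTechnology[],
): DetectedTechnology[] {
  const byName = new Map<string, DetectedTechnology>();
  for (const tech of [...runtime, ...staticResults]) {
    const existing = byName.get(tech.name);
    if (!existing) {
      byName.set(tech.name, { ...tech, signals: [...tech.signals] });
      continue;
    }
    const signals = [...new Set([...existing.signals, ...tech.signals])];
    const winner = tech.confidence > existing.confidence ? tech : existing;
    byName.set(tech.name, { ...winner, signals });
  }
  return [...byName.values()].sort((a, b) => b.confidence - a.confidence);
}
```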
Output
| Field | Type | Description |
|---|---|---|
| framework | String? | Primary framework name (subtype preferred, e.g. "Next.js" over "React") |
| version | String? | Detected version string |
| confidence | String? | Confidence level from runtime detection ("high", "medium", "low") |
| technologies | [DetectedTechnology] | All detected technologies (see below) |
| finalURL | URL | Final URL after redirects |
Each DetectedTechnology entry:
| Field | Type | Description |
|---|---|---|
| name | String | Technology name (e.g. "React", "Tailwind CSS", "Shopify") |
| category | String? | Category (e.g. "jsLibrary", "cssFramework", "ecommerce", "cms", "builder", "hosting", "ssg", "backend", "jsRuntime") |
| confidence | Double | Confidence score (0.0–1.0) |
| version | String? | Detected version, if available |
| signals | [String] | Evidence that triggered detection (e.g. ["meta:generator ^Next.js", "header:x-nextjs-cache"]) |
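For illustration, a hypothetical technologies entry (the confidence and version values here are invented; the signals are the examples from the table above):

```json
{
  "name": "Next.js",
  "category": "jsLibrary",
  "confidence": 0.9,
  "version": "14.2.0",
  "signals": ["meta:generator ^Next.js", "header:x-nextjs-cache"]
}
```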
CLI

```
crawlio cmd stack https://vercel.com
```

MCP

```json
{ "name": "slash_stack", "arguments": { "url": "https://vercel.com" } }
```

Returns the full StackCommandResult JSON, including the technologies array.
/fonts <url>
Render the page and extract font information from two sources: computed styles (the font-family values actually applied to visible elements) and web font URLs from @font-face rules and the document.fonts API.
The page is rendered via WebKit, then a dedicated extractFonts() JS bridge collects both data sets. This catches web fonts loaded dynamically via JavaScript font loaders.
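The extractFonts() bridge itself isn't reproduced in these docs; a browser-side TypeScript sketch of the same two collection passes, where everything beyond the two documented sources (computed styles, @font-face rules plus document.fonts) is an assumption:

```typescript
// Hypothetical in-page extractor; runs inside the rendered document.
function extractFonts(): {
  fontFamilies: string[];
  webFonts: { url: string; format?: string }[];
} {
  const stripQuotes = (s: string) => s.trim().replace(/^["']|["']$/g, "");

  // Pass 1: font-family values actually applied, via computed styles.
  const families = new Set<string>();
  for (const el of document.querySelectorAll("*")) {
    getComputedStyle(el).fontFamily.split(",").forEach((f) => families.add(stripQuotes(f)));
  }

  // Families registered through the document.fonts API (dynamic font loaders).
  document.fonts.forEach((face) => families.add(stripQuotes(face.family)));

  // Pass 2: web font URLs declared in @font-face rules.
  const webFonts: { url: string; format?: string }[] = [];
  for (const sheet of Array.from(document.styleSheets)) {
    let rules: CSSRuleList;
    try {
      rules = sheet.cssRules; // cross-origin stylesheets throw here
    } catch {
      continue;
    }
    for (const rule of Array.from(rules)) {
      if (rule instanceof CSSFontFaceRule) {
        const src = rule.style.getPropertyValue("src");
        const m = src.match(/url\(["']?([^"')]+)["']?\)(?:\s+format\(["']?([^"')]+)["']?\))?/);
        if (m) webFonts.push({ url: new URL(m[1], sheet.href ?? location.href).href, format: m[2] });
      }
    }
  }
  return { fontFamilies: [...families], webFonts };
}
```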
Output
| Field | Type | Description |
|---|---|---|
| fontFamilies | [String] | Unique font-family names from computed styles |
| webFonts | [WebFont] | Web font file references (see below) |
| finalURL | URL | Final URL after redirects |
Each WebFont entry:
| Field | Type | Description |
|---|---|---|
| url | URL | URL of the font file (woff2, woff, ttf, otf, etc.) |
| format | String? | Font format hint (e.g. "woff2", "truetype") |
CLI

```
crawlio cmd fonts https://linear.app
```

MCP

```json
{ "name": "slash_fonts", "arguments": { "url": "https://linear.app" } }
```

Returns { fontFamilies: string[], webFonts: [{ url, format }], finalURL: string }.
/colors <url>
Render the page and extract the dominant color palette. Colors are collected from computed styles (color, backgroundColor, borderColor) across all visible DOM elements, deduplicated, and sorted by usage count descending.
The top 24 colors by usage are returned. Each swatch includes the hex value and how many elements use it.
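The collection logic isn't reproduced here; a browser-side sketch of the same tally-and-rank pass, in which the visibility check and the rgb()-to-hex normalization are assumptions (only the sampled properties and the top-24 cutoff are documented):

```typescript
// Hypothetical in-page extractor: tally color, backgroundColor, and borderColor
// across visible elements, then return the 24 most-used swatches as hex.
function extractColors(): { hex: string; usage: number }[] {
  const toHex = (css: string): string | null => {
    const m = css.match(/^rgba?\((\d+),\s*(\d+),\s*(\d+)(?:,\s*([\d.]+))?\)$/);
    if (!m) return null; // multi-value borderColor etc. is ignored in this sketch
    if (m[4] !== undefined && parseFloat(m[4]) === 0) return null; // fully transparent
    const channel = (n: string) => parseInt(n, 10).toString(16).padStart(2, "0");
    return `#${channel(m[1])}${channel(m[2])}${channel(m[3])}`;
  };

  const counts = new Map<string, number>();
  for (const el of document.querySelectorAll<HTMLElement>("*")) {
    if (el.offsetParent === null && el !== document.body) continue; // rough visibility check
    const style = getComputedStyle(el);
    for (const value of [style.color, style.backgroundColor, style.borderColor]) {
      const hex = toHex(value);
      if (hex) counts.set(hex, (counts.get(hex) ?? 0) + 1);
    }
  }

  return [...counts.entries()]
    .map(([hex, usage]) => ({ hex, usage }))
    .sort((a, b) => b.usage - a.usage)
    .slice(0, 24);
}
```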
Output
| Field | Type | Description |
|---|---|---|
| palette | [ColorSwatch] | Color swatches sorted by usage (see below) |
| finalURL | URL | Final URL after redirects |
Each ColorSwatch entry:
| Field | Type | Description |
|---|---|---|
| hex | String | CSS hex color value (e.g. "#1a1a2e") |
| usage | Int | Number of elements using this color |
CLI

```
crawlio cmd colors https://stripe.com
```

MCP

```json
{ "name": "slash_colors", "arguments": { "url": "https://stripe.com" } }
```

Returns { palette: [{ hex, usage }], finalURL: string }.
/docs <url>
Crawl a documentation site and produce an AI-digestible llms.txt bundle. This is a Pro tier command.
The DocsCrawler follows internal links starting from the given URL, converts each page's content to Markdown, and assembles a DocsBundle. The companion DocsRenderer then produces a single llms.txt file structured for consumption by large language models. A progress callback reports (pagesCompleted, maxPages) during the crawl.
Default crawl limit is 50 pages.
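DocsCrawler's internals aren't shown in these docs; a minimal sketch of the same same-host traversal with a page cap, where crawlDocs and fetchMarkdown are hypothetical names and fetchMarkdown stands in for the render-and-convert step:

```typescript
// Hypothetical crawl loop: breadth-first over same-host links, capped at maxPages.
async function crawlDocs(
  start: string,
  maxPages = 50,
): Promise<{ url: string; markdown: string }[]> {
  const origin = new URL(start);
  const queue = [origin.href];
  const seen = new Set(queue);
  const pages: { url: string; markdown: string }[] = [];

  while (queue.length > 0 && pages.length < maxPages) {
    const url = queue.shift()!;
    const { markdown, links } = await fetchMarkdown(url); // render + HTML-to-Markdown
    pages.push({ url, markdown });

    for (const link of links) {
      let resolved: URL;
      try {
        resolved = new URL(link, url);
      } catch {
        continue; // skip malformed hrefs
      }
      resolved.hash = ""; // treat fragment variants as one page
      if (!resolved.protocol.startsWith("http")) continue;
      if (resolved.host === origin.host && !seen.has(resolved.href)) {
        seen.add(resolved.href);
        queue.push(resolved.href);
      }
    }
  }
  return pages;
}

// Stand-in for the real render-and-convert step.
declare function fetchMarkdown(url: string): Promise<{ markdown: string; links: string[] }>;
```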
Output
| Field | Type | Description |
|---|---|---|
| bundle | DocsBundle | Full crawl result (see below) |
| llmsTxt | String | Rendered llms.txt content |
DocsBundle fields:
| Field | Type | Description |
|---|---|---|
| origin | String | Root URL the crawl started from |
| host | String | Hostname |
| startedAt | Date | Crawl start timestamp |
| finishedAt | Date | Crawl end timestamp |
| pages | [DocPage] | Extracted pages |
| skippedURLs | [String] | URLs visited but excluded (non-HTML, 4xx, 5xx) |
| estimatedTokens | Int | Sum of per-page token estimates |
Each DocPage:
| Field | Type | Description |
|---|---|---|
| url | String | Canonical page URL |
| path | String | URL path (used as the llms.txt section header) |
| title | String? | Page title |
| markdown | String | Page body converted to Markdown |
| tokenEstimate | Int | Approximate token count (character count / 3.5) |
| fetchedAt | Date | When this page was fetched |
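The tokenEstimate heuristic from the table, expressed directly; whether the real implementation rounds or truncates isn't specified, so Math.round here is an assumption:

```typescript
// Per the table above: tokenEstimate ≈ character count / 3.5.
const tokenEstimate = (markdown: string): number => Math.round(markdown.length / 3.5);

// estimatedTokens on the bundle is the sum of the per-page estimates.
const estimatedTokens = (pages: { markdown: string }[]): number =>
  pages.reduce((sum, page) => sum + tokenEstimate(page.markdown), 0);
```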
CLI

```
crawlio cmd docs https://docs.stripe.com
```

MCP

```json
{ "name": "slash_docs", "arguments": { "url": "https://docs.stripe.com" } }
```

Returns { llmsTxt: string, bundle: DocsBundle }.