Start Automating
Once the MCP server and Chrome extension are set up, you can start automating.
First command
Open your AI client (Claude Code, Cursor, or any MCP-compatible client) and try:
> Go to crawlio.app and take a screenshot

The AI connects to a tab, navigates to the URL, and captures a screenshot. All through natural language.
Capture a page with full browser context
> Capture the current page with network requests, console logs, and framework detection

What happens: The capture_page tool fires. It takes a DOM snapshot, records network requests, captures console output, and runs framework detection. If Crawlio.app is running, the enrichment data is sent to the crawl engine automatically.
Output: URL, title, detected framework (name, confidence, signals), network entries, console logs, DOM snapshot.
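To make the output shape concrete, here is a minimal sketch of a capture result and a helper that summarizes it. The field names (`network`, `console`, `framework`, and so on) are assumptions based on the output list above, not the tool's exact schema.

```javascript
// Hypothetical shape of a capture_page result; field names are assumptions.
function summarizeCapture(capture) {
  return {
    url: capture.url,
    framework: capture.framework ? capture.framework.name : "unknown",
    requests: capture.network.length,
    errors: capture.console.filter((entry) => entry.level === "error").length,
  };
}

// Illustrative sample data, not real tool output.
const sample = {
  url: "https://crawlio.app",
  title: "Crawlio",
  framework: { name: "Next.js", confidence: 0.97, signals: ["__NEXT_DATA__"] },
  network: [{ url: "https://crawlio.app/api/health", status: 200 }],
  console: [{ level: "error", text: "hydration mismatch" }],
};

console.log(summarizeCapture(sample));
// { url: "https://crawlio.app", framework: "Next.js", requests: 1, errors: 1 }
```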
Detect frameworks on a live site
> What JavaScript framework does stripe.com use?

What happens: detect_framework inspects JS globals, hydration markers, and meta tags. It identifies the base framework (React, Vue, Angular, Svelte) and meta-framework (Next.js, Nuxt, SvelteKit, Gatsby, Remix, Astro).
Output: Framework name, subtype, confidence level, matched signals, version (if detectable), SSR mode.
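The following is an illustrative sketch of the kind of signal matching detect_framework performs. The markers below are well-known framework fingerprints, but the tool's actual rule set is richer and its scoring is its own.

```javascript
// Simplified signal-based detection; the real tool checks many more markers.
function detectFramework(globals, html) {
  const signals = [];
  let base = null;
  let meta = null;

  if (globals.includes("__NEXT_DATA__")) {
    meta = "Next.js"; base = "React"; signals.push("__NEXT_DATA__ global");
  }
  if (globals.includes("__NUXT__")) {
    meta = "Nuxt"; base = "Vue"; signals.push("__NUXT__ global");
  }
  if (!base && html.includes("data-reactroot")) {
    base = "React"; signals.push("data-reactroot attribute");
  }
  if (!base && html.includes("ng-version")) {
    base = "Angular"; signals.push("ng-version attribute");
  }

  const confidence =
    signals.length >= 2 ? "high" : signals.length === 1 ? "medium" : "low";
  return { base, meta, confidence, signals };
}

console.log(detectFramework(["__NEXT_DATA__"], '<div id="__next"></div>'));
```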
Authenticated crawling with SessionVault
> Log in to dashboard.example.com, then crawl it with Crawlio

What happens:
- vault_request_login opens an interactive login prompt for the domain.
- You log in through your browser. The session is encrypted and stored in the vault.
- The crawl engine (or headless agent) retrieves the session via vault_get_session and uses it for authenticated requests.
Future crawls to the same domain reuse the stored session. No re-authentication needed until the session expires.
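In code mode, that round trip can be sketched as follows. The tool names (vault_get_session, vault_request_login) come from this page, but the argument and return shapes are assumptions, and bridge.send is mocked so the sketch runs standalone.

```javascript
// Hypothetical SessionVault round trip: reuse a stored session if present,
// otherwise request an interactive login and fetch the new session.
async function withSession(bridge, domain) {
  let session = await bridge.send("vault_get_session", { domain });
  if (!session) {
    // No stored session yet: trigger an interactive login, then re-fetch.
    await bridge.send("vault_request_login", { domain });
    session = await bridge.send("vault_get_session", { domain });
  }
  return session;
}

// Mock bridge: the vault starts empty; a login request stores a session.
function mockBridge() {
  const vault = {};
  return {
    send: async (tool, args) => {
      if (tool === "vault_get_session") return vault[args.domain] ?? null;
      if (tool === "vault_request_login") vault[args.domain] = { cookies: ["sid=abc123"] };
      return null;
    },
  };
}

withSession(mockBridge(), "dashboard.example.com").then((s) => console.log(s));
```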
Batch screenshot multiple pages
> Take screenshots of these 5 competitor landing pages and save them

What happens: The AI connects to a tab, navigates to each URL in sequence, and captures a screenshot at each stop. Works in both live and headless modes.
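The navigate-then-capture loop can be sketched like this. The tool names browser_navigate and browser_screenshot are assumptions (check the Browser Tools Reference for the real ones), and bridge.send is mocked so the sketch runs standalone.

```javascript
// Visit each URL in sequence and capture a screenshot at each stop.
async function screenshotAll(bridge, urls) {
  const shots = [];
  for (const url of urls) {
    await bridge.send("browser_navigate", { url });          // go to the page
    shots.push(await bridge.send("browser_screenshot", {})); // capture it
  }
  return shots;
}

// Mock bridge: navigation is a no-op, screenshots return dummy metadata.
const mock = {
  send: async (tool) =>
    tool === "browser_screenshot" ? { format: "png", bytes: 1024 } : null,
};

screenshotAll(mock, ["https://a.example", "https://b.example"]).then((shots) =>
  console.log(`${shots.length} screenshots captured`)
);
```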
Extract data from JS-rendered pages
> Get the product prices from this React app. The prices load after JavaScript runs.

What happens: The browser renders the page fully (including JavaScript), then extracts data from the live DOM. Static crawling would miss this content. The smart object in code mode can detect the framework and use framework-specific accessors (React state, Vue data) for precise extraction.
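The key point is that extraction runs against the rendered DOM, not the initial HTML. A minimal sketch, using simplified node objects as stand-ins for live DOM elements:

```javascript
// Read prices out of rendered nodes; these only exist after the page's
// JavaScript has executed, so a static crawl would miss them entirely.
function extractPrices(nodes) {
  return nodes
    .filter((node) => node.className === "price")
    .map((node) => parseFloat(node.textContent.replace(/[^0-9.]/g, "")));
}

// What the DOM looks like *after* React renders.
const rendered = [
  { className: "title", textContent: "Widget" },
  { className: "price", textContent: "$19.99" },
  { className: "price", textContent: "$4.50" },
];

console.log(extractPrices(rendered)); // [19.99, 4.5]
```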
Monitor a page for changes
> Check this page every 5 minutes and tell me if the price changes

What happens: The AI navigates to the page, captures the target content, waits, then checks again. Network capture and DOM snapshots let it compare states precisely.
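The capture-wait-compare loop can be sketched as below. The capture and sleep functions are injected so the sketch runs instantly; in practice, capture would be a tool call against the live page and the interval would be the requested 5 minutes.

```javascript
// Poll a page until the captured value changes or the check budget runs out.
async function watchForChange(capture, sleep, intervalMs, maxChecks) {
  let previous = await capture();
  for (let i = 0; i < maxChecks; i++) {
    await sleep(intervalMs);
    const current = await capture();
    if (current !== previous) return { changed: true, from: previous, to: current };
    previous = current;
  }
  return { changed: false };
}

// Fake page whose price changes on the third read; no real waiting.
let reads = 0;
const fakeCapture = async () => (++reads >= 3 ? "$17.99" : "$19.99");
const noSleep = async () => {};

watchForChange(fakeCapture, noSleep, 0, 5).then((result) => console.log(result));
// { changed: true, from: "$19.99", to: "$17.99" }
```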
Full and code mode
By default, the server runs in code mode (3 meta-tools, ~95% token savings). For full tool access, use --full when configuring the server. See Install MCP Server.
Both modes have access to the same capabilities. Code mode uses bridge.send() inside the execute tool to call any of the 114 browser tools.
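To illustrate, here is what a code-mode script might look like inside the execute tool: one script chains tools through bridge.send instead of making one MCP call per tool. The browser_navigate name and argument shapes are assumptions, and bridge is mocked so the sketch runs standalone.

```javascript
// One code-mode script chaining two tool calls via bridge.send.
async function run(bridge) {
  await bridge.send("browser_navigate", { url: "https://crawlio.app" });
  const page = await bridge.send("capture_page", {});
  return page.title;
}

// Mock bridge standing in for the real execute-tool runtime.
const mock = {
  send: async (tool) => (tool === "capture_page" ? { title: "Crawlio" } : null),
};

run(mock).then((title) => console.log(title)); // prints "Crawlio"
```

Because only the script and its final result cross the MCP boundary, intermediate tool outputs stay inside the sandbox, which is where the token savings come from.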
Next steps
- Browser Tools Reference: all 114 tools with parameters
- JIT Context Runtime: how the smart execution sandbox works
- Code Mode: token-efficient operation
- MCP Tools Reference: Crawlio app control tools