Firecrawl Local MCP Server

Local setup required. This server has to be cloned and prepared on your machine before you register it in Claude Code.
Step 1: Set the server up locally

Run these commands once, inside your local clone of the server, to build it before adding it to Claude Code.

Run in a terminal:
npm install
npm run build
Step 2: Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server.

Run in a terminal:
claude mcp add firecrawl-local -- node "<FULL_PATH_TO_FIRECRAWL_LOCAL_MCP>/dist/index.js"

Replace <FULL_PATH_TO_FIRECRAWL_LOCAL_MCP> with the absolute path to the folder you prepared in step 1.

README.md

Web scraping and crawling via a self-hosted Firecrawl instance

Firecrawl Local MCP Server

An MCP (Model Context Protocol) server for interacting with a self-hosted Firecrawl instance. This server provides web scraping and crawling capabilities through your local Firecrawl deployment.

Features

  • Web Scraping: Extract content from single web pages in markdown format
  • Web Crawling: Crawl entire websites with customizable depth and filtering
  • Site Mapping: Generate lists of all accessible URLs on a website
  • Job Monitoring: Track the status of crawling jobs
  • No API Key Required: Works directly with self-hosted Firecrawl instances

Installation

npm install
npm run build

Configuration

The server connects to your Firecrawl instance using the FIRECRAWL_URL environment variable. By default, it connects to http://localhost:3002.

To change the Firecrawl URL, set the FIRECRAWL_URL environment variable in your MCP configuration.
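As a sketch of what that fallback looks like in code (the helper name here is hypothetical; the server's actual logic lives in src/index.ts):

```javascript
// Resolve the Firecrawl base URL from the environment, falling back to
// the documented default. resolveFirecrawlUrl is an illustrative helper,
// not the server's actual code.
function resolveFirecrawlUrl(env = process.env) {
  return env.FIRECRAWL_URL ?? "http://localhost:3002";
}

console.log(resolveFirecrawlUrl({})); // falls back to the default
console.log(resolveFirecrawlUrl({ FIRECRAWL_URL: "http://firecrawl:3002" }));
```

Setting FIRECRAWL_URL in the `env` block of your MCP configuration (as shown below) is the supported way to override this default.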

Usage

With Claude Desktop

Add this to your Claude Desktop configuration file (claude_desktop_config.json):

{
  "mcpServers": {
    "firecrawl-local": {
      "command": "node",
      "args": ["/absolute/path/to/firecrawl-local-mcp/dist/index.js"],
      "env": {
        "FIRECRAWL_URL": "http://localhost:3002"
      }
    }
  }
}

With Cline

Add this to your Cline MCP configuration file:

{
  "mcpServers": {
    "firecrawl-local": {
      "command": "node",
      "args": ["dist/index.js"],
      "cwd": "/absolute/path/to/firecrawl-local-mcp",
      "env": {
        "FIRECRAWL_URL": "http://localhost:3002"
      }
    }
  }
}

Available Tools

firecrawl_scrape

Scrape a single webpage and return its content in markdown format.

Parameters:

  • url (required): The URL to scrape
  • formats: Output formats (default: ["markdown"])
  • onlyMainContent: Extract only main content (default: true)
  • includeTags: HTML tags to include
  • excludeTags: HTML tags to exclude
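For illustration, a firecrawl_scrape call might carry arguments like these (the URL and tag list are placeholders; formats and onlyMainContent show the documented defaults):

```json
{
  "url": "https://example.com",
  "formats": ["markdown"],
  "onlyMainContent": true,
  "excludeTags": ["nav", "footer"]
}
```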
firecrawl_crawl

Crawl a website starting from a URL and return content from multiple pages.

Parameters:

  • url (required): The starting URL to crawl
  • includes: URL patterns to include (supports wildcards)
  • excludes: URL patterns to exclude (supports wildcards)
  • maxDepth: Maximum crawl depth (default: 2)
  • limit: Maximum number of pages to crawl (default: 10)
  • allowBackwardLinks: Allow crawling backward links (default: false)
  • allowExternalLinks: Allow crawling external links (default: false)
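A minimal sketch of how these defaults might merge with caller-supplied options (the helper and its shape are illustrative, not the server's actual code):

```javascript
// Documented defaults for firecrawl_crawl, mirroring the parameter list above.
const CRAWL_DEFAULTS = {
  maxDepth: 2,
  limit: 10,
  allowBackwardLinks: false,
  allowExternalLinks: false,
};

// buildCrawlRequest is a hypothetical helper: caller options override defaults.
function buildCrawlRequest(options) {
  if (!options.url) throw new Error("url is required");
  return { ...CRAWL_DEFAULTS, ...options };
}

const req = buildCrawlRequest({
  url: "https://docs.example.com",
  includes: ["https://docs.example.com/guide/*"], // wildcard pattern
  maxDepth: 3,
});
console.log(req.maxDepth, req.limit); // caller override wins; limit keeps its default
```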
firecrawl_crawl_status

Check the status of a crawl job.

Parameters:

  • jobId (required): The job ID returned from a crawl request
firecrawl_map

Map a website to get a list of all accessible URLs.

Parameters:

  • url (required): The URL to map
  • search: Search query to filter URLs
  • ignoreSitemap: Ignore the website's sitemap (default: false)
  • includeSubdomains: Include subdomains (default: false)
  • limit: Maximum number of URLs to return (default: 5000)
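For example, a firecrawl_map call filtering by a search term and capping results might look like this (all values are placeholders):

```json
{
  "url": "https://example.com",
  "search": "docs",
  "ignoreSitemap": false,
  "includeSubdomains": false,
  "limit": 500
}
```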

Testing

Test the server functionality:

node test.js

This will test both the tool listing and a sample scrape operation.

Example Usage

Once configured in Claude Desktop, you can use natural language commands like:

  • "Scrape the content from https://example.com and summarize it for me."
  • "Crawl the documentation site at https://docs.example.com with a depth of 3."

Requirements

  • Node.js and npm
  • A running self-hosted Firecrawl instance (reachable at FIRECRAWL_URL, default http://localhost:3002)

Troubleshooting

  1. Connection Issues: Verify your Firecrawl instance is running and accessible
  2. Timeout Errors: Adjust timeout values in src/index.ts for slow websites
  3. Authentication Errors: Ensure USE_DB_AUTHENTICATION=false in your Firecrawl .env file

Tools (4)

firecrawl_scrape: Scrape a single webpage and return its content in markdown format.
firecrawl_crawl: Crawl a website starting from a URL and return content from multiple pages.
firecrawl_crawl_status: Check the status of a crawl job.
firecrawl_map: Map a website to get a list of all accessible URLs.

Environment Variables

FIRECRAWL_URL: The URL of your self-hosted Firecrawl instance (default: http://localhost:3002)


Try it

Scrape the content from https://example.com and summarize it for me.
Crawl the documentation site at https://docs.example.com with a depth of 3.
Map all the URLs on https://example.com to see the site structure.
Check the status of my recent crawl job with ID abc123.

Frequently Asked Questions

What are the key features of Firecrawl Local?

Extract website content in markdown format. Crawl entire websites with customizable depth and filtering. Generate lists of all accessible URLs on a website. Monitor the status of active crawling jobs. Operates without an external API key using self-hosted instances.

What can I use Firecrawl Local for?

Converting complex documentation sites into clean markdown for AI analysis. Mapping website structures to identify all available pages for research. Automating data extraction from internal or private web services. Monitoring large-scale website changes by crawling and comparing content.

How do I install Firecrawl Local?

Install Firecrawl Local by running: npm install && npm run build

What MCP clients work with Firecrawl Local?

Firecrawl Local works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
