
Locally-hosted MCP server providing advanced web crawling capabilities

Crawl4AI MCP Server

A locally-run Crawl4AI MCP Server that gives AI assistants powerful web-crawling capabilities.

✨ Features

  • 🌐 Web crawling: crawl one or more URLs and return clean Markdown
  • 📝 Structured extraction: extract structured data with CSS/XPath selectors or an LLM
  • 📸 Screenshots: capture webpage screenshots
  • 🕸️ Deep crawling: crawl an entire website to a configurable depth
  • 🚀 Local-first: runs entirely locally; no API key required for core features
  • ⚡ High performance: asynchronous concurrency with smart caching

🚀 Quick Start

1. Install

# Clone the repository
git clone <your-repo-url>
cd crawl4ai-mcp

# Use the start script (recommended)
chmod +x start.sh
./start.sh

# Or install manually
uv sync
uv run crawl4ai-setup

2. Configure

# Copy the example environment file
cp .env.example .env

# Edit .env and add your API keys (needed for structured extraction)

3. Run

# stdio mode (for Claude Code)
uv run crawl4ai-mcp

# HTTP mode (for development and debugging)
uv run crawl4ai-mcp --transport http --port 8000

🔧 Configure Claude Code

stdio mode (recommended)

claude mcp add crawl4ai uv run --project /path/to/crawl4ai-mcp crawl4ai-mcp

HTTP mode

# 1. Start the server
uv run crawl4ai-mcp --transport http --port 8000

# 2. Add it to Claude Code
claude mcp add --transport http crawl4ai http://localhost:8000/mcp

📚 Available Tools

crawl_url - Crawl a webpage

crawl_url(
    url="https://example.com",
    word_count_threshold=10,
    bypass_cache=False,
    magic=False
)
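
The word_count_threshold parameter drops short text blocks (navigation links, footers) so only substantive content reaches the Markdown output. A minimal sketch of that filtering idea, assuming a simple per-block word count (filter_blocks and the sample blocks below are illustrative, not the server's actual code):

```python
# Hypothetical sketch of word_count_threshold: content blocks with fewer
# words than the threshold are dropped before Markdown is assembled.
def filter_blocks(blocks: list[str], word_count_threshold: int = 10) -> list[str]:
    """Keep only blocks whose word count meets the threshold."""
    return [b for b in blocks if len(b.split()) >= word_count_threshold]

blocks = [
    "Home | About | Contact",  # short nav noise, filtered out
    "This article explains how asynchronous crawling keeps page fetches fast and efficient.",
]
print(filter_blocks(blocks, word_count_threshold=10))
```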

crawl_multiple - Crawl URLs in batch

crawl_multiple(
    urls=["https://example.com/page1", "https://example.com/page2"],
    max_concurrent=3,
    word_count_threshold=10
)
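
The max_concurrent parameter caps how many pages are fetched in parallel. A hedged sketch of how such a bound can be enforced with an asyncio.Semaphore (crawl_many and crawl_one are illustrative stand-ins, not the server's implementation):

```python
import asyncio

# Illustrative sketch: bound parallel crawls with an asyncio.Semaphore.
async def crawl_many(urls: list[str], max_concurrent: int = 3) -> list[str]:
    sem = asyncio.Semaphore(max_concurrent)

    async def crawl_one(url: str) -> str:
        async with sem:              # at most max_concurrent crawls run at once
            await asyncio.sleep(0)   # stand-in for the real page fetch
            return f"# Markdown for {url}"

    return await asyncio.gather(*(crawl_one(u) for u in urls))

results = asyncio.run(
    crawl_many(["https://example.com/page1", "https://example.com/page2"])
)
print(len(results))
```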

extract_structured - Extract structured data

extract_structured(
    url="https://example.com/products",
    instruction="Extract all product names and prices",
    provider="openai/gpt-4o-mini",
    api_token="your-api-key"
)

get_screenshot - Capture a screenshot

get_screenshot(
    url="https://example.com",
    full_page=True,
    viewport_width=1920,
    viewport_height=1080
)

deep_crawl - Deep crawl a website

deep_crawl(
    url="https://example.com",
    max_depth=2,
    max_pages=10,
    strategy="bfs"  # or "dfs"
)
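
The strategy parameter controls visit order: "bfs" explores each depth level fully before going deeper, while "dfs" follows one branch as far as allowed first. A toy sketch of the two orderings under max_depth and max_pages limits (the LINKS graph is a hypothetical stand-in for a site's internal links):

```python
from collections import deque

# Toy link graph standing in for a site's internal link structure.
LINKS = {
    "https://example.com": ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/a/1"],
    "https://example.com/b": [],
    "https://example.com/a/1": [],
}

def deep_crawl_order(start: str, max_depth: int = 2, max_pages: int = 10,
                     strategy: str = "bfs") -> list[str]:
    """Return the visit order a BFS or DFS deep crawl would produce."""
    frontier = deque([(start, 0)])
    seen, order = {start}, []
    while frontier and len(order) < max_pages:
        # BFS takes the oldest frontier entry, DFS the newest.
        url, depth = frontier.popleft() if strategy == "bfs" else frontier.pop()
        order.append(url)
        if depth < max_depth:
            for child in LINKS.get(url, []):
                if child not in seen:
                    seen.add(child)
                    frontier.append((child, depth + 1))
    return order
```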

📖 Documentation

🛠️ Development

# Run the tests
make test

# Format code
make fmt

# Lint
make lint

# Type-check
make typecheck

# Run all checks
make check

🔒 Environment Variables

# LLM API keys (for structured extraction)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Or use the generic settings
LLM_PROVIDER=openai/gpt-4o-mini
LLM_API_KEY=sk-...

📄 License

MIT License

🤝 Contributing

Issues and PRs are welcome!

💬 Support

Tools (5)

crawl_url - Crawl a single webpage and return content as Markdown.
crawl_multiple - Crawl multiple URLs concurrently.
extract_structured - Extract structured data from a webpage using LLM instructions.
get_screenshot - Capture a screenshot of a webpage.
deep_crawl - Perform a deep crawl of a website.

Environment Variables

OPENAI_API_KEY - API key for OpenAI models used in structured extraction
ANTHROPIC_API_KEY - API key for Anthropic models used in structured extraction
LLM_PROVIDER - The LLM provider to use for extraction tasks
LLM_API_KEY - General API key for the selected LLM provider

Configuration

claude_desktop_config.json
{
  "mcpServers": {
    "crawl4ai": {
      "command": "uv",
      "args": ["run", "--project", "/path/to/crawl4ai-mcp", "crawl4ai-mcp"]
    }
  }
}

Try it

Crawl the documentation page at https://docs.crawl4ai.com and summarize the key features in Markdown.
Extract all product names and prices from this e-commerce page: https://example.com/products
Take a full-page screenshot of https://example.com and save it for my review.
Perform a deep crawl of https://example.com up to a depth of 2 pages to gather site structure information.

Frequently Asked Questions

What are the key features of Crawl4AI?

Crawl single or multiple URLs into clean Markdown. Extract structured data using CSS, XPath, or LLM instructions. Capture full-page or viewport-specific screenshots. Support for deep website crawling with configurable depth and strategy. Local execution with intelligent caching and asynchronous concurrency.

What can I use Crawl4AI for?

Converting complex web documentation into clean Markdown for RAG pipelines. Automating the extraction of pricing or product data from competitor websites. Visual auditing of web layouts using automated screenshot capture. Gathering comprehensive site content for local knowledge base indexing.

How do I install Crawl4AI?

Install Crawl4AI by running: git clone <your-repo-url> && cd crawl4ai-mcp && uv sync && uv run crawl4ai-setup

What MCP clients work with Crawl4AI?

Crawl4AI works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
