ScraperAPI MCP Server


Bypass bot detection, geo-restrictions, and JavaScript rendering challenges


The ScraperAPI MCP server lets LLM clients fetch and process web pages through the ScraperAPI service.

Features

  • Full implementation of the Model Context Protocol specification
  • Seamless integration with ScraperAPI for web scraping
  • Simple setup with Python or Docker

Architecture

          ┌───────────────┐     ┌───────────────────────┐     ┌───────────────┐
          │  LLM Client   │────▶│  Scraper MCP Server   │────▶│    AI Model   │
          └───────────────┘     └───────────────────────┘     └───────────────┘
                                            │
                                            ▼
                                  ┌──────────────────┐
                                  │  ScraperAPI API  │
                                  └──────────────────┘
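Concretely, when the model decides to use a tool, the LLM client sends the MCP server a JSON-RPC 2.0 `tools/call` request over stdio, as defined by the Model Context Protocol. A minimal sketch of such a message for this server's `scrape` tool (the URL and id are illustrative):

```python
import json

# A JSON-RPC 2.0 "tools/call" request, per the Model Context Protocol,
# asking the server to run its "scrape" tool on a given URL.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "scrape",
        "arguments": {"url": "https://example.com", "render": False},
    },
}

# The MCP stdio transport sends one JSON message per line.
wire_message = json.dumps(request)
print(wire_message)
```

The server replies with a matching JSON-RPC response whose result carries the scraped content.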

Installation

The ScraperAPI MCP Server is designed to run as a local server on your machine; your LLM client launches it automatically once configured.

Prerequisites

  • Python 3.11+
  • Docker (optional)

Using Python

Install the package:

pip install scraperapi-mcp-server

Add this to your client configuration file (for Claude Desktop, this is claude_desktop_config.json):

{
  "mcpServers": {
    "ScraperAPI": {
      "command": "python",
      "args": ["-m", "scraperapi_mcp_server"],
      "env": {
        "API_KEY": "<YOUR_SCRAPERAPI_API_KEY>"
      }
    }
  }
}

Using Docker

Add this to your client configuration file:

{
  "mcpServers": {
    "ScraperAPI": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "-e",
        "API_KEY=${API_KEY}",
        "--rm",
        "scraperapi-mcp-server"
      ]
    }
  }
}

> [!TIP]
> If your command is not working (for example, you see a package not found error when trying to start the server), double-check the path you are using. To find the correct path, activate your virtual environment first, then run:

which <YOUR_COMMAND>

API Reference

Available Tools

  • scrape
    • Scrape a URL from the internet using ScraperAPI
    • Parameters:
      • url (string, required): URL to scrape
      • render (boolean, optional): Whether to render the page using JavaScript. Defaults to False. Set to True only if the page requires JavaScript rendering to display its content.
      • country_code (string, optional): Activate country geotargeting (ISO 2-letter code)
      • premium (boolean, optional): Activate premium residential and mobile IPs
      • ultra_premium (boolean, optional): Activate advanced bypass mechanisms. Cannot be combined with premium
      • device_type (string, optional): Set request to use mobile or desktop user agents
      • output_format (string, optional): Instructs the API which file type the response should be
      • autoparse (boolean, optional): Activate auto parsing for select websites. Defaults to False. Set to True only if you want the output in CSV or JSON format.
    • Returns: The scraped content as a string
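These tool parameters mirror ScraperAPI's HTTP query parameters, so the server presumably forwards them to the api.scraperapi.com endpoint roughly as sketched below. This is an illustration of the mapping, not the server's actual code:

```python
from urllib.parse import urlencode

SCRAPERAPI_ENDPOINT = "https://api.scraperapi.com/"

def build_request_url(api_key: str, url: str, **options) -> str:
    """Map the scrape tool's parameters onto ScraperAPI query parameters.

    Booleans become lowercase strings ("true"/"false"), matching the usual
    query-string convention; unset (None) options are dropped.
    """
    params = {"api_key": api_key, "url": url}
    for name, value in options.items():
        if value is None:
            continue
        params[name] = str(value).lower() if isinstance(value, bool) else value
    return SCRAPERAPI_ENDPOINT + "?" + urlencode(params)

example = build_request_url(
    "<YOUR_SCRAPERAPI_API_KEY>",
    "https://example.com",
    render=True,
    country_code="us",
)
```

Fetching `example` with any HTTP client would then return the scraped page through ScraperAPI's proxy layer.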

Prompt templates

  • Please scrape this URL <URL>. If you receive a 500 server error, identify the website's geo-targeting and add the corresponding country_code to overcome geo-restrictions. If errors continue, upgrade the request to use premium proxies by adding premium=true. For persistent failures, activate ultra_premium=true to use enhanced anti-blocking measures.
  • Can you scrape URL <URL> to extract <SPECIFIC_DATA>? If the request returns missing/incomplete <SPECIFIC_DATA>, set render=true to enable JS rendering.

Configuration

Settings

  • API_KEY: Your ScraperAPI API key.
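A server like this can be expected to read the variable at startup and fail fast when it is missing. A sketch of that pattern (not the server's actual code):

```python
import os

def require_api_key() -> str:
    """Read the ScraperAPI key from the environment, failing fast if absent."""
    api_key = os.environ.get("API_KEY")
    if not api_key:
        raise RuntimeError("API_KEY environment variable is not set")
    return api_key
```

Failing at startup surfaces a missing key immediately, instead of as an authentication error on the first scrape.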

Configure Claude Desktop App & Claude Code

Claude Desktop:

  1. Open Claude Desktop and click the settings icon
  2. Select the "Developer" tab
  3. Click "Edit Config" and paste the JSON configuration file

Claude Code:

  1. Add the server manually to your .claude/settings.json with the JSON configuration file, or run:
    claude mcp add scraperapi -e API_KEY=<YOUR_SCRAPERAPI_API_KEY> -- python -m scraperapi_mcp_server
    

Configure Cursor Editor

  1. Open Cursor
  2. Access the Settings Menu
  3. Open Cursor Settings
  4. Go to Tools & Integrations section
  5. Click '+ Add MCP Server'
  6. Choose Manual and paste the JSON configuration file


Configure Windsurf Editor

  1. Open Windsurf
  2. Access the Settings Menu
  3. Click on the Cascade settings
  4. Click on the MCP server section


Try it

  • Please scrape this URL https://example.com. If you receive a 500 server error, identify the website's geo-targeting and add the corresponding country_code to overcome geo-restrictions.
  • Can you scrape https://example.com to extract the product pricing? If the request returns missing data, set render=true to enable JS rendering.
  • Scrape the content of https://example.com using premium residential proxies to ensure the request is not blocked.
  • Extract the main article text from https://example.com using mobile user agents.

Frequently Asked Questions

What are the key features of ScraperAPI?

Full implementation of the Model Context Protocol specification. Seamless integration with ScraperAPI for web scraping. Bypass bot detection and geo-restrictions. Support for JavaScript rendering. Simple setup with Python or Docker.

What can I use ScraperAPI for?

Extracting data from websites that require JavaScript rendering to display content. Accessing region-locked content by utilizing country-specific proxy rotation. Gathering competitive intelligence from websites with aggressive anti-bot protections. Automating data collection for research or analysis tasks.

How do I install ScraperAPI?

Install ScraperAPI by running: pip install scraperapi-mcp-server

What MCP clients work with ScraperAPI?

ScraperAPI works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
