Job Listings MCP Server

Local setup required. This server must be cloned and prepared on your machine before you register it in Claude Code.

1. Set the server up locally

Run this once to clone and prepare the server before adding it to Claude Code.

Run in terminal
git clone https://github.com/itsyashvardhan/mcp-server
cd mcp-server
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
2. Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server.

Run in terminal
claude mcp add job-listings-mcp -- python "<FULL_PATH_TO_MCP_SERVER>/main.py"

Replace <FULL_PATH_TO_MCP_SERVER> with the actual folder you prepared in step 1. (The server is Python, so the entry point is main.py, as shown in the Quick Start and configuration sections below.)

README.md

Scrapes, deduplicates, and stores fresh job listings from multiple platforms.

MCP Server

A standalone Python microservice that scrapes fresh job listings using Jobspy, stores them in SQLite with deduplication, and exposes a /jobs REST endpoint for embedding in a portfolio site as a live feed.


Features

  • Multi-site scraping via Jobspy
  • Tiered role search with fallback
  • Smart deduplication on insert
  • Hourly background scraping via APScheduler
  • Query filtering (location, keyword, recency)
  • CORS-enabled for direct browser fetches
  • Deploy-ready (Railway, Render)
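The tiered role search isn't detailed in this README, but the idea is to try higher-priority role queries first and fall back to lower tiers. A minimal sketch (tier names, roles, and the `search_tiered` helper are all illustrative, not the server's actual config):

```python
# Hypothetical sketch of tiered role search: scrape roles tier by tier
# and stop at the first tier that returns any results.
ROLE_TIERS = {
    "T1 - Primary": ["AI Engineer", "ML Engineer"],
    "T2 - Secondary": ["Software Engineer", "Backend Engineer"],
}

def search_tiered(scrape_fn, tiers=ROLE_TIERS):
    """Run scrape_fn per role, tier by tier; return the first non-empty tier."""
    for tier, roles in tiers.items():
        results = []
        for role in roles:
            results.extend(scrape_fn(role))
        if results:
            return tier, results
    return None, []

# Stub scraper that only matches one role, to show the fallback:
hits = lambda role: [{"job_title": role}] if role == "Software Engineer" else []
tier, jobs = search_tiered(hits)
print(tier)  # "T2 - Secondary"
```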

Architecture

APScheduler (1hr)  →  Jobspy Scraper  →  SQLite (deduped)  ←  FastAPI /jobs
                                                                    ↕
                                                          Portfolio Site (fetch)
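The "SQLite (deduped)" step can be sketched with a UNIQUE constraint plus `INSERT OR IGNORE`. The column choice below is an assumption; this README doesn't show the real schema:

```python
import sqlite3

# Dedup-on-insert sketch: a UNIQUE constraint on (job_title, company,
# apply_link) makes INSERT OR IGNORE drop repeat listings silently.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE jobs (
        id INTEGER PRIMARY KEY,
        job_title TEXT, company TEXT, apply_link TEXT,
        UNIQUE (job_title, company, apply_link)
    )
""")

rows = [
    ("AI Solutions Engineer", "Acme Corp", "https://example.com/1"),
    ("AI Solutions Engineer", "Acme Corp", "https://example.com/1"),  # duplicate
]
conn.executemany(
    "INSERT OR IGNORE INTO jobs (job_title, company, apply_link) VALUES (?, ?, ?)",
    rows,
)
count = conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
print(count)  # 1
```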

Quick Start

1. Clone & Install

cd jobs-mcp-server
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt

2. Configure

cp .env.example .env
# Edit .env as needed

3. Run

python main.py

The server starts at http://localhost:8000. An initial scrape runs automatically in the background.


API Endpoints

`GET /` — Health Check

{
  "status": "healthy",
  "service": "Job Listings MCP Server",
  "total_jobs_in_db": 142,
  "scrape_interval_hours": 1
}

`GET /jobs` — List Job Listings

Query Params:

Param     Type     Description
location  string   Filter by location (substring, case-insensitive)
keyword   string   Filter by keyword in job title
hours     int      Only jobs scraped within the last N hours
limit     int      Max results (default 100, max 500)
offset    int      Pagination offset

Example:

curl "http://localhost:8000/jobs?location=San%20Francisco&keyword=AI&hours=24"

Response:

{
  "count": 5,
  "filters": {
    "location": "San Francisco",
    "keyword": "AI",
    "hours": 24
  },
  "jobs": [
    {
      "id": 1,
      "job_title": "AI Solutions Engineer",
      "company": "Acme Corp",
      "location": "San Francisco, CA",
      "salary": "USD 120,000–160,000/yearly",
      "apply_link": "https://linkedin.com/jobs/...",
      "date_posted": "2025-01-15",
      "date_scraped": "2025-01-15T12:00:00+00:00",
      "source_site": "linkedin",
      "role_tier": "T2 — Secondary"
    }
  ]
}
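The same query can be built programmatically. A small Python sketch of assembling the `/jobs` URL from the documented filters (the `jobs_url` helper is illustrative, not part of the server):

```python
from urllib.parse import urlencode

BASE_URL = "http://localhost:8000"  # adjust to your deployment

def jobs_url(location=None, keyword=None, hours=None, limit=None, offset=None):
    """Build a /jobs URL from the optional filters documented above."""
    params = {k: v for k, v in {
        "location": location, "keyword": keyword, "hours": hours,
        "limit": limit, "offset": offset,
    }.items() if v is not None}
    return f"{BASE_URL}/jobs?{urlencode(params)}" if params else f"{BASE_URL}/jobs"

print(jobs_url(location="San Francisco", keyword="AI", hours=24))
# http://localhost:8000/jobs?location=San+Francisco&keyword=AI&hours=24
```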

`POST /scrape` — Manual Trigger

Triggers a scrape run in the background.

curl -X POST http://localhost:8000/scrape

`GET /status` — Last Scrape Status

curl http://localhost:8000/status

`GET /roles` — Configured Role Tiers

curl http://localhost:8000/roles

Deployment

Railway

  1. Fork the mcp-server repo to a new GitHub repo (or subdirectory).
  2. Connect Railway to the repo.
  3. Railway auto-detects the Dockerfile.
  4. Add a Volume at /data to persist the SQLite DB.
  5. Set environment variables in the Railway dashboard.

Render

  1. Create a new Web Service.
  2. Point to the repo/directory.
  3. Set Build Command: pip install -r requirements.txt
  4. Set Start Command: python main.py
  5. Add a Disk at /data and set DATA_DIR=/data.

🔗 Portfolio Integration

In your Next.js portfolio, fetch from the deployed URL:

// In a Next.js API route or client component
const API_URL = process.env.NEXT_PUBLIC_JOBS_API_URL || 'https://your-jobs-server.up.railway.app';

async function fetchJobs(filters?: { location?: string; keyword?: string; hours?: number }) {
  const params = new URLSearchParams();
  if (filters?.location) params.set('location', filters.location);
  if (filters?.keyword) params.set('keyword', filters.keyword);
  if (filters?.hours) params.set('hours', String(filters.hours));

  const res = await fetch(`${API_URL}/jobs?${params.toString()}`);
  if (!res.ok) throw new Error(`Jobs API error: ${res.status}`);
  return res.json();
}

License

MIT

Tools (4)

get_jobs: Retrieve a list of job listings with optional filtering by location, keyword, and time.
trigger_scrape: Manually trigger a background job scraping process.
get_status: Check the status of the last scrape operation.
get_roles: List all configured role tiers for job searching.

Environment Variables

DATA_DIR: Directory path for persisting the SQLite database.
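A sketch of how a `DATA_DIR`-style variable is typically consumed, falling back to the working directory when unset. The `jobs.db` filename is an assumption; the server's actual filename may differ:

```python
import os
from pathlib import Path

# Resolve the SQLite path from DATA_DIR, defaulting to the current
# directory when the variable is unset (filename is illustrative).
data_dir = Path(os.environ.get("DATA_DIR", "."))
db_path = data_dir / "jobs.db"
print(db_path)  # e.g. /data/jobs.db when DATA_DIR=/data
```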

Configuration

claude_desktop_config.json
{"mcpServers": {"job-listings": {"command": "python", "args": ["/path/to/main.py"]}}}

Try it

Find me recent AI solutions engineer jobs in San Francisco posted in the last 24 hours.
Trigger a manual scrape to update the job listings database.
What is the current status of the last job scrape operation?
List all the configured role tiers available for job searching.

Frequently Asked Questions

What are the key features of Job Listings MCP Server?

Multi-site job scraping using Jobspy. Automated background scraping with APScheduler. Smart deduplication of job listings. REST API endpoints for filtering and querying jobs. CORS-enabled for direct portfolio site integration.

What can I use Job Listings MCP Server for?

Embedding a live, filtered job feed into a personal portfolio website. Aggregating job listings from multiple platforms into a single database. Automating the discovery of new job opportunities based on specific keywords and locations.

How do I install Job Listings MCP Server?

Install Job Listings MCP Server by running: git clone https://github.com/itsyashvardhan/mcp-server && cd mcp-server && python -m venv venv && source venv/bin/activate && pip install -r requirements.txt

What MCP clients work with Job Listings MCP Server?

Job Listings MCP Server works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
