MCP Splunk MCP Server

Local setup required. This server has to be cloned and prepared on your machine before you register it in Claude Code.
1. Set the server up locally

Run this once to clone and prepare the server before adding it to Claude Code.

Run in terminal
git clone https://github.com/vforvishal12/mcp-splunk.git
cd mcp-splunk
pip install -r requirements.txt
2. Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server.

Run in terminal
claude mcp add -e "OPENAI_API_KEY=${OPENAI_API_KEY}" mcp-splunk -- uvicorn mcp_server:app --port 9000

Run the command from inside the mcp-splunk folder you prepared in step 1, or pass uvicorn's --app-dir flag with the full path to that folder. (This is a Python server; it matches the uvicorn command in the Configuration section below.)

Required: OPENAI_API_KEY
README.md

Automated log retrieval and threat analysis using LangGraph and RAG.

MCP Splunk — Full Setup & Architecture Guide

This guide explains:

• utilities & frameworks used
• how each component fits in the architecture
• step‑by‑step Windows local setup
• how MCP, RAG, LangGraph, Guardrails & LLM integrate
• basic → advanced usage flow


🧩 Architecture & Technology Flow

User → Streamlit UI → LangGraph Agent

            │
            ▼
   ┌────────────────────────────┐
   │     AGENT ORCHESTRATION    │
   │        LangGraph           │
   └────────────┬───────────────┘
                │
   ┌────────────┼─────────────┐
   ▼            ▼             ▼
Log Fetch    Runbook RAG   Detection Engine
(MCP API)    (Vector DB)   (Pattern Logic)

   │            │             │
   └────────────┴─────────────┘
                ▼
         LLM Reasoning Layer
        (OpenRouter / Llama3)

                ▼
          Guardrails Validation
             (Pydantic)

                ▼
          Structured Response
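The flow above can be sketched as a plain-Python pipeline. This is an illustration only: the function names (fetch_logs, detect_threats, retrieve_runbook, analyze) and the sample log lines are hypothetical stand-ins, not the repo's actual API.

```python
# Illustrative sketch of the orchestration flow above, using plain Python.
# Function and field names here are hypothetical, not the repo's actual API.

def fetch_logs() -> list[str]:
    # Stand-in for the MCP log-fetch call
    return ["Failed password for root from 10.0.0.5",
            "Accepted password for alice from 10.0.0.7"]

def detect_threats(logs: list[str]) -> list[str]:
    # Stand-in for the pattern-based detection engine
    return [line for line in logs if "Failed password" in line]

def retrieve_runbook(findings: list[str]) -> str:
    # Stand-in for runbook RAG retrieval
    return "Runbook: block source IP, review auth logs" if findings else "No runbook needed"

def analyze(query: str) -> dict:
    logs = fetch_logs()
    findings = detect_threats(logs)
    runbook = retrieve_runbook(findings)
    # In the real pipeline, LLM reasoning and Pydantic guardrails run here
    return {"query": query, "findings": findings, "runbook": runbook}

result = analyze("Any SSH brute force?")
print(result["findings"])  # ['Failed password for root from 10.0.0.5']
```

In the actual project, each of these steps is a LangGraph node rather than a bare function call, which is what gives the workflow its stateful, branching behavior.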

🧰 Utilities & Frameworks Used

Core Runtime

Python 3.10+

Primary runtime for orchestration and services.


LLM Layer

OpenRouter + Llama‑3

Used for reasoning over logs and generating security findings.


LangChain Ecosystem

LangChain

Provides embedding and vector search integration.

LangGraph

Used for deterministic agent orchestration.

✔ stateful workflows
✔ branching logic
✔ production reliability

LangSmith (Optional)

Observability & debugging for agent flows.


RAG Stack

SentenceTransformers

Creates semantic embeddings.

Model:

all-MiniLM-L6-v2

ChromaDB

Local vector database storing runbook embeddings.
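To show the idea behind the runbook store, here is a toy cosine-similarity lookup. In the real stack the embeddings come from all-MiniLM-L6-v2 and live in ChromaDB; the tiny hand-made vectors and runbook texts below are assumptions so the example runs with no ML dependencies.

```python
# Toy illustration of the vector-search idea behind the runbook RAG store.
# Real embeddings come from all-MiniLM-L6-v2 via ChromaDB; these tiny
# hand-made vectors just demonstrate nearest-neighbor retrieval.
import math

runbooks = {
    "ssh_bruteforce": ([0.9, 0.1, 0.0], "Block the source IP and rotate credentials."),
    "port_scan":      ([0.1, 0.9, 0.0], "Review firewall rules for the scanned range."),
}

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec):
    # Return the runbook whose embedding is most similar to the query
    return max(runbooks.items(), key=lambda kv: cosine(query_vec, kv[1][0]))

name, (_, text) = retrieve([0.8, 0.2, 0.0])
print(name)  # ssh_bruteforce
```

ChromaDB performs the same nearest-neighbor search, just over real sentence embeddings and with persistence to disk.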


MCP Service Layer

FastAPI

Provides log access endpoints.

Simulates enterprise log providers like Splunk or Elastic.


Guardrails

Pydantic

Validates LLM output structure.

Prevents malformed responses.
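As a sketch of how this validation works: a Pydantic model defines the expected response shape, and any LLM output missing required fields fails to parse. The SecurityFinding schema below is a hypothetical example, not the repo's actual model.

```python
# Sketch of Pydantic-based guardrails. The SecurityFinding schema is a
# hypothetical example, not the repo's actual model.
from pydantic import BaseModel, ValidationError

class SecurityFinding(BaseModel):
    threat_type: str
    source_ip: str
    severity: str
    recommendation: str

# A well-formed LLM response parses cleanly...
good = SecurityFinding(
    threat_type="ssh_bruteforce",
    source_ip="10.0.0.5",
    severity="high",
    recommendation="Block the source IP",
)

# ...while a malformed one is rejected before reaching the UI.
try:
    SecurityFinding(threat_type="ssh_bruteforce")  # missing required fields
    valid = True
except ValidationError:
    valid = False
print(valid)  # False
```

Rejecting malformed output at this boundary is what keeps the Streamlit UI and any downstream alerting from consuming free-form LLM text.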


Detection Engine

Custom Python detection for:

✔ SSH brute force attempts
✔ suspicious IP activity


🖥️ Windows Local Setup

1️⃣ Install Python

Verify:

python --version

2️⃣ Clone Repo

git clone https://github.com/vforvishal12/mcp-splunk.git
cd mcp-splunk

3️⃣ Virtual Environment

python -m venv venv
venv\Scripts\activate

4️⃣ Install Dependencies

pip install -r requirements.txt

If needed:

pip install streamlit fastapi uvicorn requests python-dotenv
pip install langchain langgraph chromadb sentence-transformers
pip install openai pydantic

5️⃣ Environment Variables

Create .env

OPENAI_API_KEY=your_key

6️⃣ Build Vector DB

Run once, from an interactive Python shell:

python
from agent.rag import build_vector_db
build_vector_db()
exit()

7️⃣ Start MCP Server

uvicorn mcp_server:app --port 9000

Verify:

http://localhost:9000/service_health


8️⃣ Launch App

streamlit run app.py

Open:

http://localhost:8501


🔄 Execution Flow

  1. User submits query
  2. Agent fetches logs via MCP
  3. Logs parsed & categorized
  4. Threat detection executed
  5. Runbook context retrieved (RAG)
  6. LLM generates security analysis
  7. Guardrails validate output
  8. Structured results displayed

🧠 Basic vs Advanced Usage

Basic

✔ run locally
✔ detect suspicious activity

Advanced

✔ integrate Splunk/Elastic
✔ stream logs via Kafka
✔ enable LangSmith tracing
✔ deploy via Docker & Kubernetes


🚀 Production Upgrade Path

  1. Replace file logs with streaming ingestion
  2. Deploy the vector DB remotely
  3. Enable SIEM alerting
  4. Add multi-host correlation

Tools (1)

log_fetch — Fetches logs for analysis.

Environment Variables

OPENAI_API_KEY (required) — API key for LLM reasoning capabilities.

Configuration

claude_desktop_config.json
{
  "mcpServers": {
    "splunk": {
      "command": "uvicorn",
      "args": ["mcp_server:app", "--port", "9000"]
    }
  }
}

Try it

Analyze the latest logs for any signs of SSH brute force attempts.
Check for suspicious IP activity in the recent log data.
Retrieve the security runbook for handling detected brute force attacks.
Generate a structured security insight report based on the current log analysis.

Frequently Asked Questions

What are the key features of MCP Splunk?

Automated log retrieval via MCP API. Threat detection engine for SSH brute force and suspicious IPs. RAG-based runbook retrieval using ChromaDB. Deterministic agent orchestration with LangGraph. Pydantic-based guardrails for structured LLM output.

What can I use MCP Splunk for?

Automating initial triage of security logs for SOC analysts. Providing real-time runbook guidance during incident response. Detecting and categorizing common network attack patterns. Standardizing security incident reporting through LLM reasoning.

How do I install MCP Splunk?

Install MCP Splunk by running: git clone https://github.com/vforvishal12/mcp-splunk.git && cd mcp-splunk && pip install -r requirements.txt

What MCP clients work with MCP Splunk?

MCP Splunk works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
