MCP Local LLM Server

Local setup required: this server must be cloned and prepared on your machine before you register it in Claude Code.

Step 1: Set the server up locally

From the cloned repository root, run this once to prepare the server before adding it to Claude Code:

npm install
npm run build
Step 2: Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server:

claude mcp add mcp-local-llm -- node "<FULL_PATH_TO_MCP_LOCAL_LLM>/dist/index.js" --settings "<FULL_PATH_TO_MCP_LOCAL_LLM>/env.settings"

Replace <FULL_PATH_TO_MCP_LOCAL_LLM> with the path to the folder you prepared in step 1.
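
Before registering, it can save a confusing failure to confirm that step 1 actually produced the built entry point. A minimal check (the MCP_DIR path below is a placeholder for wherever you cloned the project):

```shell
# Placeholder path - substitute the folder you prepared in step 1.
MCP_DIR="$HOME/mcpLocalLLM"

# "claude mcp add" registers the command but does not validate it, so the
# server would only fail later at launch if dist/index.js is missing.
if [ -f "$MCP_DIR/dist/index.js" ]; then
  echo "build output found"
else
  echo "dist/index.js missing - run npm run build in $MCP_DIR first"
fi
```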


MCP Local LLM Server

A privacy-first MCP (Model Context Protocol) server that provides unique LLM-enhanced tools for VS Code Copilot. All analysis uses your local LLM - code never leaves your machine.

Key Features

  • Privacy-First: All LLM analysis runs locally - your code never leaves your machine
  • VS Code Copilot Optimized: Designed to complement (not duplicate) VS Code's built-in tools
  • LLM-Enhanced Tools: Every tool adds intelligent analysis, not just raw data
  • Symbol-Aware: Understands code structure, not just text patterns
  • Security Scanning: Automatic detection of secrets, API keys, and vulnerabilities
  • Multiple Backends: Ollama, LM Studio, OpenRouter support

Documentation Map

  • docs/API_REFERENCE.md - full tool schemas and usage
  • docs/TOOL_VISIBILITY_TIERS.md - tool surfacing strategy (core/discoverable/hidden)
  • docs/examples/client-configuration-guide.md - IDE/client setup patterns
  • docs/operations/scripts-guide.md - operational scripts and maintenance
  • docs/operations/test-utils.md - test harness utilities
  • docs/prompts/ - curated prompt suites for QA/regression workflows

Prerequisites

  • Node.js 20+ and npm
  • One local LLM backend running (LM Studio or Ollama)
  • Python 3.10+ (only required for run_all_tests_ALL.py)
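
The prerequisites above can be sanity-checked with a small shell probe; check_tool is just a local helper defined here for illustration, not part of the project:

```shell
# Report whether each required tool is available on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

check_tool node   # needs 20+; confirm with: node --version
check_tool npm
```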

Quick Start

Windows Users - Automated Setup

# Start the server (auto-installs dependencies if needed)
start.bat

# Stop the server
stop.bat

Manual Installation

# Install dependencies from the project root
npm install
npm run build

# Configure (optional)
cp env.settings.example env.settings

# Start
npm start

VS Code Integration

There are several ways to configure MCP Local LLM with VS Code. Choose the method that best fits your workflow.

Option 1: Environment Variable (Recommended for Distribution)

Step 1: Set an environment variable pointing to your mcpLocalLLM installation:

Windows (PowerShell - add to profile for persistence):

$env:MCP_LOCAL_LLM_PATH = "C:\path\to\mcpLocalLLM"
[Environment]::SetEnvironmentVariable("MCP_LOCAL_LLM_PATH", "C:\path\to\mcpLocalLLM", "User")

macOS/Linux:

# Add to ~/.bashrc or ~/.zshrc
export MCP_LOCAL_LLM_PATH="/path/to/mcpLocalLLM"
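
On either platform, it helps to confirm the variable resolves to a real build before referencing it from mcp.json. A quick check (the install path is a placeholder):

```shell
# Placeholder install location - replace with your actual clone path.
export MCP_LOCAL_LLM_PATH="/path/to/mcpLocalLLM"

# This is the exact file VS Code will launch via ${env:MCP_LOCAL_LLM_PATH};
# "ls" on it should succeed once npm run build has been run there.
echo "$MCP_LOCAL_LLM_PATH/dist/index.js"
```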

Step 2: Create .vscode/mcp.json in any project:

{
  "servers": {
    "mcp-local-llm": {
      "command": "node",
      "args": [
        "${env:MCP_LOCAL_LLM_PATH}/dist/index.js",
        "--settings",
        "${env:MCP_LOCAL_LLM_PATH}/env.settings"
      ]
    }
  }
}

This same configuration works across all projects without modification.

Option 2: Absolute Path (Simple, Project-Specific)

Create .vscode/mcp.json with the full path:

{
  "servers": {
    "mcp-local-llm": {
      "command": "node",
      "args": [
        "C:/Users/yourname/mcpLocalLLM/dist/index.js",
        "--settings",
        "C:/Users/yourname/mcpLocalLLM/env.settings"
      ]
    }
  }
}

Note: Pass --settings to ensure the server uses the intended settings file (especially when you have multiple installs).
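
When juggling multiple installs, a quick existence check on the file you pass to --settings can prevent the server from silently starting with the wrong defaults. A sketch (the path is a placeholder):

```shell
# Placeholder - point this at the settings file you pass via --settings.
SETTINGS="/path/to/mcpLocalLLM/env.settings"

if [ -f "$SETTINGS" ]; then
  echo "using settings: $SETTINGS"
else
  echo "settings file not found: $SETTINGS" >&2
fi
```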

Option 3: Per-Project Configuration with Custom Workspace

For projects that need custom workspace settings, create a project-local env.settings:

Step 1: Copy env.settings.example to your project as env.settings

Step 2: Configure workspace roots + allowlist via the Web UI (http://127.0.0.1:3000/) or by editing [config] CONFIG_JSON in env.settings.
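
For illustration only, a project-local env.settings might carry the workspace configuration in its [config] section. The key names inside CONFIG_JSON below are hypothetical - the real schema is defined by the server, so prefer editing through the Web UI:

```ini
# Hypothetical sketch - CONFIG_JSON's actual keys are defined by the server,
# not by this example. The Web UI (http://127.0.0.1:3000/) writes this for you.
[config]
CONFIG_JSON={"workspaceRoots": ["/path/to/your/project"], "allowlist": ["src/**"]}
```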

Step 3: Point your .vscode/mcp.json to this settings file:

{
  "servers": {
    "mcp-local-llm": {
      "command": "node",
      "args": [
        "${env:MCP_LOCAL_LLM_PATH}/dist/index.js",
        "--settings",
        "${workspaceFolder}/env.settings"
      ]
    }
  }
}
Optional: OpenRouter for Testing

If you want to test with an external SOTA backend (not needed for normal use):

{
  "servers": {
    "mcp-local-llm": {
      "command": "node",
      "args": [
        "${env:MCP_LOCAL_LLM_PATH}/dist/index.js",
        "--settings",
        "${env:MCP_LOCAL_LLM_PATH}/env.settings"
      ],
      "env": {
        "TESTING_MODE_ENABLED": "true",
        "OPENROUTER_API_KEY": "sk-or-v1-your-key-here"
      }
    }
  }
}

Other IDEs

Cursor

Create .cursor/mcp.json:

{
  "mcpServers": {
    "mcp-local-llm": {
      "command": "node",
      "args": ["${env:MCP_LOCAL_LLM_PATH}/dist/index.js"],
      "env": {
        "WORKSPACE_ROOT": "${workspaceFolder}"
      }
    }
  }
}
Windsurf

Create .windsurf/mcp.json:

{
  "mcpServers": {
    "mcp-local-llm": {
      "command": "node",
      "args": ["${env:MCP_LOCAL_LLM_PATH}/dist/index.js"],
      "env": {
        "WORKSPACE_ROOT": "${workspaceFolder}"
      }
    }
  }
}
Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "mcp-local-llm": {
      "command": "node",
      "args": [
        "/path/to/mcpLocalLLM/dist/index.js",
        "--settings",
        "/path/to/mcpLocalLLM/env.settings"
      ]
    }
  }
}

Environment Variables

  • MCP_LOCAL_LLM_PATH - Path to the mcpLocalLLM installation directory
  • OPENROUTER_API_KEY - API key for OpenRouter when testing in external mode
  • TESTING_MODE_ENABLED - Enables testing mode for external backends
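
A typo in claude_desktop_config.json can keep the server from loading, so it is worth validating the JSON before restarting Claude Desktop. One way, using python3's built-in parser (the inline string simply mirrors the example above; the file's location varies by OS):

```shell
# Validate the example config before copying it into claude_desktop_config.json.
CONFIG='{"mcpServers": {"mcp-local-llm": {"command": "node", "args": ["/path/to/mcpLocalLLM/dist/index.js", "--settings", "/path/to/mcpLocalLLM/env.settings"]}}}'

if echo "$CONFIG" | python3 -m json.tool > /dev/null 2>&1; then
  echo "valid JSON"
else
  echo "invalid JSON" >&2
fi
```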

Try it

Analyze the current codebase for potential security vulnerabilities or hardcoded secrets.
Perform a symbol-aware code review of the authentication module.
Scan the workspace for API keys or sensitive credentials that should not be committed.
Explain the structure and dependencies of the current project using local LLM analysis.

Frequently Asked Questions

What are the key features of MCP Local LLM Server?

Privacy-first local LLM analysis. Symbol-aware code structure understanding. Automatic security scanning for secrets and vulnerabilities. Support for multiple backends including Ollama and LM Studio. Optimized for VS Code Copilot workflows.

What can I use MCP Local LLM Server for?

Performing secure, offline code reviews on sensitive enterprise projects. Automating the detection of leaked API keys or secrets before pushing code. Exploring complex codebases with symbol-aware navigation and analysis. Integrating local LLM intelligence into IDEs like Cursor, Windsurf, and VS Code.

How do I install MCP Local LLM Server?

Install MCP Local LLM Server by running: npm install && npm run build

What MCP clients work with MCP Local LLM Server?

MCP Local LLM Server works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
