Tuning Engines MCP Server

Add it to Claude Code

Run this in a terminal:
claude mcp add tuning-engines -- npx -y tuningengines-cli mcp

Tuning Engines CLI & MCP Server

Own your sovereign AI model. Domain-specific fine-tuning of open-source LLMs and SLMs with total control and zero infrastructure hassle.

Tuning Engines provides specialized tuning agents to tailor top open models to your needs — fast, predictable, fully delivered. Fine-tune Qwen, Llama, DeepSeek, Mistral, Gemma, Phi, StarCoder, and CodeLlama models from 1B to 72B parameters on your data via CLI or any MCP-compatible AI assistant. LoRA, QLoRA, and full fine-tuning supported. GPU provisioning, training orchestration, and model delivery fully managed.

Training Agents

Tuning Engines uses specialized agents that control how your data is analyzed and converted into training data. Each agent produces a different kind of domain-specific fine-tuned model optimized for its use case. Current agents focus on code, with more coming for customer support, data extraction, security review, ops, and other domains.

Cody (`code_repo`) — Code Autocomplete Agent

Cody fine-tunes on your GitHub repo using QLoRA (4-bit quantized LoRA) via the Axolotl framework (HuggingFace Transformers + PEFT). It learns your codebase's patterns, naming conventions, and project structure to produce a fast, lightweight adapter optimized for real-time completions.

Best for: code autocomplete, inline suggestions, tab-complete, code style matching, pattern completion.

te jobs create --agent code_repo \
  --base-model Qwen/Qwen2.5-Coder-7B-Instruct \
  --repo-url https://github.com/your-org/your-repo \
  --output-name my-cody-model

SIERA (`sera_code_repo`) — Bug-Fix Specialist

SIERA (Synthetic Intelligent Error Resolution Agent) uses the Open Coding Agents approach from AllenAI to generate targeted bug-fix training data from your repository. It synthesizes realistic error scenarios and their resolutions, then fine-tunes a model that learns your team's debugging style, error handling conventions, and fix patterns.

Best for: debugging, error resolution, patch generation, root cause analysis, fix suggestions.

te jobs create --agent sera_code_repo \
  --quality-tier high \
  --base-model Qwen/Qwen2.5-Coder-7B-Instruct \
  --repo-url https://github.com/your-org/your-repo \
  --output-name my-siera-model

Quality tiers (SIERA only):

  • low — Faster, fewer synthetic pairs (default)
  • high — Deeper analysis, more training data, better results

Coming Soon

Agent      Persona   What it does
Resolve    Mira      Fine-tunes on support tickets, macros, and KB articles for automated ticket resolution
Extractor  Flux      Trains for strict schema extraction from docs, PDFs, and business text
Guard      Aegis     Security-focused code reviewer that catches risky patterns and proposes safer fixes
OpsPilot   Atlas     Incident response agent trained on runbooks, postmortems, and on-call notes

Supported Base Models

Size    Models
3B      Qwen/Qwen2.5-Coder-3B-Instruct
7B      codellama/CodeLlama-7b-hf, deepseek-ai/deepseek-coder-7b-instruct-v1.5, Qwen/Qwen2.5-Coder-7B-Instruct
13-15B  codellama/CodeLlama-13b-Instruct-hf, bigcode/starcoder2-15b, Qwen/Qwen2.5-Coder-14B-Instruct
32-34B  deepseek-ai/deepseek-coder-33b-instruct, codellama/CodeLlama-34b-Instruct-hf, Qwen/Qwen2.5-Coder-32B-Instruct
70-72B  codellama/CodeLlama-70b-Instruct-hf, meta-llama/Llama-3.1-70B-Instruct, Qwen/Qwen2.5-72B-Instruct

Quick Start

npm install -g tuningengines-cli

# Sign up or log in (opens browser — works for new accounts too)
te auth login

# Add credits (opens browser to billing page)
te billing add-credits

# Estimate cost before training
te jobs estimate --base-model Qwen/Qwen2.5-Coder-7B-Instruct

# Train Cody on your repo
te jobs create --agent code_repo \
  --base-model Qwen/Qwen2.5-Coder-7B-Instruct \
  --repo-url https://github.com/your-org/your-repo \
  --output-name my-model

# Monitor training
te jobs status <job-id> --watch

# View your trained models
te models list
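The Quick Start steps above can be parameterized in a small wrapper script so the same flow works across repos. A minimal sketch, using only the flags shown in this README; the `BASE_MODEL`, `REPO_URL`, and `OUTPUT_NAME` variables are illustrative names for this script, not CLI features, and the sketch prints the command it would run rather than executing it:

```shell
#!/bin/sh
# Sketch: assemble the Cody training command from three overridable
# variables (illustrative names, not part of the `te` CLI). Override
# any of them in the environment, e.g. REPO_URL=... sh train-cody.sh
BASE_MODEL="${BASE_MODEL:-Qwen/Qwen2.5-Coder-7B-Instruct}"
REPO_URL="${REPO_URL:-https://github.com/your-org/your-repo}"
OUTPUT_NAME="${OUTPUT_NAME:-my-cody-model}"

# Build the command string; print it for review instead of running it.
CMD="te jobs create --agent code_repo --base-model $BASE_MODEL --repo-url $REPO_URL --output-name $OUTPUT_NAME"
echo "$CMD"
```

Once the printed command looks right, run it directly, then follow the job with `te jobs status <job-id> --watch` as above.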

MCP Server Setup

The CLI includes a built-in MCP server with 18 tools. Any AI assistant that supports MCP can fine-tune models, manage training jobs, and check billing through natural language.

Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "tuning-engines": {
      "command": "npx",
      "args": ["-y", "tuningengines-cli", "mcp"]
    }
  }
}

Example tools:

  • jobs_create — Create a new fine-tuning job for a specified agent and base model.
  • jobs_status — Check the status of a specific training job.
  • models_list — List all trained models associated with the account.
  • billing_add_credits — Add credits to the account for training.

Configuration

claude_desktop_config.json
{"mcpServers": {"tuning-engines": {"command": "npx", "args": ["-y", "tuningengines-cli", "mcp"]}}}

Try it

Create a new fine-tuning job using the code_repo agent for my repository at https://github.com/my-org/my-repo using Qwen/Qwen2.5-Coder-7B-Instruct.
What is the current status of my training job with ID 12345?
List all the fine-tuned models I have created so far.
Estimate the cost for fine-tuning a Qwen/Qwen2.5-Coder-7B-Instruct model on my codebase.

Frequently Asked Questions

What are the key features of Tuning Engines?

  • Fine-tune open-source models including Qwen, Llama, DeepSeek, Mistral, and StarCoder.
  • Support for LoRA, QLoRA, and full fine-tuning techniques.
  • Specialized agents for code autocomplete (Cody) and bug-fix resolution (SIERA).
  • Managed GPU provisioning, training orchestration, and model delivery.
  • Built-in MCP server for managing training jobs and billing via AI assistants.

What can I use Tuning Engines for?

  • Training a lightweight code autocomplete model tailored to a specific company's internal naming conventions.
  • Generating a bug-fix specialist model that learns a team's specific debugging style and error-handling patterns.
  • Automating the creation of domain-specific models for codebases without managing infrastructure.
  • Estimating training costs for large-scale model fine-tuning before committing resources.

How do I install Tuning Engines?

Install Tuning Engines by running: npm install -g tuningengines-cli

What MCP clients work with Tuning Engines?

Tuning Engines works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
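For example, Cursor reads MCP servers from a `.cursor/mcp.json` file (project-level, or `~/.cursor/mcp.json` globally) using the same `mcpServers` schema as Claude Desktop. A sketch, reusing the exact server entry from the Configuration section above:

```json
{
  "mcpServers": {
    "tuning-engines": {
      "command": "npx",
      "args": ["-y", "tuningengines-cli", "mcp"]
    }
  }
}
```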
