# Tuning Engines CLI & MCP Server
Own your sovereign AI model. Domain-specific fine-tuning of open-source LLMs and SLMs with total control and zero infrastructure hassle.
Tuning Engines provides specialized tuning agents to tailor top open models to your needs — fast, predictable, fully delivered. Fine-tune Qwen, Llama, DeepSeek, Mistral, Gemma, Phi, StarCoder, and CodeLlama models from 1B to 72B parameters on your data via CLI or any MCP-compatible AI assistant. LoRA, QLoRA, and full fine-tuning supported. GPU provisioning, training orchestration, and model delivery fully managed.
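The LoRA technique mentioned above trains small low-rank update matrices instead of the full weight matrix. A minimal numpy sketch of the idea (dimensions and values are illustrative, not what Tuning Engines uses internally):

```python
import numpy as np

d, r, alpha = 8, 2, 4            # hidden size, LoRA rank (r << d), scaling
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection (zero init)

# At initialization B is zero, so the adapted layer equals the base layer.
W_init = W + (alpha / r) * (B @ A)

# After training, only A and B have changed; the weight update B @ A
# has rank at most r, so the adapter stays tiny (2*d*r vs d*d params).
B = rng.standard_normal((d, r))          # stand-in for trained values
delta = (alpha / r) * (B @ A)
W_tuned = W + delta
```

This is why adapters are cheap to train and deliver: for this toy layer the update carries 2·d·r = 32 trainable parameters instead of d² = 64, and the gap widens rapidly at real model sizes.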
## Training Agents
Tuning Engines uses specialized agents that control how your data is analyzed and converted into training data. Each agent produces a different kind of domain-specific fine-tuned model optimized for its use case. Current agents focus on code, with more coming for customer support, data extraction, security review, ops, and other domains.
### Cody (`code_repo`) — Code Autocomplete Agent
Cody fine-tunes on your GitHub repo using QLoRA (4-bit quantized LoRA) via the Axolotl framework (HuggingFace Transformers + PEFT). It learns your codebase's patterns, naming conventions, and project structure to produce a fast, lightweight adapter optimized for real-time completions.
Best for: code autocomplete, inline suggestions, tab-complete, code style matching, pattern completion.
```bash
te jobs create --agent code_repo \
  --base-model Qwen/Qwen2.5-Coder-7B-Instruct \
  --repo-url https://github.com/your-org/your-repo \
  --output-name my-cody-model
```
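Under the hood, an Axolotl QLoRA run is driven by a YAML config. A minimal sketch of the relevant settings (field values are illustrative — the service manages this configuration for you):

```yaml
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
load_in_4bit: true        # QLoRA: base weights quantized to 4-bit
adapter: qlora            # train a LoRA adapter on top of the quantized base
lora_r: 32                # adapter rank
lora_alpha: 16            # adapter scaling factor
lora_dropout: 0.05
lora_target_modules:      # attention projections to adapt
  - q_proj
  - k_proj
  - v_proj
  - o_proj
```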
### SIERA (`sera_code_repo`) — Bug-Fix Specialist
SIERA (Synthetic Intelligent Error Resolution Agent) uses the Open Coding Agents approach from AllenAI to generate targeted bug-fix training data from your repository. It synthesizes realistic error scenarios and their resolutions, then fine-tunes a model that learns your team's debugging style, error handling conventions, and fix patterns.
Best for: debugging, error resolution, patch generation, root cause analysis, fix suggestions.
```bash
te jobs create --agent sera_code_repo \
  --quality-tier high \
  --base-model Qwen/Qwen2.5-Coder-7B-Instruct \
  --repo-url https://github.com/your-org/your-repo \
  --output-name my-siera-model
```
Quality tiers (SIERA only):
- `low` — Faster, fewer synthetic pairs (default)
- `high` — Deeper analysis, more training data, better results
## Coming Soon
| Agent | Persona | What it does |
|---|---|---|
| Resolve | Mira | Fine-tunes on support tickets, macros, and KB articles for automated ticket resolution |
| Extractor | Flux | Trains for strict schema extraction from docs, PDFs, and business text |
| Guard | Aegis | Security-focused code reviewer that catches risky patterns and proposes safer fixes |
| OpsPilot | Atlas | Incident response agent trained on runbooks, postmortems, and on-call notes |
## Supported Base Models
| Size | Models |
|---|---|
| 3B | Qwen/Qwen2.5-Coder-3B-Instruct |
| 7B | codellama/CodeLlama-7b-hf, deepseek-ai/deepseek-coder-7b-instruct-v1.5, Qwen/Qwen2.5-Coder-7B-Instruct |
| 13-15B | codellama/CodeLlama-13b-Instruct-hf, bigcode/starcoder2-15b, Qwen/Qwen2.5-Coder-14B-Instruct |
| 32-34B | deepseek-ai/deepseek-coder-33b-instruct, codellama/CodeLlama-34b-Instruct-hf, Qwen/Qwen2.5-Coder-32B-Instruct |
| 70-72B | codellama/CodeLlama-70b-Instruct-hf, meta-llama/Llama-3.1-70B-Instruct, Qwen/Qwen2.5-72B-Instruct |
## Quick Start
```bash
npm install -g tuningengines-cli

# Sign up or log in (opens browser — works for new accounts too)
te auth login

# Add credits (opens browser to billing page)
te billing add-credits

# Estimate cost before training
te jobs estimate --base-model Qwen/Qwen2.5-Coder-7B-Instruct

# Train Cody on your repo
te jobs create --agent code_repo \
  --base-model Qwen/Qwen2.5-Coder-7B-Instruct \
  --repo-url https://github.com/your-org/your-repo \
  --output-name my-model

# Monitor training
te jobs status <job-id> --watch

# View your trained models
te models list
```
## MCP Server Setup
The CLI includes a built-in MCP server with 18 tools. Any AI assistant that supports MCP can fine-tune models, manage training jobs, and check billing through natural language.
### Claude Desktop
Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "tuning-engines": {
      "command": "npx",
      "args": ["-y", "tuningengines-cli", "mcp"]
    }
  }
}
```

Tools include:

- `jobs_create`: Create a new fine-tuning job for a specified agent and base model.
- `jobs_status`: Check the status of a specific training job.
- `models_list`: List all trained models associated with the account.
- `billing_add_credits`: Add credits to the account for training.
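Once connected, the assistant invokes these tools over MCP's JSON-RPC transport. A `tools/call` request to the `jobs_create` tool might look like the following sketch (the argument names mirror the CLI flags above and are assumptions, not a documented schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "jobs_create",
    "arguments": {
      "agent": "code_repo",
      "base_model": "Qwen/Qwen2.5-Coder-7B-Instruct",
      "repo_url": "https://github.com/your-org/your-repo",
      "output_name": "my-model"
    }
  }
}
```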