# ClarifyPrompt MCP
An MCP server that transforms vague prompts into platform-optimized prompts for 58+ AI platforms across 7 categories — with support for registering custom platforms and providing markdown instruction files.
Send a raw prompt. Get back a version specifically optimized for Midjourney, DALL-E, Sora, Runway, ElevenLabs, Claude, ChatGPT, or any of the 58+ supported platforms — with the right syntax, parameters, and structure each platform expects. Register your own platforms and provide custom optimization instructions via .md files.
## How It Works

You write:

```
a dragon flying over a castle at sunset
```

ClarifyPrompt returns (for Midjourney):

```
a majestic dragon flying over a medieval castle at sunset --ar 16:9 --v 6.1 --style raw --q 2 --chaos 30 --s 700
```

ClarifyPrompt returns (for DALL-E):

```
A majestic dragon flying over a castle at sunset. Size: 1024x1024
```

Same prompt, different platform, completely different output. ClarifyPrompt knows what each platform expects.
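The per-platform differences above come down to each platform's parameter syntax. As a rough illustration only (this is not ClarifyPrompt's actual implementation, and the map below is hypothetical), the idea can be sketched as a platform-to-template mapping:

```typescript
// Illustrative sketch: map each platform ID to a formatting function.
// ClarifyPrompt's real templates are far richer and LLM-driven.
const platformSuffix: Record<string, (prompt: string) => string> = {
  // Midjourney expects inline "--flag value" parameters.
  midjourney: (p) => `${p} --ar 16:9 --v 6.1 --style raw`,
  // DALL-E takes plain prose plus a size hint.
  "dall-e": (p) => `${p}. Size: 1024x1024`,
};

const raw = "a dragon flying over a castle at sunset";
console.log(platformSuffix["midjourney"](raw));
// a dragon flying over a castle at sunset --ar 16:9 --v 6.1 --style raw
```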
## Quick Start

### With Claude Desktop

Add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "clarifyprompt": {
      "command": "npx",
      "args": ["-y", "clarifyprompt-mcp"],
      "env": {
        "LLM_API_URL": "http://localhost:11434/v1",
        "LLM_MODEL": "qwen2.5:7b"
      }
    }
  }
}
```
### With Claude Code

```sh
claude mcp add clarifyprompt -- npx -y clarifyprompt-mcp
```

Set the environment variables in your shell before launching:

```sh
export LLM_API_URL=http://localhost:11434/v1
export LLM_MODEL=qwen2.5:7b
```
### With Cursor

Add to your `.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "clarifyprompt": {
      "command": "npx",
      "args": ["-y", "clarifyprompt-mcp"],
      "env": {
        "LLM_API_URL": "http://localhost:11434/v1",
        "LLM_MODEL": "qwen2.5:7b"
      }
    }
  }
}
```
## Supported Platforms (58+ built-in, unlimited custom)
| Category | Platforms | Default |
|---|---|---|
| Image (10) | Midjourney, DALL-E 3, Stable Diffusion, Flux, Ideogram, Leonardo AI, Adobe Firefly, Grok Aurora, Google Imagen 3, Recraft | Midjourney |
| Video (11) | Sora, Runway Gen-3, Pika Labs, Kling AI, Luma, Minimax/Hailuo, Google Veo 2, Wan, HeyGen, Synthesia, CogVideoX | Runway |
| Chat (9) | Claude, ChatGPT, Gemini, Llama, DeepSeek, Qwen, Kimi, GLM, Minimax | Claude |
| Code (9) | Claude, ChatGPT, Cursor, GitHub Copilot, Windsurf, DeepSeek Coder, Qwen Coder, Codestral, Gemini | Claude |
| Document (8) | Claude, ChatGPT, Gemini, Jasper, Copy.ai, Notion AI, Grammarly, Writesonic | Claude |
| Voice (7) | ElevenLabs, OpenAI TTS, Fish Audio, Sesame, Google TTS, PlayHT, Kokoro | ElevenLabs |
| Music (4) | Suno AI, Udio, Stable Audio, MusicGen | Suno |
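Beyond the built-ins, custom platforms can be registered with markdown instruction files, as mentioned above. This README does not document the registration mechanics or file layout, so the following is only a hypothetical sketch of what such an instruction file's content might cover (the platform name and rules are invented):

```markdown
# MyRenderer (custom image platform)

- Keep prompts under 300 characters.
- Append rendering flags in the form `::flag=value`.
- Always specify an aspect ratio, e.g. `::ar=16:9`.
- Prefer concrete visual nouns over abstract descriptions.
```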
## Tools

### `optimize_prompt`

The main tool. Optimizes a prompt for a specific AI platform.

```json
{
  "prompt": "a cat sitting on a windowsill",
  "category": "image",
  "platform": "midjourney",
  "mode": "concise"
}
```
All parameters except `prompt` are optional. When `category` and `platform` are omitted, ClarifyPrompt auto-detects them from the prompt content.
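The README does not specify how auto-detection works internally. Purely as an illustration (not the server's actual logic), a naive keyword heuristic could look like this:

```typescript
// Hypothetical sketch of category auto-detection via keyword matching.
// The real server may use an LLM or a much richer classifier.
const categoryKeywords: Record<string, string[]> = {
  image: ["photo", "illustration", "painting", "sunset", "portrait"],
  video: ["video", "clip", "animation", "camera pans"],
  voice: ["narrate", "voiceover", "speech"],
  music: ["song", "melody", "beat"],
};

function detectCategory(prompt: string): string {
  const lower = prompt.toLowerCase();
  for (const [category, words] of Object.entries(categoryKeywords)) {
    if (words.some((w) => lower.includes(w))) return category;
  }
  return "chat"; // fall back to a default category
}

console.log(detectCategory("sunset over mountains")); // image
```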
Three calling modes:
| Mode | Example |
|---|---|
| Zero-config | `{ "prompt": "sunset over mountains" }` |
| Category only | `{ "prompt": "...", "category": "image" }` |
| Fully explicit | `{ "prompt": "...", "category": "image", "platform": "dall-e" }` |
Parameters:

| Parameter | Required | Description |
|---|---|---|
| `prompt` | Yes | The prompt to optimize |
| `category` | No | One of `chat`, `image`, `video`, `voice`, `music`, `code`, `document`. Auto-detected when omitted. |
| `platform` | No | Platform ID (e.g. `midjourney`, `dall-e`, `sora`, `claude`). Uses the category default when omitted. |
| `mode` | No | Output style: `concise`, `detailed`, `structured`, `step-by-step`, `bullet-points`, `technical`, `simple`. Default: `detailed`. |
| `enrich_context` | No | Set `true` to use web search for context enrichment. Default: `false`. |
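Putting the parameters together, a fully explicit request could look like the following (the parameter names come from the table above; the specific values are illustrative):

```json
{
  "prompt": "a cat sitting on a windowsill",
  "category": "image",
  "platform": "dall-e",
  "mode": "detailed",
  "enrich_context": true
}
```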
Response:

```json
{
  "originalPrompt": "a dragon flying over a castle at sunset",
  "optimizedPrompt": "a majestic dragon flying over a medieval castle at sunset --ar 16:9 --v 6.1 --style raw --q 2 --s 700",
  "category": "image",
  "platform": "midjourney",
  "mode": "concise",
  "detection": {
    "autoDetected": true,
    "detectedCategory": "image"
  }
}
```
## Environment Variables

| Variable | Required | Description |
|---|---|---|
| `LLM_API_URL` | Yes | The API endpoint for the LLM used for optimization |
| `LLM_MODEL` | Yes | The specific model to use for prompt optimization |