Zero-cost, crash-proof LLM orchestration
DagPipe
The reliability layer that makes AI workflows safe to ship: crash recovery, schema validation, and cost routing
NeurIPS 2025 research analyzing 1,642 real-world multi-agent execution traces found failure rates of 41–86.7% across 7 state-of-the-art open-source systems. The root cause: cascading error propagation, where one failed node corrupts all downstream nodes.
DagPipe makes cascade failure structurally impossible.
Every node's output is independently validated and checkpointed before the next node executes. A failure at node 4 cannot corrupt nodes 1, 2, or 3. Delete nothing. Just re-run. DagPipe resumes exactly where it stopped, automatically.
Pipeline: research → outline → draft → edit → publish
↑
crashed here
Re-run → research ✓ (restored) → outline ✓ (restored) → draft (re-runs) → ...
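The resume behaviour sketched above can be illustrated with the standard library alone. This is a minimal sketch of the idea, not DagPipe's actual implementation; the function name `run_pipeline` and the one-JSON-file-per-node checkpoint layout are assumptions:

```python
import json
from graphlib import TopologicalSorter
from pathlib import Path

def run_pipeline(nodes, deps, checkpoint_dir="checkpoints"):
    """Execute nodes in topological order, checkpointing each result to JSON.

    nodes: {name: callable(inputs_dict) -> JSON-serializable result}
    deps:  {name: set of upstream node names}

    On re-run, finished nodes are restored from disk instead of re-executed,
    so a crash at node 4 never touches the results of nodes 1-3.
    """
    ckpt_dir = Path(checkpoint_dir)
    ckpt_dir.mkdir(exist_ok=True)
    results = {}
    for name in TopologicalSorter(deps).static_order():
        ckpt = ckpt_dir / f"{name}.json"
        if ckpt.exists():                                  # restored, not re-run
            results[name] = json.loads(ckpt.read_text())
            continue
        inputs = {d: results[d] for d in deps.get(name, ())}
        results[name] = nodes[name](inputs)                # run the node
        ckpt.write_text(json.dumps(results[name]))         # persist before moving on
    return results
```

Because each result is written before the next node starts, deleting nothing and simply re-running restores every completed node from its checkpoint file.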
Zero infrastructure. Zero subscription. Runs entirely on free-tier APIs.
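One way a pipeline can stay on free-tier APIs is a simple difficulty gate that escalates only when needed. This is a hypothetical sketch: `route_task` and the difficulty heuristic are illustrative, not DagPipe's actual routing logic; the callables follow the library's provider-agnostic "any callable works as a model function" contract:

```python
def route_task(prompt, free_model, premium_model,
               hard_markers=("analyze", "prove", "debug")):
    """Send cheap, simple prompts to a free-tier model; escalate hard ones.

    free_model / premium_model are any callables (prompt: str) -> str.
    The length threshold and keyword markers below are purely illustrative.
    """
    looks_hard = len(prompt) > 2000 or any(m in prompt.lower() for m in hard_markers)
    model = premium_model if looks_hard else free_model
    return model(prompt)
```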
Install
pip install dagpipe-core
Requirements: Python 3.12+ · pydantic >= 2.0 · pyyaml · A free Groq API key (no credit card)
Three Ways to Use DagPipe
For developers: install the library and build crash-proof LLM pipelines in Python:
pip install dagpipe-core
For non-coders: describe your workflow in plain English and receive production-ready, crash-proof pipeline code as a downloadable zip. No coding required: 👉 Pipeline Generator on Apify ($0.05/run)
For AI agents and IDE users: connect directly via MCP. Use DagPipe from Claude Desktop, Cursor, Windsurf, or any MCP-compatible client without writing any code: 👉 DagPipe Generator MCP on Smithery
The generator outputs DagPipe pipelines: every generated zip ships with crash recovery, schema validation, and cost routing built in by default. No other LLM pipeline framework ships this.
Why DagPipe?
| 🔴 Without DagPipe | 🟢 With DagPipe |
|---|---|
| Pipeline crashes = start over from zero | JSON checkpointing: resume from last successful node |
| Paying for large models on every task | Cognitive routing: route easy tasks to free-tier models |
| LLM returns malformed JSON | Guaranteed structured output: auto-retry with error feedback |
| Tight coupling to one provider | Provider-agnostic: any callable works as a model function |
| Fragile sequential scripts | Topological DAG execution: safe dependency resolution |
| Silent bad data passes through | Semantic assertions: catch structurally valid but wrong output |
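The structured-output row in the table describes retrying with the validation error fed back to the model. A stdlib-only sketch of that loop shape follows; it is illustrative (DagPipe validates against Pydantic schemas, and `structured_call` is a hypothetical name), but the retry-with-feedback idea is the same:

```python
import json

def structured_call(model_fn, prompt, max_attempts=3):
    """Call model_fn (any callable: prompt -> str) and require valid JSON.

    On a parse failure, append the error to the prompt so the model can
    self-correct on the next attempt, instead of letting malformed output
    propagate downstream.
    """
    last_err = None
    for _ in range(max_attempts):
        raw = model_fn(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as err:
            last_err = err
            prompt += (f"\nYour last reply was not valid JSON ({err}). "
                       "Respond with valid JSON only.")
    raise ValueError(f"no valid JSON after {max_attempts} attempts: {last_err}")
```

A semantic assertion (the last table row) would be an extra check after parsing, e.g. verifying that a "summary" field is non-empty even though the JSON itself is well-formed.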
What's New in v0.2.3
v0.2.3 adds the official MCP Registry metadata and identifier to the package repository, enabling one-click discovery on the official MCP Registry.
What's New in v0.2.2
v0.2.2 improves PyPI discoverability with optimized metadata, a clearer project description, and enhanced AI agent categorization.
What's New in v0.2.1
v0.2.1 brings crucial generator reliability fixes and a highly requested DX feature:
- **Verbose output**: pass `verbose=True` to the `PipelineOrchestrator` to get real-time, per-node CLI progress updates with execution times, node descriptions, and running costs.
- **Generator core fixes**: reliability fixes to the pipeline generator.
Tools (1)
generate_pipeline: Generates a crash-proof Python LLM pipeline based on a natural language description.
Environment Variables
GROQ_API_KEY (required): API key for Groq to power pipeline generation and execution.
Configuration
{
  "mcpServers": {
    "dagpipe": {
      "command": "npx",
      "args": ["-y", "@devilsfave/dagpipe"]
    }
  }
}