Three specialized AI agents, each with a defined role. QA failures loop back to the Engineer automatically — up to three times before escalating.
Designs the workflow blueprint. Selects node types, defines execution flow, identifies required credentials, and consults the RAG-indexed pattern library.
Generates the complete n8n workflow JSON from the blueprint. Injects matched patterns from the project-local library to reproduce proven approaches exactly.
Validates the workflow against structural rules and n8n constraints. On failure, analyzes the error type, applies targeted fixes, and loops back to the Engineer.
workflow.json
README.md (+ Mermaid diagram)
The full workflow lifecycle — from first generation to production deployment.
Generate a complete workflow from a plain-English description. The agentic pipeline handles design, build, and validation end-to-end.
Modify a local workflow file or a live n8n workflow using natural language. Interactively browse local files and remote instance workflows.
Deploy ephemerally and validate. AI self-repair loop analyzes errors and applies targeted fixes automatically. Save fixtures for offline replay.
Push a local workflow JSON to your n8n instance. CI-friendly non-interactive flags for create-or-update without prompts.
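The non-interactive mode slots into CI pipelines. A sketch of a GitHub Actions step, assuming a hypothetical `deploy` subcommand and flag names (check n8m's own docs for the real ones):

```yaml
# Illustrative CI step — subcommand and flag names are assumptions, not confirmed.
- name: Deploy workflow to n8n
  run: n8m deploy workflow.json --non-interactive
  env:
    N8N_API_KEY: ${{ secrets.N8N_API_KEY }}
    N8N_BASE_URL: ${{ secrets.N8N_BASE_URL }}
```

Because the command neither prompts nor blocks, the same step works for both first-time creation and subsequent updates.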
Extract reusable engineering patterns from validated workflows. Import community patterns from GitHub with n8m learn --github. Patterns feed directly into future Engineer prompts.
Capture real execution data from n8n and replay it offline. No live instance, credentials, or external API calls needed for subsequent test runs.
Generate visual documentation for any workflow. Outputs a Mermaid.js flowchart and AI-generated execution summary as README.md.
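The generated README embeds a Mermaid.js flowchart of the workflow's execution flow. A minimal sketch of what such a diagram looks like (node names are illustrative, not output from n8m):

```mermaid
flowchart LR
    A[Webhook Trigger] --> B[Code: transform payload]
    B --> C{IF: valid?}
    C -- yes --> D[HTTP Request: POST to API]
    C -- no  --> E[Send Email: alert]
```

Any Mermaid-aware renderer, including GitHub's README preview, displays this as a flowchart without extra tooling.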
Launch an MCP server exposing all n8m capabilities as tools for Claude Desktop, Cursor, Windsurf, and any MCP-compatible client.
Every design decision prioritizes local-first, no-lock-in, developer-grade workflows.
Credentials and workflow files live on your machine. No cloud service, no account, no telemetry. Works entirely with your existing n8n instance.
When tests fail, the AI repair loop identifies the error type, applies targeted fixes (Code nodes, payloads, binary fields), and retries — no human needed.
Every validated workflow you run n8m learn on enriches the library. Future generations automatically apply proven approaches to similar problems.
Capture real n8n execution data once, then replay it offline forever. Commit fixtures to your repo — your team runs the same tests without a live instance.
The agent pauses mid-run for human review. Sessions persist in SQLite — resume interrupted runs at any time with n8m resume <thread-id>.
Works with OpenAI, Claude, Gemini, Ollama, Groq, Together, or any OpenAI-compatible API endpoint. Switch providers with one config flag.
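Provider switching might look like the following config fragment. The key names here are assumptions for illustration only; n8m's actual configuration schema may differ:

```yaml
# Illustrative only — actual n8m config keys may differ.
provider: openai            # or: anthropic, gemini, ollama, groq, together
model: gpt-4o
api_base: http://localhost:11434/v1   # any OpenAI-compatible endpoint (e.g. local Ollama)
```

Pointing `api_base` at a local Ollama server keeps generation fully on your machine, consistent with the local-first design.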
n8m is provider-agnostic. Configure once, works everywhere.
n8m exposes 8 tools via the Model Context Protocol. Any MCP client can create, test, and deploy n8n workflows directly from a conversation.
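Registering the server in an MCP client uses the client's standard `mcpServers` config format (shown here for Claude Desktop); the exact launch command for n8m's MCP server is an assumption:

```json
{
  "mcpServers": {
    "n8m": {
      "command": "n8m",
      "args": ["mcp"]
    }
  }
}
```

Once registered, the client lists n8m's tools alongside its others, so workflow creation and testing happen inline in the conversation.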
n8m ships fast. Check GitHub for the latest.
Sponsorship funds AI API costs, continued development, and keeping up with n8n's rapidly evolving node ecosystem.
No account. No server. No lock-in. Just your n8n instance and an AI key.