New · MCP Server · Pattern Library · Offline Fixtures

Stop clicking. Start shipping.

n8m is an open-source CLI that wraps your n8n instance with an agentic AI layer. Describe what you want in plain English — the agent designs, builds, validates, and deploys it.

No account. No server. Bring your own AI key.

$ npx n8m create "your workflow"
Architecture

The agentic pipeline

Three specialized AI agents, each with a defined role. QA failures loop back to the Engineer automatically — up to three times before escalating.

Stage 01
Architect

Designs the workflow blueprint. Selects node types, defines execution flow, identifies required credentials, and consults the RAG-indexed pattern library.

Stage 02
Engineer

Generates the complete n8n workflow JSON from the blueprint. Injects matched patterns from the project-local library to reproduce proven approaches exactly.

Stage 03
QA

Validates the workflow against structural rules and n8n constraints. On failure, analyzes the error type, applies targeted fixes, and loops back to the Engineer.
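Conceptually, the QA failure loop is a bounded retry: the Architect runs once, then the Engineer/QA pair cycles until validation passes or the attempt budget is spent. An illustrative Python sketch of that control flow (not n8m's actual implementation; the function names are placeholders):

```python
MAX_QA_RETRIES = 3

def run_pipeline(description, architect, engineer, qa):
    """Architect designs once; the Engineer/QA pair loops on failure,
    up to MAX_QA_RETRIES attempts before escalating to the user."""
    blueprint = architect(description)
    feedback = None
    for _ in range(MAX_QA_RETRIES):
        workflow = engineer(blueprint, feedback)  # feedback guides the fix
        ok, feedback = qa(workflow)
        if ok:
            return workflow  # validated workflow
    raise RuntimeError(f"QA failed after {MAX_QA_RETRIES} attempts: {feedback}")
```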

Output
./workflows/

workflow.json
README.md (+ Mermaid diagram)


CLI Reference

Every command you need

The full workflow lifecycle — from first generation to production deployment.

n8m create

Generate a complete workflow from a plain-English description. The agentic pipeline handles design, build, and validation end-to-end.

$ n8m create "Slack DM on new GitHub PR"
$ n8m create --multiline
$ n8m create "..." --output ./my-flow.json
n8m modify

Modify a local file or live n8n workflow using natural language. Interactively browse local files and remote instance workflows.

$ n8m modify ./flow.json
    "Add error handling to HTTP node"
n8m test

Deploy ephemerally and validate. AI self-repair loop analyzes errors and applies targeted fixes automatically. Save fixtures for offline replay.

$ n8m test --ai-scenarios
$ n8m test --fixture .n8m/fixtures/abc
n8m deploy

Push a local workflow JSON to your n8n instance. CI-friendly non-interactive flags for create-or-update without prompts.

$ n8m deploy ./flow.json --activate
$ n8m deploy ./flow.json --update # CI
n8m learn

Extract reusable engineering patterns from validated workflows. Import community patterns from GitHub. Patterns feed directly into future Engineer prompts.

$ n8m learn --all
$ n8m learn --github owner/repo
n8m fixture

Capture real execution data from n8n and replay it offline. No live instance, credentials, or external API calls needed for subsequent test runs.

$ n8m fixture capture abc123
$ n8m fixture init abc123
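At its core, fixture testing is a record-and-replay cache: capture real execution data once against the live instance, then serve the saved data on every later run. A minimal sketch of the idea (the file format here is an assumption, not n8m's actual fixture schema):

```python
import json
from pathlib import Path

def capture_fixture(path, execute_live):
    """Record mode: run against the live n8n instance once and save the result."""
    data = execute_live()
    Path(path).write_text(json.dumps(data))
    return data

def replay_fixture(path):
    """Replay mode: serve the recorded data with no live instance,
    credentials, or external API calls."""
    return json.loads(Path(path).read_text())
```

Because the captured file is plain data, it can be committed to the repo and replayed identically on any machine.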
n8m doc

Generate visual documentation for any workflow. Outputs a Mermaid.js flowchart and AI-generated execution summary as README.md.

$ n8m doc ./workflows/my-flow.json
# → writes README.md + Mermaid diagram
n8m mcp

Launch an MCP server exposing all n8m capabilities as tools for Claude Desktop, Cursor, Windsurf, and any MCP-compatible client.

$ n8m mcp
# → 8 tools available over stdio

Why n8m

Built for real automation work

Every design decision prioritizes local-first, no-lock-in, developer-grade workflows.

Local-first, zero lock-in

Credentials and workflow files live on your machine. No cloud service, no account, no telemetry. Works entirely with your existing n8n instance.

Self-healing test loop

When tests fail, the AI repair loop identifies the error type, applies targeted fixes (Code nodes, payloads, binary fields), and retries — no human needed.

Self-improving pattern library

Every validated workflow you run n8m learn on enriches the library. Future generations automatically apply proven approaches to similar problems.

Offline fixture testing

Capture real n8n execution data once, then replay it offline forever. Commit fixtures to your repo — your team runs the same tests without a live instance.

HITL pauses & resume

The agent pauses mid-run for human review. Sessions persist in SQLite — resume interrupted runs at any time with n8m resume <thread-id>.
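Session persistence of this kind typically amounts to a small table keyed by thread id. A minimal SQLite sketch of the save/resume idea (table name and columns are assumptions, not n8m's actual schema):

```python
import json
import sqlite3

def save_session(conn, thread_id, state):
    """Persist the agent's state so an interrupted run can be resumed later."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sessions (thread_id TEXT PRIMARY KEY, state TEXT)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO sessions VALUES (?, ?)",
        (thread_id, json.dumps(state)),
    )
    conn.commit()

def resume_session(conn, thread_id):
    """Load the saved state by thread id, or None if no session exists."""
    row = conn.execute(
        "SELECT state FROM sessions WHERE thread_id = ?", (thread_id,)
    ).fetchone()
    return json.loads(row[0]) if row else None
```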

Bring any AI model

Works with OpenAI, Claude, Gemini, Ollama, Groq, Together, or any OpenAI-compatible API endpoint. Switch providers with one config flag.


AI Providers

Bring your own AI

n8m is provider-agnostic. Configure once, works everywhere.

OpenAI gpt-4o
Anthropic Claude claude-sonnet-4-6
Google Gemini gemini-2.5-flash
Ollama local / llama3
Groq llama-3.3-70b
Any OpenAI-compatible --ai-base-url
$ n8m config --ai-provider anthropic --ai-key sk-ant-...
$ n8m config --n8n-url https://n8n.example.com --n8n-key ...
# credentials saved to ~/.n8m/config.json — persists across npx invocations

MCP Server

Use n8m inside Claude Desktop

n8m exposes 8 tools via the Model Context Protocol. Any MCP client can create, test, and deploy n8n workflows directly from a conversation.

create_workflow, modify_workflow, test_workflow
deploy_workflow, get_workflow, list_workflows
delete_workflow, generate_docs
MCP setup guide
claude_desktop_config.json
{
  "mcpServers": {
    "n8m": {
      "command": "npx",
      "args": ["n8m", "mcp"]
    }
  }
}

Roadmap

What's shipped, what's next

n8m ships fast. Check GitHub for the latest.

Shipped
Agentic graph (Architect → Engineer → QA)
SQLite session persistence
HITL interrupts and resume
Sub-workflow dependency resolution in tests
Multi-provider AI support (OpenAI, Claude, Gemini, Ollama…)
Automatic documentation generation (Mermaid + summary)
Fixture record & replay — offline testing with real data
Pattern library (extract & reuse from validated workflows)
GitHub pattern archive import (n8m learn --github)
MCP server — 8 tools for full workflow lifecycle
Non-interactive deploy flags (CI-ready)
Credential awareness — AI consults live instance
Coming Soon
n8m watch — file-system watcher with live reload
n8m diff — structural diff between workflow versions
Parallel test runs — multiple fixtures concurrently
n8m debug — stream node-by-node output in real time
n8m chat — interactive REPL with incremental AI edits
Workflow linter — static analysis before deploy
Multi-instance deploy — staging and production in one command
Workflow marketplace — publish & install community workflows
LangGraph trace export — full agentic reasoning log


Get started

Ready to stop clicking?

No account. No server. No lock-in. Just your n8n instance and an AI key.

$ npx n8m create "describe your workflow"