
pessini/aria


Automation & Reasoning Intelligent Agent

Open-source LangGraph AI agent for building and operating n8n workflows with a skills-first architecture.


Quick Start · Architecture · Contributing · Security

Aria, powered by Aegra, helps automation builders and agent developers create, modify, and operate n8n workflows through a single assistant. It combines local skill packs, MCP tools, and n8n API access so workflow tasks are guided by domain best practices before tool execution.

Read the full story behind this project: Aria: LangGraph Agent Skills + MCP for n8n Workflows

Why Aria?

Aria packages workflow design knowledge, node configuration details, and reliable execution tooling into one assistant so users can move faster with fewer errors.

Feature Highlights

  • Skills-first workflow guidance before MCP tool calls
  • Direct n8n integration through n8n-mcp
  • Thin runtime overlay for Aegra config and service wiring
  • Modular skills architecture under backend/agents/
  • Optional UI for local manual end-to-end testing
  • Docker-based local stack for backend + n8n + UI

Agent Skills

Aria uses a skills-first architecture built on LangGraph: before the agent calls any tool, it loads the relevant skill pack to ground its reasoning in domain best practices. The skill packs and MCP integration are based on the work of @czlonkowski (n8n-skills, n8n-mcp). The design was inspired by Anthropic's Agent Skills and follows the Agent Skills open format.

Skill Packs

| Skill | Purpose |
| --- | --- |
| n8n-code-javascript | JavaScript code node expertise — built-in functions, common patterns, error handling |
| n8n-code-python | Python code node expertise — data access, standard library, error patterns |
| n8n-expression-syntax | n8n expression language — syntax reference, common mistakes, examples |
| n8n-mcp-tools-expert | MCP tools integration — search, validation, and workflow operation guides |
| n8n-node-configuration | Node setup patterns — operation patterns, dependency management |
| n8n-validation-expert | Error troubleshooting — error catalog, false positive identification |
| n8n-workflow-patterns | Workflow architecture — webhooks, scheduled tasks, HTTP APIs, AI agents, DB operations |

Progressive Disclosure

Skills load in three levels to keep the context window lean:

  1. Catalog — skill names and one-line descriptions are always in the system prompt
  2. Instructions — full SKILL.md loaded on demand via load_skill()
  3. Reference — supporting docs loaded individually via read_skill_file()

New skills are auto-discovered at startup — just drop a folder with a SKILL.md into backend/agents/n8n_agent/skills/ and restart.
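Because discovery is filesystem-based, adding a skill is just a folder drop. A minimal sketch of scaffolding one — the skill name and SKILL.md contents below are illustrative placeholders; only the skills path comes from this README:

```shell
# Scaffold a hypothetical new skill pack; it will be auto-discovered on
# the next backend restart. Name and metadata are placeholders.
SKILL_DIR=backend/agents/n8n_agent/skills/n8n-my-custom-skill
mkdir -p "$SKILL_DIR"
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
---
name: n8n-my-custom-skill
description: One-line description shown in the always-on catalog.
---

Full instructions loaded on demand via load_skill().
EOF
```

Restart the backend afterwards so the catalog picks up the new entry.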

Quick Start

Fast Path (Under 2 Minutes)

Run these commands from repository root:

make backend-cli-install
make n8n-up
cp backend/.env.example backend/.env
# edit backend/.env and set OPENAI_API_KEY + N8N_API_KEY
make backend-up

In a second terminal:

curl http://localhost:4242/health
cp ui/.env.example ui/.env
make ui-docker-up

Expected outcomes:

  • n8n UI at http://localhost:4245
  • backend health check returns 200 OK at http://localhost:4242/health
  • optional UI at http://localhost:4241

Detailed Setup

Prerequisites

  • Docker
  • uv (Python package manager)
  • A running n8n instance with API access enabled (see n8n Setup below)

Important: The agent connects to n8n via its REST API to read workflows, execute actions, and use n8n tools. Without a configured n8n instance the agent starts but cannot do useful work, and the UI will show errors on every message.

Step 1 — Install Aegra CLI

make backend-cli-install

Or directly with uv:

uv tool install aegra-cli==0.7.2

Verify:

aegra --version
# aegra 0.7.2

Step 2 — Start n8n (Terminal 2)

make n8n-up

n8n is available at http://localhost:4245

First-time setup:

  1. Open http://localhost:4245 and complete the owner account setup
  2. Go to Settings → n8n API
  3. Click Create an API key and copy it

Step 3 — Configure Environment

cp backend/.env.example backend/.env

Open backend/.env and fill in:

| Variable | Value |
| --- | --- |
| OPENAI_API_KEY | Your OpenAI key (or switch to Ollama — see LLM Configuration) |
| N8N_API_KEY | The API key you created in Step 2 |

N8N_API_URL is pre-filled to http://localhost:4245/api/v1 and works with make n8n-up as-is.
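To fail fast on an incomplete .env before starting the backend, a small check can help. A sketch — the check_env helper is hypothetical and not part of this repo:

```shell
# Hypothetical helper: verify that the given keys are set to non-empty
# values in an env file; prints each missing key and returns non-zero.
check_env() {
  file=$1; shift
  missing=0
  for key in "$@"; do
    grep -Eq "^${key}=.+" "$file" || { echo "missing or empty: $key"; missing=1; }
  done
  return "$missing"
}

# Usage with the variables from this step:
# check_env backend/.env OPENAI_API_KEY N8N_API_KEY
```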

Step 4 — Start Backend (Terminal 1 — stays in foreground)

make backend-up

This starts n8n-mcp via Docker and then runs aegra dev with your agents loaded. Keep this terminal open; Aegra logs stream here.

Step 5 — Verify Backend (Terminal 2)

curl http://localhost:4242/health

Expected: 200 OK
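Right after make backend-up the server can take a few seconds to come up, so a one-shot curl may fail spuriously. A hypothetical polling helper (wait_for_url is not part of this repo; only the health URL comes from this step):

```shell
# Hypothetical helper: poll a URL until it responds successfully or the
# retry budget is exhausted. Returns 0 on success, 1 on timeout.
wait_for_url() {
  url=$1; tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    curl -sf "$url" >/dev/null && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage with the health endpoint from this step:
# wait_for_url http://localhost:4242/health && echo "backend is up"
```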

Step 6 — Start UI (Optional, Terminal 2)

cp ui/.env.example ui/.env
make ui-docker-up

UI is available at http://localhost:4241

Usage Examples

Typical prompts for Aria:

  • "Create a workflow that receives webhook data and stores it in a database."
  • "Review this workflow JSON and suggest reliability improvements."
  • "Add retry, timeout, and error-handling patterns to my HTTP Request nodes."
  • "Generate a scheduled reporting workflow and explain each node configuration."

Security Notes

  • The UI is for local development only and is not production-safe.
  • The UI stores conversations and app settings in browser IndexedDB (plaintext).
  • Any JavaScript on the same origin can read this local data.
  • See SECURITY.md for vulnerability reporting.

n8n Setup

Local (recommended for development)

The n8n/ folder contains a Docker Compose file for running n8n locally:

make n8n-up    # start
make n8n-down  # stop

Data is persisted in a named Docker volume (aria_n8n_data) so your workflows survive restarts.

Generating an API Key

  1. Open http://localhost:4245
  2. Complete the initial owner account setup if needed
  3. Navigate to Settings → n8n API
  4. Click Create an API key, give it a name, and copy the key
  5. Paste it into backend/.env as N8N_API_KEY

Using an Existing n8n Instance

Point the agent at any reachable n8n instance by setting these in backend/.env:

N8N_API_URL=https://your-n8n.example.com/api/v1
N8N_API_KEY=your-api-key
N8N_WEB_URL=https://your-n8n.example.com

LLM Configuration

Edit backend/.env and activate one of the two options:

Option A — OpenAI (cloud):

LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-5-mini

Option B — Ollama (local, free):

LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=qwen3

Port Reference

| Port | Service |
| --- | --- |
| 4241 | UI (Vite dev server) |
| 4242 | Aegra backend |
| 4244 | n8n-mcp sidecar |
| 4245 | n8n |

Repository Layout

| Path | Purpose |
| --- | --- |
| backend/agents/ | Agent graph, tools, prompts, and skill packs |
| backend/ | Thin runtime overlay (Aegra config + service compose) |
| n8n/ | Local n8n Docker Compose for development |
| ui/ | Optional React harness for manual end-to-end checks (stores local conversations/settings in IndexedDB) |
| scripts/ | CI and local guard/smoke scripts |

Where to start if you are:

  • user/integrator: this README.md
  • contributor: CONTRIBUTING.md
  • maintainer: ARCHITECTURE.md and backend/UPSTREAM.md

Contributing

See CONTRIBUTING.md for contribution guidelines and preferred workflows.

Upstream Pin

This repository does not vendor Aegra source code. See backend/UPSTREAM.md for runtime pinning and upgrade policy.

License

This project is licensed under the GNU General Public License v3.0 only (GPL-3.0-only). See LICENSE.

Acknowledgements

This project builds on the work of:

  • Aegra — the agent runtime framework that powers the backend.
  • n8n-mcp — the MCP server that exposes n8n workflows and tools to the agent.
  • n8n-skills — skill pack patterns and reference implementations the agent skills are based on.
  • Lovable — the UI was scaffolded and developed using Lovable.
