Deploy n8n workflow automation on Kubernetes and Red Hat OpenShift with native AI capabilities, OpenShift MCP Server integration, and Developer Sandbox support.
First-class support for Red Hat OpenShift including Routes, SCCs, and Developer Sandbox compatibility with restricted security contexts.
Integrate with OpenShift AI models like IBM Granite 3.1 via LiteLLM proxy and MCP Server for intelligent infrastructure monitoring.
Optional built-in Mailpit SMTP test server for previewing email reports from workflows without external email infrastructure.
Connect to OpenShift and Kubernetes MCP Servers to query cluster state, analyze pods, deployments, routes, and security posture.
Supports queue mode with Valkey/Redis, worker autoscaling, PostgreSQL backend, ServiceMonitor for Prometheus, and HPA.
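A minimal sketch of enabling queue mode with Valkey and worker autoscaling in values.yaml. The key names below are assumptions based on common chart conventions and the features listed above; verify them against the chart's actual values.yaml before use:

```yaml
# Hypothetical values.yaml excerpt: queue mode with Valkey, workers, and monitoring.
# Key names are illustrative; check the chart's documented values.
valkey:
  enabled: true            # enables the Valkey/Redis queue backend
worker:
  enabled: true
  autoscaling:
    enabled: true          # HPA for worker pods
    minReplicas: 1
    maxReplicas: 5
main:
  config:
    executions:
      mode: queue          # n8n's EXECUTIONS_MODE=queue
serviceMonitor:
  enabled: true            # Prometheus ServiceMonitor
```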
Non-root containers, restricted SCC support, enableServiceLinks control, and proper RBAC with dynamic naming.
| Component | Description |
|---|---|
| n8n | Workflow automation engine deployed via Helm |
| LiteLLM | OpenAI-compatible proxy routing to Granite/Qwen models |
| OpenShift MCP Server | MCP server exposing OpenShift/Kubernetes API as tools |
| K8s MCP Server | Additional Kubernetes-native MCP tool server |
| Mailpit | Lightweight SMTP test server with web UI (optional) |
For Red Hat OpenShift Developer Sandbox, use these values to ensure compatibility with restricted SCCs:
| Setting | Value | Reason |
|---|---|---|
| image.variant | ubi | Uses Red Hat UBI image with curl for workflow downloads |
| enableServiceLinks | false | Avoids N8N_PORT env conflict in OpenShift |
| route.sccRoleDisabled | true | Developer Sandbox users cannot create SCC Roles |
| main.config.n8n.user_folder | /data | Writable path for the random UID assigned by OpenShift |
| main.persistence.mountPath | /data | Mounts the PVC at a writable path instead of /home/node/.n8n |
| podSecurityContext | {} | No fsGroup (restricted SCC) |
| main.persistence.storageClass | gp3-csi | Sandbox default StorageClass |
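The table above can be expressed as a values.yaml fragment. The nesting below follows the dotted setting paths in the table; the exact structure may differ slightly in the chart, so treat this as a sketch:

```yaml
# Developer Sandbox values sketch, derived from the settings table above
image:
  variant: ubi
enableServiceLinks: false
route:
  sccRoleDisabled: true
podSecurityContext: {}      # no fsGroup under the restricted SCC
main:
  config:
    n8n:
      user_folder: /data
  persistence:
    mountPath: /data
    storageClass: gp3-csi
```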
Each workflow follows a 5-node pipeline: it uses the MCP Streamable HTTP protocol with full session handling (initialize → notifications/initialized → tools/call with Mcp-Session-Id), runs AI-powered analysis via LiteLLM/Granite, and delivers a branded HTML email report through Mailpit:
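The three-step session handshake can be sketched as plain JSON-RPC payloads. In the actual workflows these calls are made by n8n HTTP Request nodes; the helper names below are illustrative, and the protocol version string is an assumption to verify against the MCP servers in use:

```python
# Sketch of the MCP Streamable HTTP session used by each workflow:
# 1. initialize  ->  2. notifications/initialized  ->  3. tools/call

def initialize_request() -> dict:
    """JSON-RPC 'initialize' opens the session; the server's response
    carries an Mcp-Session-Id header that must be echoed on later calls."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # assumed; match your server
            "capabilities": {},
            "clientInfo": {"name": "n8n-workflow", "version": "1.0"},
        },
    }

def initialized_notification() -> dict:
    """A notification (no 'id') confirming client initialization."""
    return {"jsonrpc": "2.0", "method": "notifications/initialized"}

def tools_call_request(tool: str, arguments: dict) -> dict:
    """Invoke an MCP tool such as monitorDeployments or helm_list."""
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

def session_headers(session_id: str) -> dict:
    """The session id from step 1 travels in a header on steps 2 and 3."""
    return {"Content-Type": "application/json", "Mcp-Session-Id": session_id}
```

Steps 2 and 3 reuse `session_headers(...)` with the id returned by the initialize response; this is what keeps the stateful Streamable HTTP session alive across the workflow's nodes.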
Calls monitorDeployments via Quarkus MCP Server to retrieve deployment health, replica counts, and rollout status. AI formats and explains the output via LiteLLM/Granite, then delivers a branded HTML email report via Mailpit.
Calls pods_list_in_namespace via K8s MCP Server to list all pods with status, readiness, restarts, and node placement. AI analyzes the pod inventory and provides health assessment via LiteLLM/Granite.
Calls analyzePodDisruptions via Quarkus MCP Server to detect evictions, OOM kills, and restart patterns in the last 24 hours. AI provides a structured disruption analysis with recommendations via LiteLLM/Granite.
Calls events_list via K8s MCP Server to list Kubernetes events (warnings, errors, state changes) for the namespace. AI detects anomalies and highlights critical events via LiteLLM/Granite.
Calls resources_list (Route) via K8s MCP Server to inventory OpenShift Routes with hosts, TLS termination, and target services. AI summarizes route configuration and TLS status via LiteLLM/Granite.
Calls getPerformanceMetrics via Quarkus MCP Server to retrieve CPU/memory usage metrics for nodes and pods in the namespace. AI analyzes resource utilization and provides optimization recommendations via LiteLLM/Granite.
Calls helm_list via K8s MCP Server to inventory all Helm releases in the namespace with chart versions, app versions, and deployment status. AI formats the release inventory with health assessment via LiteLLM/Granite.
| # | Workflow | MCP Tool | MCP Server | AI Model | Protocol |
|---|---|---|---|---|---|
| 1 | Deployment Monitor | monitorDeployments | Quarkus MCP (8080) | Granite (LiteLLM) | Streamable HTTP |
| 2 | Pod Status | pods_list_in_namespace | K8s MCP (8085) | Granite (LiteLLM) | Streamable HTTP + SSE |
| 3 | Pod Disruption Analyzer | analyzePodDisruptions | Quarkus MCP (8080) | Granite (LiteLLM) | Streamable HTTP |
| 4 | Event Monitor | events_list | K8s MCP (8085) | Granite (LiteLLM) | Streamable HTTP + SSE |
| 5 | Route Monitor | resources_list | K8s MCP (8085) | Granite (LiteLLM) | Streamable HTTP + SSE |
| 6 | Performance Metrics | getPerformanceMetrics | Quarkus MCP (8080) | Granite (LiteLLM) | Streamable HTTP |
| 7 | Helm Releases | helm_list | K8s MCP (8085) | Granite (LiteLLM) | Streamable HTTP + SSE |
All workflow JSON files are available in the workflows directory or in the n8n-sandbox repository.
When Mailpit is enabled, n8n workflows can send branded HTML email reports that are captured and viewable in the Mailpit web UI. Configure n8n SMTP credentials to point to the Mailpit service:
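A sketch of the SMTP credential values in n8n. The service hostname pattern is an assumption based on typical Helm naming; port 1025 is Mailpit's default SMTP port:

```text
Host:      <release-name>-mailpit.<namespace>.svc.cluster.local   # assumed service name
Port:      1025        # Mailpit's default SMTP listener
SSL/TLS:   disabled    # Mailpit accepts plain SMTP for testing
User/Pass: (empty)     # no authentication required by default
```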
Access the Mailpit web UI via its OpenShift Route to view all captured email reports from your workflows.
The workflows above require the OpenShift MCP Server Helm chart deployed in your cluster. It provides a dual MCP server deployment: a custom Quarkus server (19 operational tools) and the official openshift/openshift-mcp-server (20+ Kubernetes tools), plus an MCP Inspector UI and LiteLLM proxy.
| Component | Port | Description |
|---|---|---|
| Quarkus MCP Server | 8080 | 19 tools: monitoring, deployment, performance testing |
| K8s MCP Server | 8085 | 20+ tools: CRUD, pods, helm, events, nodes |
| MCP Inspector | 8080 | Web UI for testing MCP tools interactively |
| LiteLLM Proxy | 4000 | OpenAI-compatible proxy for Granite/Qwen3 models |
Full documentation: maximilianopizarro.github.io/openshift-mcp-server
Configuration under main.config: and main.secret: in values.yaml is transformed 1:1 into Kubernetes environment variables:
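As an illustration of that 1:1 mapping, nested values keys become upper-snake-case environment variable names (for example, n8n.user_folder becomes N8N_USER_FOLDER). The exact join rule is an assumption inferred from n8n's variable names; check the chart's templates for the authoritative behavior:

```python
def to_env(config: dict, prefix: str = "") -> dict:
    """Flatten nested values-style config into UPPER_SNAKE env names.
    Illustrative only: the chart's template is the source of truth."""
    env = {}
    for key, value in config.items():
        name = f"{prefix}_{key}" if prefix else key
        if isinstance(value, dict):
            env.update(to_env(value, name))   # recurse into nested sections
        else:
            env[name.upper()] = str(value)    # leaf -> env var
    return env

# Example: the Developer Sandbox setting main.config.n8n.user_folder
print(to_env({"n8n": {"user_folder": "/data"}}))
```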
Consult the n8n Environment Variables Documentation.
A Red Hat UBI 9-based container image is available at quay.io/maximilianopizarro/n8n. It uses a 3-stage build that extracts n8n from the official Docker Hub image, rebuilds native modules (sqlite3) for Node.js 22 + glibc, and packages everything on registry.access.redhat.com/ubi9/nodejs-22-minimal.
The image is built automatically via GitHub Actions on every push to main and published to Quay.io.
You can build and test the UBI container image locally using Podman (or Docker) before deploying to a cluster:
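A minimal local run might look like the following. The port flags and the Mailpit image reference (docker.io/axllent/mailpit) are assumptions to adjust for your setup:

```shell
# Run the published UBI image locally with the n8n editor on port 5678
podman run -d --name n8n -p 5678:5678 quay.io/maximilianopizarro/n8n

# Optionally run Mailpit alongside for SMTP testing (web UI on 8025, SMTP on 1025)
podman run -d --name mailpit -p 8025:8025 -p 1025:1025 docker.io/axllent/mailpit
```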
Open http://localhost:5678 in your browser to access the n8n editor.
Access Mailpit at http://localhost:8025 to view captured emails.
| Build Stage | Base Image | Purpose |
|---|---|---|
| 1. Source | n8nio/n8n:1.123.28 | Extract n8n node_modules |
| 2. Builder | ubi9/nodejs-22 | Rebuild sqlite3 native module for Node.js 22 + glibc |
| 3. Runtime | ubi9/nodejs-22-minimal | Minimal production image (~350MB) |
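The 3-stage build in the table can be sketched as a Containerfile. Stage names, copy paths, and the final CMD are illustrative, not the repository's actual build file:

```dockerfile
# Sketch of the 3-stage UBI build described above (paths are assumptions)
FROM docker.io/n8nio/n8n:1.123.28 AS source

FROM registry.access.redhat.com/ubi9/nodejs-22 AS builder
COPY --from=source /usr/local/lib/node_modules/n8n /opt/n8n
# Rebuild the sqlite3 native module against this image's Node.js 22 + glibc
RUN cd /opt/n8n && npm rebuild sqlite3

FROM registry.access.redhat.com/ubi9/nodejs-22-minimal
COPY --from=builder /opt/n8n /opt/n8n
ENV PATH=/opt/n8n/bin:$PATH
EXPOSE 5678
CMD ["n8n", "start"]
```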
| Requirement | Version |
|---|---|
| Kubernetes | >= 1.20.0 |
| Helm | >= 3.8 |
| Database | SQLite (embedded) or PostgreSQL |
| Dependency | Version | Condition |
|---|---|---|
| Valkey (Bitnami) | 2.4.7 | valkey.enabled |
- MCP workflow tools (monitorDeployments, pods_list_in_namespace, analyzePodDisruptions, events_list, resources_list, getPerformanceMetrics, helm_list) and branded HTML email reports via the Mailpit API
- enableServiceLinks: false to avoid the N8N_PORT env conflict, route.sccRoleDisabled for restricted RBAC, and an empty podSecurityContext for the random UID
- UBI image based on registry.access.redhat.com/ubi9/nodejs-22, published at quay.io/maximilianopizarro/n8n via GitHub Actions
- main.persistence.mountPath for writable paths in restricted environments
- Fixed the ClusterIP_ typo in the service template
- Updated role.yaml to use chart naming helpers (supports n8n-dev-east style release names)

Actual email reports generated by the MCP workflows and captured in Mailpit: