This workflow implements an AI-powered incident investigation and root cause analysis system that automatically analyzes operational signals when a system incident occurs.
When an incident is triggered via webhook, the workflow gathers operational context including application logs, system metrics, recent deployments, and feature flag changes. These signals are processed to detect error patterns, cluster similar failures, and correlate them with recent system changes.
The workflow uses vector embeddings to group similar log messages, allowing it to detect dominant failure patterns across services. It then aligns these failures with contextual events such as deployments, configuration changes, or traffic spikes to identify potential causal relationships.
An AI agent analyzes all available evidence and generates structured root cause hypotheses, including confidence scores, supporting evidence, and recommended remediation actions.
Finally, the workflow posts a detailed incident report to Slack, so engineering teams can quickly understand the issue and respond.
This architecture helps teams reduce mean time to resolution (MTTR) by automating the early stages of incident investigation.
The workflow begins when an incident alert is received through a webhook endpoint.
The webhook payload may include information such as:
This event starts the automated investigation process.
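The template does not mandate a payload schema; a hypothetical example of what an alerting system might send is shown below (all field names are illustrative, not a fixed contract):

```json
{
  "incident_id": "INC-1042",
  "severity": "critical",
  "service": "checkout-api",
  "triggered_at": "2024-05-01T14:32:00Z",
  "summary": "Error rate above 5% for 10 minutes"
}
```

Whatever fields the caller sends are available to downstream nodes as the webhook's JSON body.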
A configuration node defines the operational parameters used throughout the workflow, including:
This allows the workflow to be easily adapted to different observability stacks.
The workflow collects system context from multiple sources:
Gathering this information provides the signals required to understand what happened before and during the incident.
Raw logs are processed to remove low-value entries such as debug or informational messages.
The workflow extracts structured error information including:
This step ensures that only relevant failure signals are analyzed.
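The filtering and extraction step can be sketched as follows. This is a minimal illustration, assuming each log entry carries hypothetical `level`, `message`, `service`, and `timestamp` fields; the template's actual field names may differ.

```javascript
// Levels treated as low-value noise and dropped before analysis.
const LOW_VALUE_LEVELS = new Set(["debug", "info", "trace"]);

function extractErrorSignals(logs) {
  return logs
    .filter((entry) => !LOW_VALUE_LEVELS.has(entry.level.toLowerCase()))
    .map((entry) => ({
      service: entry.service,
      timestamp: entry.timestamp,
      // Strip volatile details (numbers, ids) so identical failures
      // normalize to the same message template.
      template: entry.message.replace(/\b\d+\b/g, "<num>"),
    }));
}

const sample = [
  { level: "DEBUG", message: "heartbeat ok", service: "api", timestamp: 1 },
  { level: "ERROR", message: "timeout after 5000 ms", service: "api", timestamp: 2 },
];
console.log(extractErrorSignals(sample));
// → [{ service: "api", timestamp: 2, template: "timeout after <num> ms" }]
```

Normalizing messages into templates before embedding them keeps near-identical errors (differing only in request ids or durations) from being treated as distinct patterns.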
Error messages are converted into embeddings using OpenAI.
The workflow stores these embeddings in an in-memory vector store to group similar log messages together.
This clustering step identifies dominant failure patterns that may appear across multiple sessions or services.
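The grouping logic can be approximated with a greedy cosine-similarity pass, assuming embedding vectors have already been fetched from OpenAI for each normalized error message. The threshold value here is an assumption for illustration, not the template's setting.

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Greedy clustering: assign each item to the first cluster whose
// centroid is similar enough, otherwise start a new cluster.
function clusterEmbeddings(items, threshold = 0.85) {
  const clusters = []; // each: { centroid, members }
  for (const item of items) {
    const match = clusters.find(
      (c) => cosine(c.centroid, item.embedding) >= threshold
    );
    if (match) match.members.push(item);
    else clusters.push({ centroid: item.embedding, members: [item] });
  }
  return clusters;
}
```

A production vector store would use approximate nearest-neighbor search instead of this linear scan, but the grouping principle is the same.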
The clustered log data is then analyzed to identify recurring error types and the dominant failure clusters.
The workflow calculates statistics such as:
These insights help highlight the primary issues affecting the system.
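A sketch of this summarization step, assuming the cluster shape produced in the grouping step (`members` entries with `template` and `service` fields, both assumed names):

```javascript
// Summarize clusters into per-pattern statistics, sorted so the
// dominant failure pattern comes first.
function summarizeClusters(clusters, totalErrors) {
  return clusters
    .map((c) => ({
      template: c.members[0].template,
      count: c.members.length,
      share: c.members.length / totalErrors,
      services: [...new Set(c.members.map((m) => m.service))],
    }))
    .sort((a, b) => b.count - a.count);
}
```

The resulting ranking is what lets the report lead with the pattern responsible for most of the observed errors.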
Failure patterns are then aligned with contextual events such as:
The workflow calculates correlation scores based on temporal proximity and assigns likelihood scores to potential causes.
This allows the system to identify events that may have triggered the incident.
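One simple way to express temporal proximity as a score is exponential decay, shown below. This is a simplifying assumption for illustration, not necessarily the template's exact formula; the half-life parameter is hypothetical.

```javascript
// Score a candidate cause by how close it is, in time, to the first
// observed failure. Timestamps are in milliseconds since epoch.
function temporalScore(eventTime, failureTime, halfLifeMinutes = 15) {
  // An event that happened after the failure cannot have caused it.
  if (eventTime > failureTime) return 0;
  const deltaMin = (failureTime - eventTime) / 60000;
  // Halve the score for every `halfLifeMinutes` of separation.
  return Math.pow(0.5, deltaMin / halfLifeMinutes);
}
```

Under this scheme a deployment 15 minutes before the failure scores 0.5, while one an hour earlier scores only 0.0625, so recent changes naturally dominate the candidate list.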
An AI agent analyzes the collected signals and generates structured root cause hypotheses.
The agent considers:
The output includes:
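A hypothetical example of what one structured hypothesis might look like (the field names are illustrative, not the agent's fixed schema):

```json
{
  "hypothesis": "Deployment of checkout-api v2.4.1 introduced a connection-pool regression",
  "confidence": 0.78,
  "supporting_evidence": [
    "Timeout errors began 4 minutes after the deployment completed",
    "Errors are concentrated in the checkout-api service"
  ],
  "recommended_actions": [
    "Roll back checkout-api to v2.4.0",
    "Review connection-pool settings introduced in the release"
  ]
}
```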
The final analysis is formatted into a structured incident report and posted to Slack.
The Slack message contains:
This enables engineers to quickly review the investigation results and take action.
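Formatting the report for Slack might look like the sketch below, which builds Slack Block Kit blocks from a report object. The `title`, `hypotheses`, `cause`, `confidence`, and `evidence` names are assumptions for illustration, not the template's schema.

```javascript
// Build a Slack Block Kit message: a header followed by one section
// per root cause hypothesis.
function buildSlackBlocks(report) {
  return [
    {
      type: "header",
      text: { type: "plain_text", text: `🚨 ${report.title}` },
    },
    ...report.hypotheses.map((h) => ({
      type: "section",
      text: {
        type: "mrkdwn",
        text: `*${h.cause}* (confidence ${Math.round(h.confidence * 100)}%)\n${h.evidence}`,
      },
    })),
  ];
}
```

The resulting `blocks` array is what a Slack node posts to the configured channel.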
Update the Workflow Configuration node with API endpoints for:
These APIs should return JSON responses containing recent operational data.
Add OpenAI credentials for:
These are used for log clustering and root cause analysis.
Add Slack credentials and specify the Slack channel ID in the configuration node.
Incident reports will be posted automatically to this channel.
Deploy the webhook endpoint generated by the Incident Trigger node.
Your monitoring or alerting system (PagerDuty, Grafana, Datadog, etc.) can call this webhook when incidents occur.
Once configured, activate the workflow in n8n.
When incidents are triggered, the workflow will automatically run the investigation pipeline and generate a Slack incident report.
Automatically analyze operational signals when alerts are triggered to identify possible causes.
Provide engineers with AI-generated root cause hypotheses and investigation insights.
Detect whether a recent deployment or configuration change caused a system failure.
Combine logs, metrics, and system events to produce a unified incident analysis.
Reduce mean time to resolution (MTTR) by automating the early stages of incident debugging.