Protect your workflows with n8n's native Guardrails node, placed before and after your AI step. The input guardrails catch jailbreak attempts and PII before they reach your model. The output guardrails scan AI responses for NSFW content and secret keys before they reach your users.
Why it matters
Guardrails are the seatbelts of your AI workflow. You hope you don't need them, but when a user sends a prompt injection attempt or the AI leaks sensitive data, you'll be glad they're there. This template uses n8n's dedicated Guardrails node to make safety checks a first-class part of your workflow without writing custom validation code.
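To make the idea concrete, here is a minimal Python sketch of the kinds of checks an input and output guardrail performs. The patterns, marker phrases, and function names are illustrative assumptions for this example, not n8n's actual Guardrails implementation, which uses far richer detectors:

```python
import re

# Illustrative detectors only -- real guardrails use much richer checks.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
JAILBREAK_MARKERS = ["ignore previous instructions", "pretend you have no rules"]
SECRET_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def check_input(prompt: str) -> list[str]:
    """Scan user input for PII and jailbreak attempts before it reaches the model."""
    violations = [f"pii:{name}" for name, pat in PII_PATTERNS.items()
                  if pat.search(prompt)]
    lowered = prompt.lower()
    violations += [f"jailbreak:{m}" for m in JAILBREAK_MARKERS if m in lowered]
    return violations

def check_output(response: str) -> list[str]:
    """Scan the model's response for leaked secrets before it reaches the user."""
    return [f"secret:{name}" for name, pat in SECRET_PATTERNS.items()
            if pat.search(response)]
```

In a workflow, `check_input` would sit between the trigger and the AI step, and `check_output` between the AI step and the final response, with any non-empty violation list routed to an error branch instead of continuing.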