Last update
Last update 11 hours ago
Categories
Share
This workflow provides a complete testing rig for evaluating text against seven essential AI guardrails used in production systems.
It helps you detect jailbreak attempts, PII exposure, NSFW content, secret key leaks, malicious URLs, topical misalignment, and keyword violations.
Use the included Google Sheet or CSV to batch-test multiple inputs instantly.
The workflow reads each test entry (Guardrail_Type + Input_Text) from a Google Sheet or CSV.
A Switch node sends the text to the appropriate guardrail:
Each guardrail uses Google Gemini to return:
Three sanitizers demonstrate how to clean unsafe text:
Each guardrail node outputs clean JSON, making debugging fast and transparent.
Use either:
Update only:
Create an OAuth2 credential → paste the Google JSON → connect your account.
Go to Credentials → Google Gemini (PaLM API) →
Paste your API key → attach it to all Guardrail nodes.
They visually explain:
Click Execute Workflow and inspect:
The included dataset allows instant testing:
Template Author: Sandeep Patharkar
Category: AI Safety / Agent Security
Difficulty: Intermediate
Estimated Setup Time: 10–15 minutes
Tags: Guardrails, AI Agents, Safety, Enterprise
Author: Sandeep Patharkar**
🔗 LinkedIn: https://www.linkedin.com/in/sandeeppatharkar
🏠 Skool AIC+: https://www.skool.com/aic-plus