This workflow automates platform trust and safety operations by deploying a multi-agent AI system that detects abuse signals, investigates behaviour, scores risk, checks policy compliance, and enforces actions automatically. Designed for platform safety teams, content moderation managers, and compliance officers, it eliminates manual triage delays and ensures high-severity violations are actioned immediately.

An abuse signal webhook triggers behaviour analysis via OpenAI, which classifies signals by severity. A routing node directs cases to a Governance Agent, which orchestrates Investigation, Risk Scoring, and Policy Compliance Checker sub-agents. Enforcement data is then prepared and routed by action type: logging to abuse records, alerting the security team via Slack, sending escalation emails, or triggering auto-enforcement actions based on threshold checks. Finally, all outcomes are logged.
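The classify-then-route flow described above can be sketched roughly as follows. This is a minimal illustration, not the workflow's actual node configuration: the `Signal` type, severity labels, action names, and the `AUTO_ENFORCE_THRESHOLD` value are all assumptions made for the example.

```python
from dataclasses import dataclass

# Assumed risk-score cutoff gating automatic enforcement (illustrative value).
AUTO_ENFORCE_THRESHOLD = 0.85

@dataclass
class Signal:
    user_id: str
    severity: str      # e.g. "low", "medium", "high" (from the LLM classifier)
    risk_score: float  # 0.0-1.0, as produced by a risk-scoring step

def route_enforcement(signal: Signal) -> list[str]:
    """Mirror the routing node: pick enforcement actions by severity and score."""
    actions = ["log_abuse_record"]  # every case is logged to abuse records
    if signal.severity == "high":
        # High-severity cases alert the security team and escalate by email.
        actions.append("alert_security_slack")
        actions.append("send_escalation_email")
    # A threshold check gates automatic enforcement.
    if signal.risk_score >= AUTO_ENFORCE_THRESHOLD:
        actions.append("auto_enforce")
    return actions

# Example: a high-severity, high-risk signal triggers the full action set.
print(route_enforcement(Signal("user-123", "high", 0.92)))
```

In the real workflow these branches would fan out to separate nodes (database write, Slack message, email, enforcement API call) rather than returning a list, but the conditional structure is the same.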