This is a template for n8n's evaluation feature.
Evaluation is a technique for gaining confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow.
By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.
This template shows how to calculate a workflow evaluation metric: whether a category matches the expected one.
The workflow takes support tickets and generates a category and priority, which are then compared with the correct answers in the dataset.
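The comparison described above can be sketched as a simple per-item metric, for example in a JavaScript Code node. This is an illustrative sketch, not the template's actual implementation: the field names (`predictedCategory`, `expectedCategory`) are assumptions, and a real dataset may name its columns differently.

```javascript
// Hypothetical sketch of a "category match" evaluation metric:
// score 1 when the generated category equals the expected one, else 0,
// plus an overall accuracy across the test dataset.
function categoryMatchMetric(items) {
  const scored = items.map((item) => ({
    ...item,
    // Normalize case and whitespace before comparing
    categoryMatch:
      item.predictedCategory.trim().toLowerCase() ===
      item.expectedCategory.trim().toLowerCase()
        ? 1
        : 0,
  }));
  const accuracy =
    scored.reduce((sum, item) => sum + item.categoryMatch, 0) / scored.length;
  return { scored, accuracy };
}

// Example: two test tickets, one categorized correctly
const { scored, accuracy } = categoryMatchMetric([
  { predictedCategory: "Billing", expectedCategory: "billing" },
  { predictedCategory: "Bug", expectedCategory: "Feature request" },
]);
```

Averaging the per-item scores gives a single number you can track across workflow changes, which is the point of running evaluations repeatedly.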