This is a template for n8n's evaluation feature.
Evaluation is a technique for gaining confidence that your AI workflow performs reliably, by running a test dataset of different inputs through the workflow.
By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.
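For illustration, such a dataset pairs each input with the output you expect. A couple of hypothetical rows (invented for this sketch, not taken from the template's actual dataset) might look like this:

```typescript
// Hypothetical dataset rows (invented for illustration): each pairs an
// input question with the reference answer used to score the workflow's output.
const dataset = [
  {
    question: "What caused the fall of the Western Roman Empire?",
    expectedAnswer:
      "A combination of barbarian invasions, economic decline, and political instability.",
  },
  {
    question: "What triggered the outbreak of World War I?",
    expectedAnswer:
      "The assassination of Archduke Franz Ferdinand in Sarajevo in 1914.",
  },
];
```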
This template shows how to calculate a workflow evaluation metric: whether an output matches an expected output (i.e. has the same meaning).
The workflow takes questions about the causes of historical events, answers them, and compares those answers with the reference answers in the dataset.
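A "same meaning" check like this is typically delegated to an LLM acting as a judge. The following is a minimal sketch of that idea, assuming an OpenAI-compatible chat completions endpoint and an `OPENAI_API_KEY` environment variable (both assumptions for illustration, not details of this template; n8n's evaluation nodes handle this inside the workflow):

```typescript
// Minimal sketch of an LLM-as-judge "same meaning" check. Assumes an
// OpenAI-compatible chat completions endpoint and an OPENAI_API_KEY
// environment variable; n8n's evaluation nodes handle this inside the
// workflow, so this only illustrates the underlying idea.
async function sameMeaning(actual: string, expected: string): Promise<boolean> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      // Ask the judge model for a strict JSON verdict so the result is machine-readable.
      response_format: { type: "json_object" },
      messages: [
        {
          role: "user",
          content:
            'Do these two answers have the same meaning? Reply only with JSON {"match": true} or {"match": false}.\n' +
            `Answer A: ${actual}\nAnswer B: ${expected}`,
        },
      ],
    }),
  });
  const data = await response.json();
  // The judge's verdict arrives as a JSON string in the message content.
  return JSON.parse(data.choices[0].message.content).match === true;
}
```

Each dataset row could then be scored with `sameMeaning(workflowOutput, row.expectedAnswer)`, yielding the per-input metric described above.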