
Evaluation Metric: Summarization

Created by: Jimleuk

Last update: 23 days ago



This n8n template demonstrates how to calculate the "Summarization" evaluation metric, which in this scenario measures the LLM's accuracy and faithfulness when producing summaries of an incoming YouTube transcript.

The scoring approach is adapted from https://cloud.google.com/vertex-ai/generative-ai/docs/models/metrics-templates#pointwise_summarization_quality

How it works

  • This evaluation works best for AI summarization workflows.
  • For scoring, we simply compare the generated summary to the original transcript.
  • A key factor is looking out for information in the response which is not mentioned in the source documents.
  • A high score indicates LLM adherence and alignment, whereas a low score could signal an inadequate prompt or model hallucination.
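The comparison above can be sketched as a judge prompt plus a score parser, for example inside an n8n Code node that feeds an LLM node. This is a minimal illustration, not the template's actual implementation: the function names, prompt wording, and 1-5 scale are assumptions loosely adapted from the Vertex AI metric template linked above.

```javascript
// Hypothetical sketch: build a pointwise summarization-quality prompt
// for a judge LLM, then parse the numeric score from its reply.
// Names and wording are illustrative, not from the template itself.

function buildSummarizationEvalPrompt(transcript, summary) {
  return [
    "You are evaluating the quality of a summary.",
    "Criteria: the summary must be grounded in the transcript;",
    "penalise any information not mentioned in the transcript.",
    "Respond with a single integer score from 1 (poor) to 5 (excellent).",
    "",
    "## Transcript",
    transcript,
    "",
    "## Summary",
    summary,
  ].join("\n");
}

// Pull the first digit in the 1-5 range out of the judge model's reply.
function parseScore(responseText) {
  const match = responseText.match(/[1-5]/);
  return match ? Number(match[0]) : null;
}
```

In an n8n workflow this would typically sit between the transcript/summary items and an LLM node, with the parsed score attached to each item for downstream aggregation.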

Requirements