Back to Templates

Monitor AI Chat Interactions with Gemini 2.5 and Langfuse Tracing

Created by: Eduardo Hales || ehales

Last update 8 days ago

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works

This workflow is a simple AI Agent that connects to Langfuse to send tracing data, which helps monitor LLM interactions.

The main idea is to create a custom LLM model that allows the configuration of callbacks, which LangChain uses to connect to observability applications such as Langfuse.

This is achieved by using the "LangChain Code" node, which:

  • Connects an LLM model sub-node to obtain the model variables (model name, temperature, and provider).
  • Creates a generic LangChain initChatModel with those model parameters.
  • Returns the LLM to be used by the AI Agent node.
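The steps above can be sketched as a LangChain Code node body. This is a minimal sketch, not the template's exact code: it assumes a self-hosted n8n instance where the `langchain` and `langfuse-langchain` npm packages can be required (external modules allowed), and the fallback model name and provider id are illustrative.

```javascript
// Sketch of a LangChain Code node body (assumes self-hosted n8n with
// external modules enabled so these packages can be required).
const { initChatModel } = require("langchain/chat_models/universal");
const { CallbackHandler } = require("langfuse-langchain");

// Read the model settings from the connected LLM model sub-node.
const connectedModel = await this.getInputConnectionData("ai_languageModel", 0);
const modelName = connectedModel?.model ?? "gemini-2.5-flash"; // assumed fallback

// The Langfuse callback handler picks up LANGFUSE_SECRET_KEY,
// LANGFUSE_PUBLIC_KEY and LANGFUSE_BASEURL from the environment.
const langfuseHandler = new CallbackHandler();

// Build a provider-agnostic chat model with the callback attached,
// so every call the AI Agent makes is traced in Langfuse.
const model = await initChatModel(modelName, {
  modelProvider: "google-genai", // assumed provider id for Gemini
  temperature: 0.7,
  callbacks: [langfuseHandler],
});

// Return the model for the AI Agent node to use.
return model;
```

Because initChatModel is provider-agnostic, swapping Gemini for OpenAI or Anthropic only changes the model name and provider id, not the tracing setup.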

📋 Prerequisites

  • Langfuse instance (cloud or self-hosted) with API credentials
  • LLM API key (Gemini, OpenAI, Anthropic, etc.)
  • n8n >= 1.98.0 (required for LangChain code node support in AI Agent)

⚙️ Setup

  1. Add these environment variables to your n8n instance:
# Langfuse configuration
LANGFUSE_SECRET_KEY=your_secret_key
LANGFUSE_PUBLIC_KEY=your_public_key
LANGFUSE_BASEURL=https://cloud.langfuse.com  # or your self-hosted URL

# LLM API key (example for Gemini)
GOOGLE_API_KEY=your_api_key

Alternative: configure these directly in the LangChain Code node if you prefer not to use environment variables.

  2. Import the workflow JSON

  3. Connect your preferred LLM model node

  4. Send a test message to verify tracing appears in Langfuse