Build an MCP Server which answers questions with Retrieval Augmented Generation

Created by: Thomas Janssen (thomasjanssen-tech)

Last update: 14 days ago

Build an MCP Server which has access to a semantic database to perform Retrieval Augmented Generation (RAG)

Tutorial

Click here to watch the full tutorial on YouTube

How it works

This MCP Server has access to a local semantic database (Qdrant) and answers questions that are asked through the MCP Client.
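
To make the retrieval step concrete, here is a minimal Python sketch of an MCP tool that embeds an incoming question with Ollama and searches Qdrant for similar chunks. The template implements this logic as an n8n workflow, so treat this purely as an illustration; the "rag_docs" collection name and the local Qdrant URL are assumptions.

    # Sketch only: the template's real server is an n8n workflow, not this script.
    import ollama
    from mcp.server.fastmcp import FastMCP
    from qdrant_client import QdrantClient

    mcp = FastMCP("rag-server")
    qdrant = QdrantClient(url="http://localhost:6333")  # local Qdrant from the starter kit (assumed URL)

    @mcp.tool()
    def semantic_search(question: str) -> str:
        """Return the most relevant document chunks for a question."""
        # Embed the question with the same model used during ingestion.
        vector = ollama.embeddings(model="mxbai-embed-large", prompt=question)["embedding"]
        # Look up the closest chunks in the (assumed) "rag_docs" collection.
        hits = qdrant.search(collection_name="rag_docs", query_vector=vector, limit=3)
        return "\n\n".join((hit.payload or {}).get("text", "") for hit in hits)

    if __name__ == "__main__":
        mcp.run()  # serve the tool over stdio to the MCP Client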

AI Agent Template

Click here to navigate to the AI Agent n8n workflow which uses this MCP server

Warning

This flow only runs locally and cannot be executed on the n8n cloud platform because it relies on the MCP Client community node.

Installation

  1. Install n8n + Ollama + Qdrant using the Self-hosted AI starter kit.

  2. Make sure to install Llama 3.2 and mxbai-embed-large as the embedding model.

  3. Activate the n8n flow.

  4. Run the "RAG Ingestion Pipeline" and upload some PDF documents (a rough sketch of what this pipeline does follows this list).

How to use it

  1. Run the MCP Client workflow and ask a question. It will be answered either by using the semantic database or by calling the search engine API (see the routing sketch below).
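
The routing decision can be pictured roughly as follows: if the semantic database returns a sufficiently similar chunk, the answer comes from it, otherwise the question falls back to the search engine API. In the template this routing happens inside the n8n workflows, so the Python sketch below is only an illustration; the 0.7 score threshold and the search_engine_api() placeholder are assumptions.

    # Illustration of the "semantic database first, search engine as fallback" idea.
    import ollama
    from qdrant_client import QdrantClient

    qdrant = QdrantClient(url="http://localhost:6333")

    def search_engine_api(question: str) -> str:
        # Placeholder for the external search engine call used by the template;
        # not a real client library call.
        raise NotImplementedError

    def answer(question: str) -> str:
        vector = ollama.embeddings(model="mxbai-embed-large", prompt=question)["embedding"]
        hits = qdrant.search(collection_name="rag_docs", query_vector=vector, limit=1)
        if hits and hits[0].score >= 0.7:  # assumed similarity threshold
            return (hits[0].payload or {}).get("text", "")
        return search_engine_api(question)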

More detailed instructions

Missed a step? Find more detailed instructions here: https://brightdata.com/blog/ai/news-feed-n8n-openai-bright-data