Who is this for?
This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate ingestion of product documentation and enable AI-driven, retrieval-augmented question answering via WhatsApp.
What problem is this workflow solving?
Support agents often spend too much time manually searching lengthy documentation, which leads to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries quickly and accurately over WhatsApp.
What these workflows do
Workflow 1: Document Ingestion & Indexing
Workflow 2: AI-Powered Query & Response via WhatsApp
Setup
Setting up vector embeddings
1- Authenticate Google Docs and provide the URL of the Google Doc containing the product documentation you want to index.
2- Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings. Create a search index on this collection to support vector similarity queries.
3- Ensure the index name matches the one configured in n8n (data_index).
See the example MongoDB search index template below for reference.
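As a minimal sketch, the index can also be created programmatically with pymongo (4.6+). The database and collection names, the embedding field name, and the dimension count (1536, typical of OpenAI text-embedding models) are assumptions here; adjust them to match your embedding model and the field your ingestion workflow writes. Only the index name, data_index, must match the n8n configuration exactly.

```python
# Sketch: create the Atlas Vector Search index used by both workflows.
# Assumed names: database "support_docs", collection "manuals", vector field
# "embedding" with 1536 dimensions. Adjust to your own setup.
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
collection = client["support_docs"]["manuals"]

index_model = SearchIndexModel(
    name="data_index",  # must match the index name configured in n8n
    type="vectorSearch",
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",    # field where vectors are stored
                "numDimensions": 1536,  # must match your embedding model
                "similarity": "cosine",
            },
            # Optional: a filter field for metadata-constrained queries
            {"type": "filter", "path": "source"},
        ]
    },
)
collection.create_search_index(model=index_model)
```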
Setting up chat
1- Authenticate the WhatsApp node with your Meta account credentials to enable message receiving and sending.
2- Connect the MongoDB collection containing the embedded product documentation to the MongoDB Vector Search node used for similarity queries (a sketch of the underlying query appears after these steps).
3- Set up the system prompt in the Knowledge Base Agent node to reflect your company’s tone, answering style, and any business rules, ensuring it references the connected MongoDB collection for context retrieval.
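For reference, the similarity lookup performed by the MongoDB Vector Search node corresponds to an Atlas $vectorSearch aggregation stage. The sketch below can be run with pymongo to verify the index outside n8n; it reuses the collection handle from the index sketch above, and embed() is a hypothetical helper standing in for your embedding model, which must be the same model used at ingestion.

```python
# Sketch of the similarity query behind the MongoDB Vector Search node.
# Reuses `collection` from the index-creation sketch above; embed() is a
# hypothetical helper returning the query embedding from the SAME model
# that produced the stored vectors.
query_vector = embed("How do I factory-reset the device?")

pipeline = [
    {
        "$vectorSearch": {
            "index": "data_index",      # must match the index name in n8n
            "path": "embedding",        # same vector field as in the index
            "queryVector": query_vector,
            "numCandidates": 100,       # illustrative; trades recall for speed
            "limit": 5,                 # top-k chunks handed to the agent
        }
    },
    # Keep only the chunk text, its source, and the similarity score.
    {"$project": {"text": 1, "source": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in collection.aggregate(pipeline):
    print(doc["score"], doc["source"])
```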
Make sure
Both MongoDB nodes (in the ingestion and chat workflows) are connected to the same collection, with:
- An embedding field storing the vector data,
- Relevant metadata fields (e.g., document ID, source), and
- The same vector index name configured (e.g., data_index).
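Concretely, a chunk stored in the shared collection might look like the sketch below. The metadata field names are illustrative and can be renamed, but the vector field's path must match the index definition, and all embeddings must come from a single model.

```python
# Illustrative shape of one stored chunk. Field names other than "embedding"
# (which must match the index's vector path) are assumptions you can rename.
sample_chunk = {
    "text": "Hold the power button for ten seconds to factory-reset the device.",
    "embedding": [0.0123, -0.0456, 0.0789],  # truncated; really 1536 floats
    "document_id": "product-manual-v2",      # metadata: which manual it came from
    "source": "https://docs.google.com/document/d/<doc-id>",
    "chunk_index": 42,                       # metadata: position within the manual
}
```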