🤖 Build a customer service AI chatbot for Facebook Messenger with Google Gemini
📌 Overview
A streamlined Facebook Messenger chatbot powered by AI with conversation memory.
This is a simplified version designed for quick deployment, learning, and testing — not suitable for production environments.
Base workflows:
🎯 What This Workflow Does
✅ Core Features:
- Receives messages from Facebook Messenger via webhook
- Processes user messages with Google Gemini AI
- Maintains conversation context using Simple Memory node
- Automatically responds with AI-generated replies
- Handles webhook verification for Facebook setup
- Sends images or videos to customers through Facebook Messenger (see the sketch below)
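The image/video feature goes through the Messenger Send API. Below is a minimal TypeScript sketch of that call, not the workflow's own code: `sendImage`, `GRAPH_VERSION`, and the token/recipient parameters are illustrative placeholders (in the workflow they come from the "Set Context" node).

```typescript
// Send an image attachment to a user via the Messenger Send API.
const GRAPH_VERSION = "v19.0"; // assumed; use the version your app targets

async function sendImage(recipientId: string, imageUrl: string, pageAccessToken: string) {
  const res = await fetch(
    `https://graph.facebook.com/${GRAPH_VERSION}/me/messages?access_token=${pageAccessToken}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        recipient: { id: recipientId },
        message: {
          attachment: {
            type: "image", // or "video" for video URLs
            payload: { url: imageUrl, is_reusable: true },
          },
        },
      }),
    },
  );
  if (!res.ok) throw new Error(`Send API error: ${res.status} ${await res.text()}`);
  return res.json();
}
```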
🔹 Simplified Approach:
- Memory: Simple Memory node (10-message window)
- Format: Cleans text, strips markdown, truncates replies over 1900 characters (see the sketch after this list)
- Response: Single message delivery
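The Format step amounts to a small cleanup pass. The sketch below illustrates the idea only; `formatReply` and its regexes are illustrative, not the workflow's exact code.

```typescript
// Strip common markdown symbols and hard-truncate long replies.
// MAX_LEN leaves a safety margin below Messenger's 2000-character limit.
const MAX_LEN = 1900;

function formatReply(text: string): string {
  const cleaned = text
    .replace(/[*_~`#>]/g, "")   // strip markdown markers (bold, italics, code, headings, quotes)
    .replace(/\n{3,}/g, "\n\n") // collapse runs of blank lines
    .trim();
  return cleaned.length > MAX_LEN
    ? cleaned.slice(0, MAX_LEN - 1) + "…" // hard cut, no sentence-aware splitting
    : cleaned;
}
```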
⚠️ Limitations & Trade-offs:
- No Smart Batching → fragmented user messages cause spam-like replies
- No Human Takeover Detection → bot continues even when admin joins
- Basic Memory Management → no persistence, not reliable in production
- Basic Text Formatting → strips markdown, truncates long replies abruptly, no smart splitting
🚀 When to Upgrade
Upgrade to full workflows when you need:
- Production deployment with reliability & persistence
- Analytics & tracking (query history, reports)
- Professional formatting (bold, italic, lists, code blocks)
- Handling long messages (>2000 chars)
- Smart batching for fragmented inputs
- Human handoff detection
- Full conversation persistence
Key upgrades available:
⚙️ Setup Requirements
Facebook Setup
- Create a Facebook App at developers.facebook.com
- Add the Messenger product
- Configure the webhook:
  - URL: https://your-domain.com/webhook/your-path
  - Verify token: any secure string you choose (see the handshake sketch below)
  - Subscribe to: messages, messaging_postbacks
- Generate a Page Access Token
- Copy the token into the "Set Context" node
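The verify token is only used during Facebook's one-time webhook verification handshake, which the "Facebook Webhook" / "Confirm Webhook" nodes handle inside this workflow. The Express-style TypeScript sketch below just illustrates the exchange; `VERIFY_TOKEN` and the route path are placeholders.

```typescript
// Facebook verifies the webhook once with a GET request carrying
// hub.mode, hub.verify_token and hub.challenge as query parameters.
import express from "express";

const app = express();
const VERIFY_TOKEN = process.env.VERIFY_TOKEN ?? "my-secure-string"; // placeholder

app.get("/webhook/your-path", (req, res) => {
  const mode = req.query["hub.mode"];
  const token = req.query["hub.verify_token"];
  const challenge = req.query["hub.challenge"];

  if (mode === "subscribe" && token === VERIFY_TOKEN) {
    res.status(200).send(challenge); // echo the challenge back to complete verification
  } else {
    res.sendStatus(403); // token mismatch: refuse
  }
});

app.listen(3000);
```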
n8n Setup
- Import workflow
- Edit the "Set Context" node → update page_access_token
- Configure "Gemini Flash" node credentials
- Deploy and activate the workflow (the webhook URL must be publicly accessible)
🔄 How It Works
User Message → Facebook Webhook → Validation
↓
Set Context (extract user_id, message, token)
↓
Mark Seen → Show Typing
↓
AI Agent (Gemini + 10-message memory)
↓
Format Output (remove markdown, truncate)
↓
Send Response via Facebook API
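The "Mark Seen → Show Typing" step uses Messenger sender actions. A minimal sketch of those calls, with placeholder token and recipient values:

```typescript
// Send a Messenger sender action ("mark_seen", "typing_on", "typing_off")
// before the actual reply; token and recipient id are placeholders.
async function sendAction(
  recipientId: string,
  action: "mark_seen" | "typing_on" | "typing_off",
  pageAccessToken: string,
): Promise<void> {
  await fetch(`https://graph.facebook.com/v19.0/me/messages?access_token=${pageAccessToken}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ recipient: { id: recipientId }, sender_action: action }),
  });
}

// Typical sequence before generating the AI reply:
// await sendAction(userId, "mark_seen", token);
// await sendAction(userId, "typing_on", token);
```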
🏗️ Architecture Overview
Section 1: Webhook & Initial Processing
- Facebook Webhook: handles GET (verification) & POST (messages)
- Confirm Webhook: returns challenge / acknowledges receipt
- Filters for text messages only
- Blocks echo messages sent by the bot itself (see the sketch below)
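A sketch of what that filtering amounts to on the incoming POST payload; the field paths follow the standard Messenger webhook format, while the function name and guard order are illustrative:

```typescript
// Pull the first messaging event out of the webhook POST body and decide
// whether the bot should handle it.
interface MessagingEvent {
  sender: { id: string };
  message?: { mid: string; text?: string; is_echo?: boolean };
}

function shouldProcess(body: any): MessagingEvent | null {
  const event: MessagingEvent | undefined = body?.entry?.[0]?.messaging?.[0];
  if (!event?.message) return null;       // not a message event (delivery, read, postback, ...)
  if (event.message.is_echo) return null; // echo of the bot's own outgoing message
  if (!event.message.text) return null;   // text messages only
  return event;
}
```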
Section 2: AI Processing with Memory
- Set Context: extracts user_id, message, token
- Seen & Typing: sends mark-seen and typing indicators as user feedback
- Conversation Memory: 10-message window with per-user isolation (sketched after this list)
- Process Merged Message: AI Agent with Jenix persona
- Gemini Flash: Google’s AI model for response generation
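Conceptually, the Simple Memory node keeps a sliding window of recent turns keyed per user; in the workflow you only configure sessionKey and contextWindowLength. The sketch below illustrates the idea, not the node's internals:

```typescript
// A per-user sliding window of the last N turns; the real workflow delegates
// this to the Simple Memory node (sessionKey = user_id, contextWindowLength = 10).
type Turn = { role: "user" | "assistant"; text: string };

const WINDOW = 10;
const sessions = new Map<string, Turn[]>(); // keyed by user_id

function remember(userId: string, turn: Turn): Turn[] {
  const history = sessions.get(userId) ?? [];
  history.push(turn);
  const window = history.slice(-WINDOW); // keep only the most recent turns
  sessions.set(userId, window);
  return window; // handed to the AI Agent as conversation context
}
```

Because this window lives in process memory, it disappears on restart and is not shared between workers, which is why the workflow is flagged as testing-only.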
Section 3: Format & Delivery
- Truncates replies that exceed Messenger's 2000-character limit and strips markdown
- Sends the text via the Facebook Graph API (see the sketch below)
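Delivery is a single text message through the same Send API endpoint; a sketch with the same placeholder parameters as above:

```typescript
// Send the formatted reply as a plain text message.
async function sendText(recipientId: string, text: string, pageAccessToken: string): Promise<void> {
  await fetch(`https://graph.facebook.com/v19.0/me/messages?access_token=${pageAccessToken}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      recipient: { id: recipientId },
      messaging_type: "RESPONSE", // replying to a message the user just sent
      message: { text },          // must stay under the 2000-character limit
    }),
  });
}
```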
🎨 Customisation Guide
- Bot Personality: edit system prompt in "Process Merged Message" node
- Memory: adjust contextWindowLength (default 10), change sessionKey if needed
- AI Model: replace Gemini Flash with OpenAI, Anthropic Claude, or other LLMs
📌 Important Notes
⚠️ Production Warning: for testing only; conversation memory is lost on n8n restart and is unreliable in queue mode
📊 No Analytics: no history storage, no reporting
🔧 Format Limitations: long responses are truncated to stay within Messenger's 2000-character limit, markdown is stripped, no complex formatting
🛠️ Troubleshooting
- Bot not responding → check token, webhook accessibility, event subscriptions
- Memory not working → verify session key, ensure not in queue mode, restart workflow
- Messages truncated → adjust system prompt for conciseness, reduce response length
📜 License & Credits
Created by: Nguyễn Thiệu Toàn (Jay Nguyen)