🤖 Human-like Evolution API Agent with Redis & PostgreSQL
This production-ready template builds a sophisticated AI Agent using Evolution API that mimics human interaction patterns. Unlike standard chatbots that reply instantly to every incoming message, this workflow uses a Smart Redis Buffering System: it waits for the user to finish sending their complete thought (consecutive texts, voice notes, or image albums) before processing, creating a natural conversational flow.
It features a Hybrid Memory Architecture: active conversations are cached in Redis for ultra-low latency, while the complete chat history is securely stored in PostgreSQL. To optimize token usage and maintain long-term coherence, a Context Refiner Agent summarizes the conversation history before the Main AI generates a response.
✨ Key Features
- Human-like Buffering: The agent waits a configurable number of seconds to group consecutive messages, voice notes, and media albums into a single context. This prevents fragmented replies and feels like talking to a real person (see the buffering sketch after this list).
- Hybrid Memory: Combines Redis (Hot Cache) for speed and PostgreSQL (Cold Storage) for permanent history.
- Context Refinement: A specialized AI step summarizes past interactions, allowing the Main Agent to understand long conversations without exceeding token limits or increasing costs.
- Multi-Modal Support: Natively handles text, audio transcription, and image analysis via Evolution API.
- Parallel Processing: Manages "typing..." status and session checks in parallel to reduce response latency.
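To make the buffering idea concrete, here is a minimal TypeScript sketch of the kind of last-writer-wins logic the buffering step performs. The ioredis client, the `buffer:<chat_id>` key layout, and the helper name are assumptions for illustration; the actual workflow builds this behaviour with Redis and Wait nodes rather than a single script.

```typescript
import Redis from "ioredis";

// Hypothetical key layout and wait time, mirroring the wait_buffer variable described below.
const redis = new Redis(); // defaults to localhost:6379
const WAIT_BUFFER_SECONDS = 5;

async function bufferIncomingMessage(chatId: string, message: string): Promise<string[] | null> {
  const key = `buffer:${chatId}`;

  // Append the new fragment and remember how long the buffer is at arrival time.
  await redis.rpush(key, message);
  const lengthAtArrival = await redis.llen(key);
  await redis.expire(key, WAIT_BUFFER_SECONDS * 4); // safety TTL so stale buffers expire

  // Wait for the user to stop typing.
  await new Promise((resolve) => setTimeout(resolve, WAIT_BUFFER_SECONDS * 1000));

  // If more fragments arrived while we slept, a newer execution owns the buffer;
  // this one exits quietly so only the latest execution replies.
  const lengthAfterWait = await redis.llen(key);
  if (lengthAfterWait !== lengthAtArrival) return null;

  // We hold the complete thought: drain the buffer and hand it to the agent.
  const fragments = await redis.lrange(key, 0, -1);
  await redis.del(key);
  return fragments;
}
```

Only the execution that still holds the newest fragment after the wait goes on to reply, which is what prevents the agent from answering each fragment separately.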
📋 Requirements
To use this workflow, you must configure the Evolution API correctly:
- Evolution API Instance: You need a running instance of Evolution API.
- n8n Community Node: Install the Evolution API node in your n8n instance.
- Database: A PostgreSQL database for chat history and a Redis instance for the buffer/cache.
- AI Models: API keys for your LLM (OpenAI, Anthropic, or Google Gemini).
⚙️ Setup Instructions
- Install the Node: Go to Settings > Community Nodes in n8n and install n8n-nodes-evolution-api.
- Credentials: Configure credentials for Redis, PostgreSQL, and your AI provider (e.g., OpenAI/Gemini).
- Database Setup: Create a chat_history table in PostgreSQL; its columns must match the Insert node (see the table sketch after this list).
- Redis Connection: Configure your Redis credentials in the workflow nodes.
- Global Variables: Set the following in the "Global Variables" node:
  - wait_buffer: Seconds to wait for the user to stop typing (e.g., 5).
  - wait_conversation: Seconds to keep the cache alive (e.g., 300).
  - max_chat_history: Number of past messages to retrieve.
- Webhook: Point your Evolution API instance to this workflow's Webhook URL.
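If you need a starting point for the Database Setup step, the sketch below creates a plausible chat_history table via the pg client. The column names (session_id, role, content, created_at) are assumptions; rename them to match whatever fields your PostgreSQL Insert node actually writes.

```typescript
import { Client } from "pg";

// Hypothetical schema -- align the columns with your Insert node's fields.
const createChatHistory = `
  CREATE TABLE IF NOT EXISTS chat_history (
    id          SERIAL PRIMARY KEY,
    session_id  TEXT NOT NULL,
    role        TEXT NOT NULL,          -- 'user' or 'assistant'
    content     TEXT NOT NULL,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()
  );
  CREATE INDEX IF NOT EXISTS idx_chat_history_session
    ON chat_history (session_id, created_at);
`;

async function setupDatabase(connectionString: string): Promise<void> {
  const client = new Client({ connectionString });
  await client.connect();
  await client.query(createChatHistory);
  await client.end();
}
```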
🚀 How it Works
- Ingestion: Receives data via Evolution API. Detects if it's text, audio, or an album.
- Smart Buffering: Holds the execution to collect all parts of the user's message (simulating a human reading/listening).
- Context Retrieval: Checks Redis for the active session; if the cache is empty, it falls back to PostgreSQL (see the retrieval sketch after this list).
- Refinement: The Refiner Agent summarizes the history to extract key details.
- Response: The Main Agent generates a reply based on the refined context and current buffer, then saves it to both Redis and Postgres.
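A rough TypeScript sketch of the hot/cold lookup behind the Context Retrieval step is shown below. The session:<chat_id> key, the chat_history columns, and the ioredis/pg clients are illustrative assumptions; the workflow itself performs these lookups with its Redis and PostgreSQL nodes.

```typescript
import Redis from "ioredis";
import { Client } from "pg";

// Hypothetical names modelled on the description above.
const MAX_CHAT_HISTORY = 20;    // mirrors the max_chat_history variable
const WAIT_CONVERSATION = 300;  // seconds to keep the Redis session alive

type HistoryMessage = { role: string; content: string };

async function getConversationContext(
  redis: Redis,
  pg: Client,
  chatId: string,
): Promise<HistoryMessage[]> {
  const key = `session:${chatId}`;

  // Hot path: an active conversation is already cached in Redis.
  const cached = await redis.get(key);
  if (cached) {
    await redis.expire(key, WAIT_CONVERSATION); // keep the session warm
    return JSON.parse(cached) as HistoryMessage[];
  }

  // Cold path: rebuild the context from permanent PostgreSQL storage.
  const { rows } = await pg.query<HistoryMessage>(
    `SELECT role, content
       FROM chat_history
      WHERE session_id = $1
      ORDER BY created_at DESC
      LIMIT $2`,
    [chatId, MAX_CHAT_HISTORY],
  );
  const history = rows.reverse(); // restore chronological order

  // Re-prime the cache so the next turn takes the hot path.
  await redis.set(key, JSON.stringify(history), "EX", WAIT_CONVERSATION);
  return history;
}
```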
💡 Need Assistance?
If you’d like help customizing or extending this workflow, feel free to reach out:
📧 Email: [email protected]
🔗 LinkedIn: John Alejandro Silva Rodríguez