
Integrate the LangChain Chat Memory Manager into your LLM apps and connect it with 422+ apps and services

Use Chat Memory Manager to easily build AI-powered applications with LangChain and integrate them with 422+ apps and services. n8n lets you seamlessly import data from files, websites, or databases into your LLM-powered application and create automated scenarios.

Popular ways to use the Chat Memory Manager integration

Nodes: Redis, Twilio, and 7 more

Enhance Customer Chat by Buffering Messages with Twilio and Redis

This n8n workflow demonstrates a simple approach to improving chat UX by staggering an AI Agent's reply for users who send a sequence of partial messages in short bursts.

How it works
A Twilio webhook receives the user's messages, which are recorded in a message stack powered by Redis. The execution is immediately paused for 5 seconds, and the stack is then checked again for the latest message. This check tells us whether the user is still sending messages or is waiting for a reply. The execution is aborted if the latest message on the stack differs from the incoming message, and continues if they are the same. In the latter case, the agent receives the buffered messages up to that point and can respond to them in a single reply.

Requirements
  • A Twilio account and an SMS-enabled phone number to receive messages.
  • A Redis instance for the message stack.
  • An OpenAI account for the language model.

Customising the workflow
This workflow should also work for other common messaging platforms such as WhatsApp and Telegram. 5 seconds too long or too short? Adjust the wait threshold to suit your customers.
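For illustration, here is a minimal sketch of that buffering logic as standalone TypeScript rather than n8n nodes. The ioredis client and the generateReply helper are assumptions made for the example; in the template these steps are handled by the Webhook, Redis, Wait and AI Agent nodes.

```typescript
// Sketch of the message-buffering idea, assuming ioredis and a hypothetical
// generateReply() helper; in the workflow these steps are n8n nodes.
import Redis from "ioredis";

const redis = new Redis();
const WAIT_MS = 5000; // the 5-second threshold mentioned above

// Hypothetical stand-in for the AI Agent node.
declare function generateReply(prompt: string): Promise<string>;

async function handleIncomingSms(userId: string, body: string): Promise<string | null> {
  const stackKey = `chat:${userId}:stack`;

  // Record the partial message on the user's message stack.
  await redis.rpush(stackKey, body);

  // Pause, then check whether a newer message has arrived in the meantime.
  await new Promise((resolve) => setTimeout(resolve, WAIT_MS));
  const latest = await redis.lindex(stackKey, -1);

  // A newer message means the user is still typing: abort this execution.
  if (latest !== body) return null;

  // Otherwise reply once to the whole buffered burst and clear the stack.
  const buffered = await redis.lrange(stackKey, 0, -1);
  await redis.del(stackKey);
  return generateReply(buffered.join("\n"));
}
```

Aborting the stale execution is what keeps the agent from answering every partial message; only the execution triggered by the last message in a burst produces a reply.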
Template by Jimleuk
Nodes: HTTP Request, Webhook, Respond to Webhook, and 6 more

AI Voice Chat using Webhook, Memory Manager, OpenAI, Google Gemini & ElevenLabs

Who is this for?
This workflow is designed for businesses or developers looking to integrate voice-based chat applications with dynamic responses and conversational memory.

What problem does this solve?
It automates AI-powered voice conversations, maintaining context between sessions and converting speech to text and text to speech.

What this workflow does
The workflow receives audio input, transcribes it using OpenAI, and processes the conversation using the Google Gemini Chat Model (you can use the OpenAI Chat Model instead). Responses are converted back to speech using ElevenLabs.

Prerequisites
You'll need API keys for:
  • OpenAI (available from the OpenAI website)
  • ElevenLabs (available from their website)
  • Google Gemini (available from Google AI Studio)

Setup
Configure your API keys. Ensure that the value (voice_message) of the "Path" parameter in the Webhook node is used as the name of the parameter that carries the voice message you send via the HTTP POST request.
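As a rough sketch of the pipeline this template wires together, the snippet below transcribes audio, generates a reply, and converts it back to speech. It is not the workflow's exact configuration: the model names, the ElevenLabs voice id, and the use of the OpenAI Chat Model in place of Gemini are assumptions made for the example.

```typescript
// Sketch of the speech-to-text -> chat -> text-to-speech pipeline.
// Model names, the voice id and the webhook plumbing are assumptions.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function voiceChatTurn(voiceMessage: File): Promise<ArrayBuffer> {
  // 1. Transcribe the incoming voice_message (the Webhook "Path" parameter).
  const transcript = await openai.audio.transcriptions.create({
    file: voiceMessage,
    model: "whisper-1",
  });

  // 2. Generate a reply. The template uses Google Gemini; the OpenAI Chat
  //    Model stands in here since the description allows either.
  const chat = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: transcript.text }],
  });
  const replyText = chat.choices[0].message.content ?? "";

  // 3. Convert the reply back to speech with ElevenLabs.
  const tts = await fetch("https://api.elevenlabs.io/v1/text-to-speech/YOUR_VOICE_ID", {
    method: "POST",
    headers: {
      "xi-api-key": process.env.ELEVENLABS_API_KEY ?? "",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text: replyText }),
  });
  return tts.arrayBuffer();
}
```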
Template by Ayoub
Nodes: Aggregate and 4 more

Chat with OpenAI Assistant (by adding a memory)

OpenAI Assistant is a powerful tool, but at the time of writing it doesn't automatically remember past messages from a conversation. This workflow demonstrates how to get around this, by managing the chat history in n8n and passing it to the assistant when required. This makes it possible to use OpenAI Assistant for chatbot use cases. Note that to use this template, you need to be on n8n version 1.28.0 or later.
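A minimal sketch of the general idea is shown below, assuming the history is kept in a plain in-memory map and prepended to each new question. In the template the Chat Memory Manager and Aggregate nodes play this role, and the exact prompt format here is an assumption.

```typescript
// Keep the chat history ourselves and hand it to the assistant on every turn.
// A plain Map stands in for the Chat Memory Manager node used in the template.
type StoredMessage = { role: "user" | "assistant"; content: string };

const sessions = new Map<string, StoredMessage[]>();

function buildAssistantPrompt(sessionId: string, question: string): string {
  const history = sessions.get(sessionId) ?? [];

  // Prepend prior turns so the assistant "remembers" the conversation.
  const transcript = history
    .map((m) => `${m.role === "user" ? "User" : "Assistant"}: ${m.content}`)
    .join("\n");

  return transcript
    ? `Previous conversation:\n${transcript}\n\nUser: ${question}`
    : question;
}

function recordTurn(sessionId: string, question: string, answer: string): void {
  const history = sessions.get(sessionId) ?? [];
  history.push({ role: "user", content: question }, { role: "assistant", content: answer });
  sessions.set(sessionId, history);
}
```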
Template by David Roberts
Nodes: Google Sheets, Redis, Webhook, and 11 more

Conversational Interviews with AI Agents and n8n Forms

This n8n template combines an AI agent with n8n's multi-page forms to create a novel interaction that allows automated question-and-answer sessions. One of the more obvious use cases for this interaction is what I'm calling the AI interviewer.

You can read the full post here: https://community.n8n.io/t/build-your-own-ai-interview-agents-with-n8n-forms/62312
Live demo here: https://jimleuk.app.n8n.cloud/form/driving-lessons-survey

How it works
A form trigger starts the interview and a new session is created in Redis to capture the transcript. An AI agent is then tasked with asking the user questions about the topic of the interview. This is set up as a loop, so the questions never stop unless the user wishes to end the interview. Each answer is recorded between questions in the session we set up earlier. When the user requests to end the interview, we break the loop and show the interview completion screen. Finally, the session is saved to a Google Sheet, which can be shared with team members and used for data analysis.

How to use
You'll need an n8n instance that is accessible to your target audience. Not technical enough to set up your own server? Try n8n Cloud and instantly deploy the template. Remember to activate the workflow so the form trigger is published and available for users.

Requirements
  • A Groq LLM for the AI agent. Feel free to swap this out for any other LLM.
  • Redis(-compatible) storage for capturing sessions.

Customising this workflow
The next step would be adding tools! AI interviews with knowledge retrieval could open up other possibilities, e.g. an onboarding wizard that generates questions by pulling facts from an internal knowledge base.
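For illustration, here is a sketch of the session handling described above, assuming an ioredis client and a simple keyword check in place of the AI agent's decision to end the interview.

```typescript
// Session handling sketch: append each Q&A pair to a Redis transcript, decide
// when to stop, and collect the rows the template writes to Google Sheets.
// ioredis and the keyword check are assumptions made for this example.
import Redis from "ioredis";

const redis = new Redis();

// Append this question/answer pair to the transcript captured for the session.
async function recordAnswer(sessionId: string, question: string, answer: string): Promise<void> {
  await redis.rpush(
    `interview:${sessionId}`,
    JSON.stringify({ question, answer, at: new Date().toISOString() })
  );
}

// In the workflow the AI agent decides this; a keyword check stands in here.
function wantsToEndInterview(answer: string): boolean {
  return /\b(stop|end the interview|i'm done)\b/i.test(answer);
}

// Collect the full transcript, ready to be appended to a Google Sheet.
async function finishInterview(sessionId: string): Promise<object[]> {
  const rows = await redis.lrange(`interview:${sessionId}`, 0, -1);
  return rows.map((row) => JSON.parse(row));
}
```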
Template by Jimleuk

Supported modes

  • Get Many Messages: retrieve chat messages from connected memory
  • Insert Messages: insert chat messages into connected memory
  • Delete Messages: delete chat messages from connected memory
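To make the three modes concrete, here is a small sketch of what they amount to against a connected memory. This is not the node's actual implementation; a plain in-memory store stands in for whichever memory you connect.

```typescript
// Not the node's internal implementation; just what the three supported modes
// amount to against a connected chat memory, with an in-memory store standing in.
type ChatMessage = { role: "user" | "ai" | "system"; content: string };

const memory = new Map<string, ChatMessage[]>();

// Get Many Messages: retrieve chat messages from connected memory.
function getManyMessages(sessionId: string): ChatMessage[] {
  return memory.get(sessionId) ?? [];
}

// Insert Messages: insert chat messages into connected memory.
function insertMessages(sessionId: string, messages: ChatMessage[]): void {
  memory.set(sessionId, [...getManyMessages(sessionId), ...messages]);
}

// Delete Messages: delete chat messages from connected memory.
function deleteMessages(sessionId: string): void {
  memory.delete(sessionId);
}
```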

About Chat Memory Manager

Related categories

Similar integrations

  • Wikipedia node
  • OpenAI Chat Model node
  • Zep Vector Store node
  • Postgres Chat Memory node
  • Pinecone Vector Store node
  • Embeddings OpenAI node
  • Supabase: Insert node
  • OpenAI node

Over 3000 companies switch to n8n every single week

Connect Chat Memory Manager with your company’s tech stack and create automation workflows

Last week I automated much of the back office work for a small design studio in less than 8hrs and I am still mind-blown about it.

n8n is a game-changer and should be known by all SMBs and even enterprise companies.

We're using the @n8n_io cloud for our internal automation tasks since the beta started. It's awesome! Also, support is super fast and always helpful. 🤗

in other news I installed @n8n_io tonight and holy moly it’s good

it’s compatible with EVERYTHING