
Integrate the LangChain Question and Answer Chain into your LLM apps and connect it with 422+ apps and services

Use Question and Answer Chain to easily build AI-powered applications with LangChain and integrate them with 422+ apps and services. n8n lets you seamlessly import data from files, websites, or databases into your LLM-powered application and create automated scenarios.

Popular ways to use the Question and Answer Chain integration

Uses: Telegram node, Telegram Trigger node, and 9 more nodes

Telegram chat with PDF

What this template does
This template serves as a chatbot that lets you ask questions about the content of a PDF directly in Telegram. It checks whether an incoming Telegram message contains a document. If it does, it stores the PDF in a Pinecone vector store. If there is no document, it searches the vector store for information and tries to answer your question.

Setup
  • Open the Telegram app and search for the BotFather user (@BotFather)
  • Start a chat with the BotFather
  • Type /newbot to create a new bot
  • Follow the prompts to name your bot and get a unique API token
  • Save your access token and username
Once your bot is set up, you can send it the PDF and then ask questions about its content.

How to adjust it to your needs
  • Exchange the Groq chat model with any model that you like
  • Exchange Pinecone with any other vector store tool you like (e.g. Supabase, Postgres or Qdrant)

#Telegram, #Pinecone, #Openai, #Groq
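
For reference, the same ingest-or-answer logic can be sketched outside n8n in plain LangChain. This is a minimal Python sketch, assuming python-telegram-bot (v20+), langchain, langchain-openai and langchain-pinecone are installed and API keys are set as environment variables; the index name is a placeholder and ChatOpenAI stands in for whichever chat model (e.g. Groq) you configure:

```python
# Minimal sketch of the template's logic: store incoming PDFs in Pinecone,
# otherwise answer the question from the vector store. Index name and model
# choices are placeholders, not taken from the original workflow.
import os

from langchain.chains import RetrievalQA
from langchain_community.document_loaders import PyPDFLoader
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore
from langchain_text_splitters import RecursiveCharacterTextSplitter
from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters

store = PineconeVectorStore(index_name="telegram-pdf-chat",  # hypothetical index
                            embedding=OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=store.as_retriever())

async def on_message(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    msg = update.message
    if msg is None:
        return
    if msg.document:  # a PDF was sent: chunk it and upsert into the vector store
        file = await context.bot.get_file(msg.document.file_id)
        path = await file.download_to_drive()
        chunks = RecursiveCharacterTextSplitter(chunk_size=1000).split_documents(
            PyPDFLoader(str(path)).load())
        store.add_documents(chunks)
        await msg.reply_text("PDF stored - ask me anything about it.")
    elif msg.text:    # no document: run the question through the Q&A chain
        answer = qa.invoke({"query": msg.text})
        await msg.reply_text(answer["result"])

app = Application.builder().token(os.environ["TELEGRAM_BOT_TOKEN"]).build()
app.add_handler(MessageHandler(filters.ALL, on_message))
app.run_polling()
```

In the n8n template, the Telegram Trigger, Pinecone Vector Store and Question and Answer Chain nodes handle these steps without code.
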
By felipe biava cataneo (felipecataneo)
Uses: HTTP Request node and 11 more nodes

Build a Financial Documents Assistant using Qdrant and Mistral.ai

This n8n workflow demonstrates how to manage your Qdrant vector store when you need to keep it in sync with local files. It covers creating, updating and deleting vector store records, ensuring our chatbot assistant is never outdated or misleading.

Disclaimer
This workflow depends on local files accessed through the local filesystem and so will only work on a self-hosted version of n8n at this time. It is possible to amend this workflow to work on n8n cloud by replacing the local file trigger and read file nodes.

How it works
  • A local directory where bank statements are downloaded to is monitored via a local file trigger. The trigger watches for the file created, file changed and file deleted events.
  • When a file is created, its contents are uploaded to the vector store.
  • When a file is updated, its previous records are replaced.
  • When a file is deleted, the corresponding records are also removed from the vector store.
  • A simple question-and-answer chatbot is set up to answer any questions about the bank statements in the system.

Requirements
  • A self-hosted version of n8n. Some of the nodes used in this workflow only work with the local filesystem.
  • A Qdrant instance to store the records.

Customising the workflow
This workflow can also work with remote data. Try integrating accounting or CRM software to build a managed system for payroll, invoices and more.
Want to go fully local? A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/189F1fNOiw6naNSlSwnyLVEm_Ho_IFfdM/view?usp=sharing
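
The create/update/delete sync driven by the local file trigger can be sketched in Python as follows - a minimal sketch, assuming the watchdog, qdrant-client, langchain-qdrant and langchain-mistralai packages, an already-created Qdrant collection, and placeholder directory and collection names:

```python
# Minimal sketch of keeping a Qdrant collection in sync with a watched folder.
# Collection name, directory and embedding model are placeholder assumptions.
from langchain_community.document_loaders import PyPDFLoader
from langchain_mistralai import MistralAIEmbeddings
from langchain_qdrant import QdrantVectorStore
from qdrant_client import QdrantClient, models
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

COLLECTION, WATCH_DIR = "bank_statements", "/data/statements"
client = QdrantClient(url="http://localhost:6333")
store = QdrantVectorStore(client=client, collection_name=COLLECTION,
                          embedding=MistralAIEmbeddings())

def delete_records(path: str) -> None:
    """Remove every vector whose metadata.source matches the file path."""
    client.delete(
        collection_name=COLLECTION,
        points_selector=models.FilterSelector(filter=models.Filter(must=[
            models.FieldCondition(key="metadata.source",
                                  match=models.MatchValue(value=path))
        ])),
    )

def upsert_records(path: str) -> None:
    """(Re)load the file and add its chunks to the vector store."""
    store.add_documents(PyPDFLoader(path).load_and_split())

class StatementHandler(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            upsert_records(event.src_path)
    def on_modified(self, event):          # replace previous records first
        if not event.is_directory:
            delete_records(event.src_path)
            upsert_records(event.src_path)
    def on_deleted(self, event):
        delete_records(event.src_path)

observer = Observer()
observer.schedule(StatementHandler(), WATCH_DIR, recursive=False)
observer.start()
observer.join()
```
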
By Jimleuk (jimleuk)
Uses: Webhook node, Google Drive node, Respond to Webhook node, and 8 more nodes

AI Crew to Automate Fundamental Stock Analysis - Q&A Workflow

How it works
Using a crew of AI agents (Senior Researcher, Visionary, and Senior Editor), this crew will automatically determine the right questions to ask to produce a detailed fundamental stock analysis. This application has two components: a front-end and a Stock Q&A engine. The front end is the team of agents automatically figuring out the questions to ask, and the back-end part is the ability to answer those questions with the SEC 10K data. This template implements the Stock Q&A engine.
For the front-end of the application, you can choose one of two options:
  • using CrewAI with the Replit environment (code approach)
  • a fully visual approach with the n8n template (AI-powered automated stock analysis)

Setup steps
  • Use the first workflow in the template to upsert a company annual report PDF (such as from an SEC 10-K filing)
  • Get the URL for the Webhook in the second workflow template
CrewAI front-end:
  • YouTube overview video
  • Fork this AI Agent environment (Crew Agent Environment)
  • Set the webhook URL in the N8N_WEBHOOK_URL variable
  • Set the OpenAI_API_KEY variable
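
Whichever front-end you pick, it reaches the Stock Q&A engine through the Webhook node. Here is a minimal Python sketch of that call, assuming the workflow accepts a JSON body with a "question" field and replies with an "answer" field - adjust to your Webhook / Respond to Webhook configuration:

```python
# Minimal sketch of calling the Stock Q&A engine exposed by the n8n Webhook
# node. The payload and response field names are assumptions, not taken from
# the template itself.
import os

import requests

N8N_WEBHOOK_URL = os.environ["N8N_WEBHOOK_URL"]

def ask_stock_qa(question: str) -> str:
    """POST a question to the n8n Q&A workflow and return its answer."""
    resp = requests.post(N8N_WEBHOOK_URL, json={"question": question}, timeout=120)
    resp.raise_for_status()
    return resp.json().get("answer", resp.text)

if __name__ == "__main__":
    print(ask_stock_qa("What were the main revenue drivers in the latest 10-K?"))
```

A CrewAI agent (or any other front-end) can wrap this function as a tool so the crew's generated questions are answered against the uploaded filing.
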
By Derek Cheung (derekcheungsa)
Uses: Merge node and 17 more nodes

Breakdown Documents into Study Notes using Templating MistralAI and Qdrant

This n8n workflow takes in a document such as a research paper, a marketing or sales deck, or company filings, and breaks it down into 3 templates: study guide, briefing doc and timeline. These templates are designed to help a student, associate or clerk quickly summarise, learn and understand the contents to be more productive.
  • Study guide - a short quiz of questions and answers generated by the AI agent using the contents of the document.
  • Briefing doc - key information and insights are extracted by the AI into a digestible form.
  • Timeline - key events, durations and people are identified and listed into a simple-to-understand timeline by the AI.

How it works
  • A local file trigger watches a local network directory for new documents.
  • New documents are imported into the workflow, their contents extracted and vectorised into a Qdrant vector store to build a mini knowledge base.
  • The document then passes through a series of template-generating prompts where the AI performs "research" on the knowledge base to generate the template contents.
  • The generated study guide, briefing and timeline documents are exported to a designated folder for the user.

Requirements
  • Self-hosted version of n8n.
  • Qdrant instance for the knowledge base.
  • Mistral.ai account for embeddings and AI model.

Customising your workflow
Try adding your own templates or adjusting the existing templates to suit your unique use case. Anything is quite possible and limited only by your imagination!
Want to go fully local? A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/1VV5R2nW-IhVcFP_k8uEks4LsLRZrHSNG/view?usp=sharing
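
The template-generation step boils down to running a few research prompts against the Qdrant knowledge base. A minimal Python sketch, assuming langchain-mistralai and langchain-qdrant, an already-populated "documents" collection, and illustrative prompts rather than the template's own:

```python
# Minimal sketch of generating the study guide, briefing doc and timeline by
# querying a Qdrant-backed knowledge base with a Mistral model. Collection
# name, model name and prompts are assumptions.
from langchain.chains import RetrievalQA
from langchain_mistralai import ChatMistralAI, MistralAIEmbeddings
from langchain_qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

store = QdrantVectorStore(
    client=QdrantClient(url="http://localhost:6333"),
    collection_name="documents",
    embedding=MistralAIEmbeddings(),
)
qa = RetrievalQA.from_chain_type(llm=ChatMistralAI(model="mistral-large-latest"),
                                 retriever=store.as_retriever())

TEMPLATES = {
    "study_guide": "Write a short quiz of questions and answers covering the document.",
    "briefing_doc": "Extract the key information and insights as a briefing document.",
    "timeline": "List the key events, durations and people as a simple timeline.",
}

for name, prompt in TEMPLATES.items():
    result = qa.invoke({"query": prompt})
    with open(f"{name}.md", "w") as f:   # export to a designated folder
        f.write(result["result"])
```
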
By Jimleuk (jimleuk)
Uses: Google Drive node, Supabase node, and 7 more nodes

Supabase Insertion & Upsertion & Retrieval

This is a demo workflow that showcases how to use Supabase to embed a document, retrieve information from the vector store via chat, and update the database.

Setup steps
  • Set your credentials for Supabase
  • Set your credentials for an AI model of your choice
  • Set credentials for any service you want to use to upload documents
  • Follow the guidelines in the workflow itself (Sticky Notes)

Feedback & Questions
If you have any questions or feedback about this workflow, feel free to get in touch at [email protected]
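
For orientation, the insertion and retrieval steps map roughly onto the following Python sketch, assuming the supabase, langchain-community and langchain-openai packages and a Supabase "documents" table with a "match_documents" query function prepared for vector search (these are LangChain's default names, not necessarily what this workflow uses):

```python
# Minimal sketch of embedding a document into Supabase and answering a chat
# question from it. Table/function names and the OpenAI models are assumptions.
import os

from langchain.chains import RetrievalQA
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import SupabaseVectorStore
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])
store = SupabaseVectorStore(
    client=supabase,
    embedding=OpenAIEmbeddings(),
    table_name="documents",
    query_name="match_documents",
)

# Insertion: embed a document's chunks into the vector store
store.add_documents(PyPDFLoader("report.pdf").load_and_split())

# Retrieval: answer a chat question from the stored embeddings
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=store.as_retriever())
print(qa.invoke({"query": "What is this document about?"})["result"])
```
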
By Ria (riascho)
Uses: HTTP Request node, Slack node, Webhook node, and 17 more nodes

Advanced AI Demo (Presented at AI Developers #14 meetup)

This workflow was presented at the AI Developers meetup in San Francisco on 24 July 2024.

AI workflows
  • Categorize incoming Gmail emails and assign custom Gmail labels. This example uses the Text Classifier node, which simplifies this use case.
  • Ingest a PDF into a Pinecone vector store and chat with it (RAG example).
  • AI Agent example showcasing the HTTP Request tool. We teach the agent how to check availability on a Google Calendar and book an appointment.
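
As a rough illustration of the first workflow, the Text Classifier node's job - picking a label for an incoming email - can be sketched with a structured-output call. The categories and model here are assumptions, not taken from the demo:

```python
# Minimal sketch of classifying an email into a Gmail label using structured
# output. Label set and model are placeholders.
from typing import Literal

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class EmailLabel(BaseModel):
    """Which Gmail label to apply to an incoming email."""
    label: Literal["Sales", "Support", "Newsletter", "Other"] = Field(
        description="Best-fitting category for the email"
    )

classifier = ChatOpenAI(model="gpt-4o-mini").with_structured_output(EmailLabel)

def classify(subject: str, body: str) -> str:
    result = classifier.invoke(f"Classify this email.\nSubject: {subject}\n\n{body}")
    return result.label  # map this to a Gmail label via the Gmail node / API

print(classify("50% off this week only", "Our summer sale starts today..."))
```
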
By Max Tkacz (max-n8n)

Similar integrations

  • Wikipedia node
  • OpenAI Chat Model node
  • Zep Vector Store node
  • Postgres Chat Memory node
  • Pinecone Vector Store node
  • Embeddings OpenAI node
  • Supabase: Insert node
  • OpenAI node

Over 3000 companies switch to n8n every single week

Connect Question and Answer Chain with your company’s tech stack and create automation workflows