Default Data Loader node

Integrate the LangChain Default Data Loader into your LLM apps and connect it with 422+ apps and services

Use Default Data Loader to easily build AI-powered applications with LangChain and integrate them with 422+ apps and services. n8n lets you seamlessly import data from files, websites, or databases into your LLM-powered application and create automated scenarios.
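Conceptually, the Default Data Loader turns incoming files or JSON into LangChain Document objects that downstream splitter, embedding, and vector store nodes can consume. Here is a minimal Python sketch of that idea using LangChain directly (the file name and chunk settings are illustrative assumptions, not the node's actual defaults):

```python
from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Wrap raw text (e.g., a downloaded file) in a LangChain Document,
# keeping its origin as metadata, much like the node does per input item.
raw_text = open("report.txt").read()  # hypothetical input file
doc = Document(page_content=raw_text, metadata={"source": "report.txt"})

# Split into chunks so an embeddings step can vectorise them.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents([doc])
print(f"{len(chunks)} chunks ready for embedding")
```

In n8n you rarely write this code yourself; the node wires the same steps together visually.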

Popular ways to use the Default Data Loader integration

Google Sheets node, HTTP Request node, and 20 more

API Schema Extractor

This workflow automates the process of discovering and extracting APIs from various services, followed by generating custom schemas. It works in three distinct stages: research, extraction, and schema generation, with each stage tracking its progress in a Google Sheet.

🙏 Jim Le deserves major kudos for helping to build this sophisticated three-stage workflow, which cleverly automates API documentation processing using a smart combination of web scraping, vector search, and LLM technologies.

How it works

Stage 1 - Research:
  • Fetches pending services from a Google Sheet
  • Uses Google search to find API documentation
  • Employs Apify for web scraping to filter relevant pages
  • Stores webpage contents and metadata in Qdrant (vector database)
  • Updates progress status in the Google Sheet (pending, ok, or error)

Stage 2 - Extraction:
  • Processes services that completed research successfully
  • Queries the vector store to identify products and offerings
  • Runs further queries for relevant API documentation
  • Uses Gemini (LLM) to extract API operations
  • Records extracted operations in the Google Sheet
  • Updates progress status (pending, ok, or error)

Stage 3 - Generation:
  • Takes services with successful extraction
  • Retrieves all API operations from the database
  • Combines and groups operations into a custom schema
  • Uploads the final schema to Google Drive
  • Updates the final status in the sheet with the file location

Ideal for:
  • Development teams needing to catalog multiple APIs
  • API documentation initiatives
  • Creating standardized API schema collections
  • Automating API discovery and documentation

Accounts required:
  • Google account (for Sheets and Drive access)
  • Apify account (for web scraping)
  • Qdrant database
  • Gemini API access

Set up instructions:
  • Prepare your Google Sheets document with the services information. Here's an example of a Google Sheet: you can copy it and change or remove the values under the columns. Also, make sure to update the Google Sheets nodes with the correct Google Sheet ID.
  • Configure Google Sheets OAuth2 credentials, the required third-party services (Apify, Qdrant), and Gemini.
  • Ensure proper permissions for Google Drive access.
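To make Stage 1 concrete, here is a hedged Python sketch of the storage step: embedding a scraped page and upserting it with metadata into Qdrant. The embedding model, collection name, and payload fields are assumptions; the template itself performs this through n8n nodes rather than code.

```python
import uuid
from openai import OpenAI
from qdrant_client import QdrantClient, models

openai_client = OpenAI()
qdrant = QdrantClient(url="http://localhost:6333")

page_text = "...scraped API documentation page via Apify..."  # placeholder

# Embed the page so Stage 2 can find it with vector search.
vector = openai_client.embeddings.create(
    model="text-embedding-3-small",  # assumed embedding model
    input=page_text,
).data[0].embedding

# Store the page with metadata so Stage 2 can query it per service.
qdrant.upsert(
    collection_name="api_docs",  # assumed collection name
    points=[models.PointStruct(
        id=str(uuid.uuid4()),
        vector=vector,
        payload={"service": "example-service", "url": "https://example.com/docs"},
    )],
)
```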
By Polina Medvedieva
Airtable node, HTTP Request node, Merge node, and 24 more

Scale Deal Flow with a Pitch Deck AI Vision, Chatbot and Qdrant Vector Store

Are you a popular tech startup accelerator (named after a particular higher-order function) overwhelmed with 1000s of pitch decks on a daily basis? Wish you could filter through them quickly using AI, but the decks are unparseable through conventional means? Then you're in luck!

This n8n template uses multimodal LLMs to parse and extract valuable data from even the most overly designed pitch decks in quick fashion. Not only that, it'll also create the foundations of a RAG chatbot at the end so you or your colleagues can drill down into the details if needed. With this template, you'll scale your capacity to find interesting companies you'd otherwise miss!

Requires n8n v1.62.1+

How It Works
  • Airtable is used as the pitch deck database, and PDF decks are downloaded from it.
  • An AI Vision model is used to transcribe each page of the pitch deck into markdown.
  • An Information Extractor is used to generate a report from the transcribed markdown and update the required information back into the pitch deck database.
  • The transcribed markdown is also uploaded to a vector store to build an AI chatbot which can be used to ask questions about the pitch deck.

Check out the sample Airtable here: https://airtable.com/appCkqc2jc3MoVqDO/shrS21vGqlnqzzNUc

How To Use
  • This template depends on the availability of the Airtable: make a duplicate of the Airtable (link) and its columns before running the workflow.
  • When a new pitch deck is received, enter the company name into the Name column and upload the PDF into the File column. Leave all other columns blank.
  • If you have the Airtable trigger active, the execution should start immediately once the file is uploaded. Otherwise, click the manual test trigger to start the workflow. When manually triggered, all "new" pitch decks will be handled by the workflow as separate executions.

Requirements
  • OpenAI for LLM
  • Airtable for database and interface
  • Qdrant for vector store

Customising This Workflow
  • Extend this starter template by adding more AI agents to validate claims made in the pitch deck, e.g. LinkedIn profiles, page visits, reviews, etc.
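The pivotal step is the vision transcription: each PDF page is rendered to an image and handed to a multimodal LLM that rewrites it as markdown. A hedged sketch of that step against OpenAI's chat API (model choice, prompt, and file name are assumptions; the template drives this through n8n's AI nodes):

```python
import base64
from openai import OpenAI

client = OpenAI()

# One rendered pitch-deck page, e.g. exported from the PDF as a PNG.
with open("slide_01.png", "rb") as f:  # hypothetical file
    image_b64 = base64.b64encode(f.read()).decode()

# Ask a vision-capable model to transcribe the slide into markdown.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Transcribe this pitch-deck slide into markdown, preserving headings and bullet points."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
markdown_page = response.choices[0].message.content
```

The same markdown then feeds both the Information Extractor and the Qdrant vector store, so one transcription pass powers both the report and the chatbot.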
By Jimleuk
HTTP Request node, Merge node, and 7 more

Scrape and summarize webpages with AI

This workflow integrates both web scraping and NLP functionality. It uses HTML parsing to extract links, HTTP requests to fetch essay content, and AI-based summarization with GPT-4o. It's a solid example of an end-to-end automated task that saves real time by distilling long-form content into concise summaries. Note that to use this template, you need to be on n8n version 1.50.0 or later.
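The whole pipeline fits in a few lines outside n8n too. A minimal sketch, assuming a requests/BeautifulSoup scrape and the OpenAI API (the URL and prompt are placeholders, and the crude truncation stands in for the template's proper content handling):

```python
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()

# Fetch the page and strip it down to readable text.
html = requests.get("https://example.com/essay", timeout=30).text  # placeholder URL
text = BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)

# Summarise with GPT-4o, mirroring what the template's OpenAI node does.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Summarize the following essay in five bullet points."},
        {"role": "user", "content": text[:12000]},  # crude truncation to respect context limits
    ],
)
print(response.choices[0].message.content)
```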
By n8n Team
GitHub node, HTTP Request node, Merge node, and 14 more

Building a RAG Chatbot for Movie Recommendations with Qdrant and OpenAI

Create a recommendation tool without hallucinations, based on RAG with the Qdrant vector database. This example is built on movie recommendations over the IMDb top-1000 dataset. You can provide your wishes and your "big no's" to the chatbot, for example: "A movie about wizards but not Harry Potter", and get top-3 recommendations.

How it works (a video with the full design process is available):
  • Upload the IMDb-1000 dataset to the Qdrant Vector Store, embedding movie descriptions with OpenAI;
  • Set up an AI agent with a chat. This agent will call a workflow tool to get movie recommendations based on a request written in the chat;
  • Create a workflow which calls Qdrant's Recommendation API to retrieve top-3 movie recommendations based on your positive and negative examples.

Set Up Steps
  • You'll need to create a free-tier Qdrant cluster (Qdrant can also be used locally; it's open source) and set up API credentials.
  • You'll need OpenAI credentials.
  • You'll need GitHub credentials, and to upload the IMDb Kaggle dataset to your GitHub.
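The heart of the template is Qdrant's Recommendation API call, which takes positive and negative examples at once. A hedged Python sketch (the collection name and embedding model are assumptions; the workflow embeds the user's likes and dislikes from the chat before querying):

```python
from openai import OpenAI
from qdrant_client import QdrantClient, models

openai_client = OpenAI()
qdrant = QdrantClient(url="https://your-cluster.qdrant.io", api_key="...")  # placeholders

def embed(text: str) -> list[float]:
    return openai_client.embeddings.create(
        model="text-embedding-3-small", input=text  # assumed embedding model
    ).data[0].embedding

# "A movie about wizards but not Harry Potter"
result = qdrant.query_points(
    collection_name="imdb_top1000",  # assumed collection name
    query=models.RecommendQuery(
        recommend=models.RecommendInput(
            positive=[embed("a movie about wizards")],
            negative=[embed("Harry Potter")],
        )
    ),
    limit=3,  # top-3 recommendations
)
for point in result.points:
    print(point.payload.get("title"), point.score)
```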
By Jenny
Notion node, Notion Trigger node, Embeddings OpenAI node, and 4 more

Store Notion's Pages as Vector Documents into Supabase with OpenAI

Workflow updated on 17/06/2024: added a 'Summarize' node to avoid creating a row for each Notion content block in the Supabase table.

This workflow assumes you have a Supabase project with a table that has a vector column. If you don't, follow the instructions in the Supabase Langchain Guide.

Workflow Description

This workflow automates the process of storing Notion pages as vector documents in a Supabase database with a vector column. The steps are as follows:
  • Notion Page Added Trigger: Monitors a specified Notion database for newly added pages. You can create a specific Notion database where you copy the pages you want to store in Supabase. Node: Page Added in Notion Database
  • Retrieve Page Content: Fetches all block content from the newly added Notion page. Node: Get Blocks Content
  • Filter Non-Text Content: Excludes blocks of type "image" and "video" to focus on textual content. Node: Filter - Exclude Media Content
  • Summarize Content: Concatenates the Notion blocks' content to create a single text for embedding. Node: Summarize - Concatenate Notion's blocks content
  • Store in Supabase: Stores the processed documents and their embeddings in a Supabase table with a vector column. Node: Store Documents in Supabase
  • Generate Embeddings: Uses OpenAI's API to generate embeddings for the textual content. Node: Generate Text Embeddings
  • Create Metadata and Load Content: Loads the block content and creates associated metadata, such as page ID and block ID. Node: Load Block Content & Create Metadata
  • Split Content into Chunks: Divides the text into smaller chunks for easier processing and embedding generation. Node: Token Splitter
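For reference, here is roughly what the store-and-embed path looks like in Python with LangChain's Supabase vector store (the n8n template uses the equivalent LangChain nodes; the table and RPC names follow the Supabase Langchain Guide's defaults and are assumptions here):

```python
from langchain_community.vectorstores import SupabaseVectorStore
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import TokenTextSplitter
from supabase import create_client

supabase = create_client("https://your-project.supabase.co", "service-role-key")  # placeholders

# The concatenated Notion page content, with IDs kept as metadata.
doc = Document(
    page_content="Concatenated text of the Notion page...",
    metadata={"page_id": "notion-page-id"},
)

# Token-based chunking, mirroring the template's Token Splitter node.
chunks = TokenTextSplitter(chunk_size=256, chunk_overlap=32).split_documents([doc])

# Embed with OpenAI and insert into the vector-enabled Supabase table.
SupabaseVectorStore.from_documents(
    chunks,
    OpenAIEmbeddings(),
    client=supabase,
    table_name="documents",        # assumed, per the Supabase Langchain Guide
    query_name="match_documents",  # assumed matching RPC function
)
```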
By Dataki
HTTP Request node, Merge node, and 12 more

Recipe Recommendations with Qdrant and Mistral

This n8n workflow demonstrates creating a recipe recommendation chatbot using the Qdrant vector store's Recommendation API. Use this example to build recommendation features into your AI agents for your users.

How it works
  • For our recipes, we'll use HelloFresh's weekly courses and recipes as data, scraping the website for it.
  • Each recipe is split, vectorised, and inserted into a Qdrant collection using Mistral embeddings. Additionally, the whole recipe is stored in a SQLite database for later retrieval.
  • Our AI agent is set up to recommend recipes from our Qdrant vector store. However, instead of the default similarity search, we'll use the Recommendation API.
  • Qdrant's Recommendation API allows you to provide a negative prompt; in our case, the user can specify recipes or ingredients to avoid.
  • The AI agent is now able to suggest a recipe recommendation better suited to the user and increase customer satisfaction.

Requirements
  • Qdrant vector store instance to save the recipes
  • Mistral.ai account for embeddings and the LLM agent

Customising the workflow
  • This workflow can work for a variety of different audiences. Try different sets of data, such as clothes, sports shoes, vehicles, or even holidays.
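A hedged sketch of the ingestion step: embedding one scraped recipe with Mistral and upserting it into Qdrant (v1 mistralai SDK; the collection name and sample text are assumptions, and the template additionally mirrors the full recipe into SQLite):

```python
import uuid
from mistralai import Mistral
from qdrant_client import QdrantClient, models

mistral = Mistral(api_key="...")  # placeholder key
qdrant = QdrantClient(url="http://localhost:6333")

recipe_text = "Creamy mushroom pasta: ..."  # one scraped HelloFresh recipe (placeholder)

# mistral-embed returns 1024-dimensional vectors.
embedding = mistral.embeddings.create(
    model="mistral-embed", inputs=[recipe_text]
).data[0].embedding

qdrant.upsert(
    collection_name="recipes",  # assumed collection name
    points=[models.PointStruct(
        id=str(uuid.uuid4()),
        vector=embedding,
        payload={"text": recipe_text},
    )],
)
```

At query time, the agent can call the same Recommendation API pattern shown in the movie template above, passing liked dishes as positive examples and disliked ingredients as negative ones.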
By Jimleuk

Similar integrations

  • Wikipedia node
  • OpenAI Chat Model node
  • Zep Vector Store node
  • Postgres Chat Memory node
  • Pinecone Vector Store node
  • Embeddings OpenAI node
  • Supabase: Insert node
  • OpenAI node

Over 3000 companies switch to n8n every single week

Connect Default Data Loader with your company’s tech stack and create automation workflows

in other news I installed @n8n_io tonight and holy moly it’s good

it’s compatible with EVERYTHING

Last week I automated much of the back office work for a small design studio in less than 8hrs and I am still mind-blown about it.

n8n is a game-changer and should be known by all SMBs and even enterprise companies.

We're using the @n8n_io cloud for our internal automation tasks since the beta started. It's awesome! Also, support is super fast and always helpful. 🤗