This workflow adds data visualization capabilities to a native SQL Agent.
Together, they help foster data analysis and data visualization within a team.
It relies on the native SQL Agent, which already works well, and adds visualization capabilities thanks to OpenAI Structured Output and Quickchart.io.
How it works
The first part of the workflow is a regular SQL Agent: it connects to a database, queries it, and translates the response into a human-readable format.
Then, the Text Classifier decides whether a chart would benefit the user and support the SQL Agent's response.
If it would, the sub-workflow is executed to dynamically generate a chart, and the chart is appended to the SQL Agent's response.
If it wouldn't, the SQL Agent's response is output directly.
The sub-workflow calls OpenAI through the HTTP Request node to retrieve a chart definition.
In the "set response" node, the chart definition is appended to a quickchart.io URL, which yields the URL of the chart image. This is sent back to the AI Agent.
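As an illustration, here is a minimal sketch of that step outside n8n, assuming the model returns a Chart.js-style definition (the field names and sample data are placeholders, not the workflow's actual output):

```typescript
// The chart definition below is a placeholder for what the Structured Output call returns.
const chartDefinition = {
  type: "bar",
  data: {
    labels: ["2021", "2022", "2023"],
    datasets: [{ label: "Orders", data: [120, 180, 240] }],
  },
};

// quickchart.io renders the Chart.js config passed in the `c` query parameter as an image.
const chartUrl =
  "https://quickchart.io/chart?c=" + encodeURIComponent(JSON.stringify(chartDefinition));

// Appended to the SQL Agent's text answer before it is returned to the chat.
const response = `Here is the breakdown of orders per year.\n\n${chartUrl}`;
console.log(response);
```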
How to use it
Use an existing database or create a new one.
For example, I've used this Kaggle dataset and uploaded it to a Supabase DB.
Add the PostgreSQL or MySQL credentials.
Alternatively, you can use SQLite binary files (check this template).
Activate the workflow.
Start chatting with the AI SQL Agent.
If the Text Classifier considers a chart would be useful, it will generate a chart in addition to the response from the SQL Agent.
Notes
The full Quickchart.io specification has not been integrated, so some glitches are possible (e.g., radar charts are not displayed properly because of the graph size).
Task:
Create a simple API endpoint using the Webhook and Respond to Webhook nodes
Why:
You can prototype or replace a backend process with a single workflow
Main use cases:
Replace backend logic with a workflow
Task:
Merge two datasets into one based on matching rules
Why:
A powerful capability of n8n is to easily branch out the workflow in order to process different datasets. Even more powerful is the ability to join them back together with SQL-like joining logic (see the sketch after this list).
Main use cases:
Appending data sets
Keep only new items
Keep only existing items
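As a rough illustration of that joining logic, here is a hedged sketch of an inner join on a shared key outside n8n (the field names and data are placeholders, not what the Merge node actually produces):

```typescript
// Two small example datasets joined on a shared key (`email`, an illustrative field).
type Customer = { email: string; name: string };
type Order = { email: string; total: number };

const customers: Customer[] = [{ email: "a@example.com", name: "Ada" }];
const orders: Order[] = [{ email: "a@example.com", total: 42 }];

// "Keep only existing items": keep rows present in both datasets, combining their fields.
const joined = customers.flatMap((customer) =>
  orders
    .filter((order) => order.email === customer.email)
    .map((order) => ({ ...customer, ...order }))
);

console.log(joined); // [{ email: "a@example.com", name: "Ada", total: 42 }]
```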
This workflow will back up your workflows to Github. It uses the public API, via the n8n node, to export all of the workflow data.
It then loops over the data and checks Github to see whether a file named after the workflow already exists. It then updates the file on Github if it exists and has changed, creates a new file if it doesn't exist, and ignores the file if the content is the same.
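For illustration, a hedged sketch of that per-workflow decision (the variable names are placeholders, not the actual node outputs):

```typescript
// `existingContent` stands for the file fetched from Github (null if no file exists yet);
// `exportedJson` stands for the workflow data exported via the n8n node.
type Action = "create" | "update" | "ignore";

function decideAction(existingContent: string | null, exportedJson: string): Action {
  if (existingContent === null) return "create";        // no file with this workflow name yet
  if (existingContent === exportedJson) return "ignore"; // unchanged, skip the update
  return "update";                                       // file exists but content differs
}

console.log(decideAction(null, "{}"));         // "create"
console.log(decideAction("{}", "{}"));         // "ignore"
console.log(decideAction("{}", '{"a":1}'));    // "update"
```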
Config Options
repo_owner - Github owner
repo_name - Github repository name
repo_path - Path within the Github repository
>This workflow has been updated to use the n8n node and the Code node, so it requires at least version 0.198.0 of n8n
This workflow extracts data from a multi-page website.
The workflow:
1) Starts from the country list at https://www.theswiftcodes.com/browse-by-country/.
2) Loads every country page (e.g., https://www.theswiftcodes.com/albania/)
3) Paginates through every page within each country page.
4) Extracts the data from each country page.
5) Saves the data to MongoDB.
6) Continues until all pages in all countries have been processed.
It uses the getWorkflowStaticData('global') method to recover the next page (saved during the previous run) and then continues through all the pages.
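As an illustration, a minimal sketch of how the Code node body might carry the "next page" between runs; the property name and URLs are placeholders (n8n provides getWorkflowStaticData in that context, it is declared here only to keep the sketch self-contained):

```typescript
// Minimal sketch of the pagination state kept between executions.
declare function getWorkflowStaticData(type: "global"): { nextPage?: string };

const staticData = getWorkflowStaticData("global");

// Read the page saved by the previous run (or start from the country's first page).
const pageToFetch = staticData.nextPage ?? "https://www.theswiftcodes.com/albania/";

// ...fetch and extract `pageToFetch` here, then remember the "next page" link found on it...
staticData.nextPage = "https://www.theswiftcodes.com/albania/page/2/";
```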
There is a first section where the country list is retrieved and extracted.
Later, I check whether a locally cached copy of the page is available and, if so, recover the cached page from disk.
Finally, I save the data to MongoDB and paginate through all the pages of each country, for all the countries.
I have applied a cache system that saves each visited page to the n8n local disk. If I relaunch the workflow, it checks whether a cache file exists and discards requests to the webpage that are no longer required.
If the data on the website changes, you can add a Cron node to check the website once per week.
Finally, before inserting data into MongoDB, the best way to avoid duplicates is to check that the swift_code (the primary value of the collection) doesn't already exist.
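One hedged way to do that outside n8n is an upsert keyed on swift_code, sketched below with the official MongoDB driver (the connection string, database and collection names are assumptions):

```typescript
import { MongoClient } from "mongodb";

// Upsert on swift_code so an already-stored code is updated rather than inserted twice.
async function saveSwiftCode(doc: { swift_code: string; bank: string; country: string }) {
  const client = new MongoClient("mongodb://localhost:27017"); // assumed connection string
  await client.connect();
  try {
    await client
      .db("scraping")                 // assumed database name
      .collection("swift_codes")      // assumed collection name
      .updateOne({ swift_code: doc.swift_code }, { $set: doc }, { upsert: true });
  } finally {
    await client.close();
  }
}
```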
I recommend using a proxy for all requests to avoid IP blocks. A good solution for proxy plus IP rotation is scrapoxy.io.
This workflow is perfect for small data requirements. If you need to scrape dynamic data, you can use a Headless browser or any other service.
If you want to scrape huge lists of URIs, I recommend using Scrapy + Scrapoxy.
This workflow scrapes Google Maps data in an efficient way using SerpAPI.
You'll get all the data from Google Maps at a lower cost than the Google Maps API.
Provide your Google Maps search URL as input, and you'll get a list of places with many data points such as:
phone number
website
rating
reviews
address
And much more.
Full guide to implement the workflow is here:
https://lempire.notion.site/Scrape-Google-Maps-places-with-n8n-b7f1785c3d474e858b7ee61ad4c21136?pvs=4
A temporary solution that uses the undocumented REST API to back up workflows to Google Drive.
Please note that there are issues with this workflow. It does not support versioning, so it will create multiple copies of the workflows; if you run this daily, the folder will grow quickly. Once I figure out how to version in Gdrive I'll update it here.
This creates a git backup of the workflows and credentials.
It uses the n8n export command together with git diff, so you can run it as many times as you want, but a commit is created only when there are changes.
Setup
You need command-line access to the server.
Create a repository in some remote place to host your project, like Github, Gitlab, or your favorite private repo.
Clone the repository on the server in a place that n8n can access. In the example, the location is . and the repository name is repo. Change these in the commands and in the workflow commands (you can set the path as a variable in the workflow). Check out another branch if you won't use the master one.
cd .
git clone repository
Or you could git init and then add the remote (git remote add origin YOUR_REPO_URL), whichever you prefer.
On the server, check that everything is in place to be able to commit. Very likely you'll need to set up the user email and name. Try to create a commit and push it to upstream; anything you still need (like configuring a user to commit) will come up along the way. I strongly suggest also testing the export commands to guarantee they work.
cd ./repo
git commit -m "Initial commit" --allow-empty
-u is the same as --set-upstream
git push -u origin master
Test pushing to upstream with the first exported data:
npx n8n export:workflow --backup --output ./repo/workflows/
npx n8n export:credentials --backup --output repo/credentials/
cd ./repo
git add .
git commit -m "manual backup: first export"
git push
After that, if everything is ok, the workflow should work just fine.
Adjustments
Adjust the path used in the workflow. Note that the git -C PATH command is the same as cd PATH; git ....
Also, adjust the cron to run as often as you need. As I said at the beginning, you can run it even every minute, but it will create commits only when there are changes.
Credentials encryption
By default, credentials are exported encrypted. You can add the --decrypted flag to the n8n export:credentials command if you need to save them in plain text. But as a general rule, it's better to save the encryption key, which you only need to do once, and then export the credentials safely encrypted.
This n8n workflow demonstrates how to build a simple uptime monitoring service using scheduled triggers.
Useful for webmasters with a handful of sites who want a cost-effective solution without the need for all the bells and whistles.
How it works
A scheduled trigger reads a list of website URLs from a Google Sheet every 5 minutes.
Each website URL is checked using the HTTP node, which determines whether the website is in the UP or DOWN state (see the sketch below).
An email and Slack message are sent for websites that are in the DOWN state.
The Google Sheet is updated with the website's state, and a log entry is created.
Logs can be used to determine the total % of UP and DOWN time over a period.
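A minimal sketch of the UP/DOWN decision, assuming a simple status-code rule (the timeout and the exact rule used by your HTTP node settings may differ):

```typescript
// A site is treated as UP when the request succeeds with a non-5xx status.
async function checkWebsite(url: string): Promise<"UP" | "DOWN"> {
  try {
    const res = await fetch(url, { redirect: "follow", signal: AbortSignal.timeout(10_000) });
    return res.status < 500 ? "UP" : "DOWN";
  } catch {
    return "DOWN"; // network error or timeout
  }
}

checkWebsite("https://example.com").then((state) => console.log(state));
```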
Requirements
Google Sheet for storing websites to monitor and their states
Gmail for email alerts
Slack for channel alerts
Customising the workflow
Don't use Google Sheets? This can easily be swapped for Excel or Airtable.
Using n8n a lot?
Soar above the limitations of the default n8n dashboard! This template gives you an overview of your workflows, nodes, and tags – all in one place. 💪
Built using XML stylesheets and the Bootstrap 5 library, this workflow is self-contained and does not depend on any third-party software. 🙌 It generates a comprehensive overview JSON that can be easily integrated with other BI tools for further analysis and visualization. 📊
Reach out to Eduard if you need help adapting this workflow to your specific use-case!
🚀 Benefits:
Workflow Summary 📈: Instant overview of your workflows, active counts, and triggers.
Left-Side Panel 📋: Quick access to all your workflows, nodes, and tags for seamless navigation.
Workflow Details 🔬: Deep dive into each workflow's nodes, timestamps, and tags.
Node Analysis 🧩: Identify the most frequently used nodes across your workflows.
Tag Organization 🗂️: Workflows are grouped according to their tags.
Visually Stunning 🎨: Clean, intuitive, and easy-to-navigate dashboard design.
XML & Bootstrap 5 🛠️: Built using XML stylesheets and Bootstrap 5, ensuring a self-contained and responsive dashboard.
No Dependencies 🔒: The workflow does not rely on any third-party software. Bootstrap 5 files are loaded via CDN but can be delivered directly from your server.
⚠️ Important note for cloud users
Since the cloud version doesn't support environment variables, please make the following changes:
In the get-nodes-via-jmespath node, update the instance_url variable: enter your n8n URL instead of {{$env["N8N_PROTOCOL"]}}://{{$env["N8N_HOST"]}}
In the Create HTML node, provide the n8n instance URL instead of {{ $env.WEBHOOK_URL }}
🌟Example:
Check out our other workflows:
n8n.io/creators/eduard
n8n.io/creators/yulia
This workflow is a modification of the previous template on how to create an SQL agent with LangChain and SQLite.
The key difference – the agent has access only to the database schema, not to the actual data. To achieve this, SQL queries are made outside the AI Agent node, and the results are never passed back to the agent.
This approach allows the agent to generate SQL queries based on the structure of tables and their relationships, without having to access the actual data.
This makes the process more secure and efficient, especially in cases where data confidentiality is crucial.
🚀 Setup
To get started with this workflow, you’ll need to set up a free MySQL server and import your database (check Step 1 and 2 in this tutorial).
Of course, you can switch MySQL for another SQL database such as PostgreSQL; the principle remains the same. The key is to download the schema once and save it locally to avoid repeated remote connections.
Run the top part of the workflow once to download and store the MySQL chinook database schema file on the server.
With this approach, we avoid the need to repeatedly connect to a remote db4free database and fetch the schema every time. As a result, we achieve greater processing speed and efficiency.
🗣️ Chat with your data
Start a chat: send a message in the chat window.
The workflow loads the locally saved MySQL database schema, without having the ability to touch the actual data. The file contains the full structure of your MySQL database for analysis.
The Langchain AI Agent receives the schema, your input and begins to work.
The AI Agent generates SQL queries and brief comments based solely on the schema and the user’s message.
An IF node checks whether the AI Agent has generated a query. When:
Yes: the AI Agent passes the SQL query to the next MySQL node for execution.
No: You get a direct answer from the Agent without further action.
The workflow formats the results of the SQL query, ensuring they are convenient to read and easy to understand.
Once formatted, you get both the Agent answer and the query result in the chat window.
🌟 Example queries
Try these sample queries to see the schema-driven AI Agent in action:
Would you please list me all customers from Germany?
What are the music genres in the database?
What tables are available in the database?
Please describe the relationships between tables. - In this example, the AI Agent does not need to create the SQL query.
And if you prefer to keep the data private, you can manually execute the generated SQL query in your own environment using any database client or tool you trust 🗄️
💭 The AI Agent memory node does not store the actual data, as we run SQL queries outside the agent. It contains the database schema, user questions and the initial Agent reply. Actual SQL query results are passed to the chat window, but the values are not stored in the Agent memory.
Video Guide
I prepared a detailed guide explaining how to set up and implement this scenario, enabling you to chat with your documents stored in Supabase using n8n.
Youtube Link
Who is this for?
This workflow is ideal for researchers, analysts, business owners, or anyone managing a large collection of documents. It's particularly beneficial for those who need quick contextual information retrieval from text-heavy files stored in Supabase, without needing additional services like Google Drive.
What problem does this workflow solve?
Manually retrieving and analyzing specific information from large document repositories is time-consuming and inefficient. This workflow automates the process by vectorizing documents and enabling AI-powered interactions, making it easy to query and retrieve context-based information from uploaded files.
What this workflow does
The workflow integrates Supabase with an AI-powered chatbot to process, store, and query text and PDF files. The steps include:
Fetching and comparing files to avoid duplicate processing.
Handling file downloads and extracting content based on the file type.
Converting documents into vectorized data for contextual information retrieval.
Storing and querying vectorized data from a Supabase vector store.
File Extraction and Processing: Automates handling of multiple file formats (e.g., PDFs, text files), and extracts document content.
Vectorized Embeddings Creation: Generates embeddings for processed data to enable AI-driven interactions.
Dynamic Data Querying: Allows users to query their document repository conversationally using a chatbot.
Setup
N8N Workflow
Fetch File List from Supabase:
Use Supabase to retrieve the stored file list from a specified bucket.
Add logic to manage empty folder placeholders returned by Supabase, avoiding incorrect processing.
Compare and Filter Files:
Aggregate the files retrieved from storage and compare them to the existing list in the Supabase files table.
Exclude duplicates and skip placeholder files to ensure only unprocessed files are handled.
Handle File Downloads:
Download new files using detailed storage configurations for public/private access.
Adjust the storage settings and GET requests to match your Supabase setup.
File Type Processing:
Use a Switch node to target specific file types (e.g., PDFs or text files).
Employ relevant tools to process the content:
For PDFs, extract embedded content.
For text files, directly process the text data.
Content Chunking:
Break large text data into smaller chunks using the Text Splitter node.
Define the chunk size (default: 500 tokens) and overlap to retain necessary context across chunks (see the sketch after this setup list).
Vector Embedding Creation:
Generate vectorized embeddings for the processed content using OpenAI's embedding tools.
Ensure metadata, such as file ID, is included for easy data retrieval.
Store Vectorized Data:
Save the vectorized information into a dedicated Supabase vector store.
Use the default schema and table provided by Supabase for seamless setup.
AI Chatbot Integration:
Add a chatbot node to handle user input and retrieve relevant document chunks.
Use metadata like file ID for targeted queries, especially when multiple documents are involved.
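As an illustration of the chunking and storage steps above, here is a hedged sketch using the LangChain JS packages that roughly correspond to the n8n nodes involved; the chunk overlap, table name, query name, keys, and metadata fields are assumptions:

```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { createClient } from "@supabase/supabase-js";
import { OpenAIEmbeddings } from "@langchain/openai";
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";

// Split the extracted document text into overlapping chunks (values are assumptions).
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 500, chunkOverlap: 50 });
const chunks = await splitter.splitText("Full text extracted from the PDF or text file...");

// Embed the chunks and store them in the Supabase vector store, keeping the
// file ID as metadata so later queries can target a specific document.
const client = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);
await SupabaseVectorStore.fromTexts(
  chunks,
  chunks.map(() => ({ file_id: "example-file-id" })), // placeholder metadata
  new OpenAIEmbeddings(),
  { client, tableName: "documents", queryName: "match_documents" }
);
```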
Testing
Upload sample files to your Supabase bucket.
Verify if files are processed and stored successfully in the vector store.
Ask simple conversational questions about your documents using the chatbot (e.g., "What does Chapter 1 say about the Roman Empire?").
Test for accuracy and contextual relevance of retrieved results.
Video Guide
I prepared a detailed guide showing the whole process of building a resume analyzer.
Who is this for?
This workflow is ideal for developers, data analysts, and business owners who want to enable conversational interactions with their database. It’s particularly useful for cases where users need to extract, analyze, or aggregate data without writing SQL queries manually.
What problem does this workflow solve?
Accessing and analyzing database data often requires SQL expertise or dedicated reports, which can be time-consuming. This workflow empowers users to interact with a database conversationally through an AI-powered agent. It dynamically generates SQL queries based on user requests, streamlining data retrieval and analysis.
What this workflow does
This workflow integrates OpenAI with a Supabase database, enabling users to interact with their data via an AI agent. The agent can:
Retrieve records from the database.
Extract and analyze JSON data stored in tables.
Provide summaries, aggregations, or specific data points based on user queries.
Dynamic SQL Querying: The agent uses user prompts to create and execute SQL queries on the database.
Understand JSON Structure: The workflow identifies JSON schema from sample records, enabling the agent to parse and analyze JSON fields effectively.
Database Schema Exploration: It provides the agent with tools to retrieve table structures, column details, and relationships for precise query generation.
Setup
Preparation
Create Accounts:
N8N: For workflow automation.
Supabase: For database hosting and management.
OpenAI: For building the conversational AI agent.
Configure Database Connection:
Set up a PostgreSQL database in Supabase.
Use appropriate credentials (username, password, host, and database name) in your workflow.
N8N Workflow
AI agent with tools:
Code Tool:
Execute SQL queries based on user input.
Database Schema Tool:
Retrieve a list of all tables in the database.
Use a predefined SQL query to fetch table definitions, including column names, types, and references.
Table Definition:
Retrieve a list of columns with types for one table.
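For illustration, a hedged sketch of the kind of query the Table Definition tool might run against the Supabase Postgres database; the actual predefined query in the workflow may differ, and the connection string and table name are placeholders:

```typescript
import { Client } from "pg";

// List the columns and types of one table from information_schema.
const sql = `
  select column_name, data_type, is_nullable
  from information_schema.columns
  where table_schema = 'public' and table_name = $1
  order by ordinal_position;
`;

const client = new Client({ connectionString: process.env.DATABASE_URL }); // assumed env var
await client.connect();
const { rows } = await client.query(sql, ["invoices"]); // table name is an example
console.log(rows);
await client.end();
```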
This workflow employs OpenAI's language models and SerpAPI to create a responsive, intelligent conversational agent. It comes equipped with manual chat triggers and memory buffer capabilities to ensure seamless interactions.
To use this template, you need to be on n8n version 1.50.0 or later.
This workflow integrates both web scraping and NLP functionalities. It uses HTML parsing to extract links, HTTP requests to fetch essay content, and AI-based summarization using GPT-4o. It's an excellent example of an end-to-end automated task that is not only efficient but also provides real value by summarizing valuable content.
Note that to use this template, you need to be on n8n version 1.50.0 or later.
⚙️🛠️🚀🤖🦾
This template is a PoC of a ReAct AI Agent capable of fetching random pages (not only Wikipedia or Google search results).
On the top part there's a manual chat node connected to a LangChain ReAct Agent. The agent has access to a workflow tool for getting page content.
The page content extraction starts with converting query parameters into a JSON object. There are 3 pre-defined parameters:
url – the address of the page to fetch
method = full / simplified
maxlimit – the maximum length of the final page. For longer pages, an error message is returned to the agent
Page content fetching is a multistep process:
An HTTP Request node tries to get the page content.
If the page content was successfully retrieved, a series of post-processing steps begins:
Extract the HTML BODY content
Remove all unnecessary tags to reduce the page size
Further eliminate external URLs and IMG src values (based on the method query parameter)
The remaining HTML is converted to Markdown, thus reducing the page length even more while preserving the basic page structure
The remaining content is sent back to the Agent if it's not too long (maxlimit = 70000 by default, see the CONFIG node).
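A hedged sketch of this post-processing chain outside n8n, using the turndown package for the HTML-to-Markdown step; the workflow itself does this with n8n nodes, and the regexes here are a simplification:

```typescript
import TurndownService from "turndown";

// Keep the BODY, drop script/style tags, convert to Markdown, then enforce maxlimit.
function pageToMarkdown(html: string, maxlimit = 70_000): string {
  const body = html.match(/<body[^>]*>([\s\S]*)<\/body>/i)?.[1] ?? html;
  const cleaned = body.replace(/<(script|style)[\s\S]*?<\/\1>/gi, "");
  const markdown = new TurndownService().turndown(cleaned);
  if (markdown.length > maxlimit) {
    return `ERROR: page is too long (${markdown.length} > ${maxlimit})`;
  }
  return markdown;
}
```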
NB:
You can isolate the HTTP Request part into a separate workflow.
Check the Workflow Tool description, it guides the agent to provide a query string with several parameters instead of a JSON object.
Please reach out to Eduard if you need further assistance with your n8n workflows and automations!
Note that to use this template, you need to be on n8n version 1.19.4 or later.
Enrich your company lists with OpenAI GPT-3 ↓
You’ll get valuable information such as:
Market (B2B or B2C)
Industry
Target Audience
Value Proposition
This will help you to:
add more personalization to your outreach
make informed decisions about which accounts to target
I've made the process easy with an n8n workflow.
Here is what it does:
Retrieve website URLs from Google Sheets
Extract the content for each website
Analyze it with GPT-3
Update Google Sheets with GPT-3 data
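To illustrate the analysis step, here is a hedged sketch of the kind of prompt and API call involved, using the current Chat Completions endpoint as a stand-in for the original GPT-3 call; the model choice and prompt wording are assumptions:

```typescript
// Ask the model for the four enrichment fields based on the extracted website text.
async function enrichCompany(websiteText: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // stand-in model
      messages: [
        {
          role: "user",
          content:
            "From the website text below, return the Market (B2B or B2C), Industry, " +
            "Target Audience and Value Proposition as JSON.\n\n" + websiteText,
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```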
The workflow starts by listening for messages from Telegram users. The message is then processed, and based on its content, different actions are taken. If it's a regular chat message, the workflow generates a response using the OpenAI API and sends it back to the user. If it's a command to create an image, the workflow generates an image using the OpenAI API and sends the image to the user. If the command is unsupported, an error message is sent. Throughout the workflow, there are additional nodes for displaying notes and simulating typing actions.
The workflow first populates a Pinecone index with vectors from a Bitcoin whitepaper. Then, it waits for a manual chat message. When received, the chat message is turned into a vector and compared to the vectors in Pinecone. The most similar vectors are retrieved and passed to OpenAI for generating a chat response.
Note that to use this template, you need to be on n8n version 1.19.4 or later.
A robust n8n workflow designed to enhance Telegram bot functionality for user management and broadcasting. It facilitates automatic support ticket creation, efficient user data storage in Redis, and a sophisticated system for message forwarding and broadcasting.
How It Works
Telegram Bot Setup: Initiate the workflow with a Telegram bot configured for handling different chat types (private, supergroup, channel).
User Data Management: Formats and updates user data, storing it in a Redis database for efficient retrieval and management.
Support Ticket Creation: Automatically generates chat tickets for user messages and saves the corresponding topic IDs in Redis.
Message Forwarding: Forwards new messages to the appropriate chat thread, or creates a new thread if none exists.
Support Forum Management: Handles messages within a support forum, differentiating between various chat types and user statuses.
Broadcasting System: Implements a broadcasting mechanism that sends channel posts to all previous bot users, with a system to filter out blocked users.
Blocked User Management: Identifies and manages blocked users, preventing them from receiving broadcasted messages.
Versatile Channel Handling: Ensures that messages from verified channels are properly managed and broadcasted to relevant users.
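As an illustration of the ticket creation and Redis storage steps, here is a hedged sketch using the Telegram Bot API's createForumTopic method and node-redis; the chat IDs, key names, and Redis layout are assumptions, not the workflow's exact values:

```typescript
import { createClient } from "redis";

const redis = createClient({ url: "redis://localhost:6379" });
await redis.connect();

const TOKEN = process.env.TELEGRAM_BOT_TOKEN;
const SUPPORT_GROUP_ID = -1001234567890; // assumed supergroup with topics enabled

// Return the existing topic (ticket) for a user, or create one and cache its ID.
async function getOrCreateTopic(userId: number, userName: string): Promise<number> {
  const cached = await redis.get(`topic:${userId}`);
  if (cached) return Number(cached);

  const res = await fetch(`https://api.telegram.org/bot${TOKEN}/createForumTopic`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: SUPPORT_GROUP_ID, name: userName }),
  });
  const topicId = (await res.json()).result.message_thread_id;
  await redis.set(`topic:${userId}`, String(topicId));
  return topicId;
}
```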
Set Up Steps
Estimated Time: Around 30 minutes.
Requirements: A Telegram bot, a Redis database, and Telegram group/channel IDs are necessary.
Configuration: Input the Telegram bot token and relevant group/channel IDs. Configure message handling and user data processing according to your needs.
Detailed Instructions: Sticky notes within the workflow provide extensive setup information and guidance.
Live Demo Workflow
Bot: Telegram Bot Link (Click here)
Support Group: Telegram Group Link (Click here)
Broadcasting Channel: Telegram Channel Link (Click here)
Keywords: n8n workflow, Telegram bot, chat ticket system, Redis database, message broadcasting, user data management, support forum automation
This n8n workflow template lets teams easily generate a custom AI chat assistant based on the schema of any Notion database. Simply provide the Notion database URL, and the workflow downloads the schema and creates a tailored AI assistant designed to interact with that specific database structure.
Set Up
Watch this quick set up video 👇
Key Features
Instant Assistant Generation: Enter a Notion database URL, and the workflow produces an AI assistant configured to the database schema.
Advanced Querying: The assistant performs flexible queries, filtering records by multiple fields (e.g., tags, names). It can also search inside Notion pages to pull relevant content from specific blocks.
Schema Awareness: Understands and interacts with various Notion column types like text, dates, and tags for accurate responses.
Reference Links: Each query returns direct links to the exact Notion pages that inform the assistant's response, promoting transparency and easy access.
Self-Validation: The workflow has logic to check the generated assistant, and if any errors are detected, it reruns the agent to fix them.
Ideal for
Product Managers: Easily access and query product data across Notion databases.
Support Teams: Quickly search through knowledge bases for precise information to enhance support accuracy.
Operations Teams: Streamline access to HR, finance, or logistics data for fast, efficient retrieval.
Data Teams: Automate large dataset queries across multiple properties and records.
How It Works
This AI assistant leverages two HTTP request tools—one for querying the Notion database and another for retrieving data within individual pages. It’s powered by the Anthropic LLM (or can be swapped for GPT-4) and always provides reference links for added transparency.
Who is this for
This workflow is perfect for teams and individuals who manage extensive data in Notion and need a quick, AI-powered way to interact with their databases. If you're looking to streamline your knowledge management, automate searches, and get faster insights from your Notion databases, this workflow is for you. It’s ideal for support teams, project managers, or anyone who needs to query specific data across multiple records or within individual pages of their Notion setup.
Check out the Notion template this Assistant is set up to use: https://www.notion.so/templates/knowledge-base-ai-assistant-with-n8n
How it works
The Notion Database Assistant uses an AI Agent built with Retrieval-Augmented Generation (RAG) to query this Knowledge Base style Notion database. The assistant can search across multiple properties, like tags or questions, and retrieves content from inside individual Notion pages for additional context.
Key features include:
Querying the database with flexible filters.
Searching within individual Notion pages and extracting relevant blocks.
Providing a reference link to the exact Notion pages used to inform its responses, ensuring transparency and easy verification.
This assistant uses two HTTP request tools—one for querying the Notion database and another for pulling data from within specific pages. It streamlines knowledge retrieval, offering a conversational, AI-driven way to interact with large datasets.
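For reference, a hedged sketch of what the database-query tool's HTTP request might look like against the Notion API; the database ID, property name, and filter shape are placeholders, not the assistant's exact request:

```typescript
// Query a Notion database with a simple property filter.
async function queryNotionDatabase(databaseId: string, tag: string) {
  const res = await fetch(`https://api.notion.com/v1/databases/${databaseId}/query`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.NOTION_API_KEY}`,
      "Notion-Version": "2022-06-28",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      filter: { property: "Tags", multi_select: { contains: tag } }, // assumed property
    }),
  });
  return (await res.json()).results;
}
```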
Set up
Find basic set up instructions inside the workflow itself or watch a quickstart video 👇
Note: This workflow uses the internal API which is not official. This workflow might break in the future.
The workflow executes every night at 23:59. You can configure a different time in the Cron node.
Configure the GitHub nodes with your username, repo name, and the file path.
In the HTTP Request nodes (making a request to localhost:5678), create Basic Auth credentials with your n8n instance username and password.
This workflow is an experiment to build HTML pages from user input using the new Structured Output feature from OpenAI.
How it works:
Users add what they want to build as a query parameter
The OpenAI node generates an interface following the structured output schema defined in the request body (see the sketch after this list)
The JSON output is then converted to HTML along with a title
The HTML is encapsulated in an HTML node (where the Tailwind CSS script is added)
The HTML is rendered to the user via the Webhook response.
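As an illustration, here is a hedged sketch of what the Structured Output request might look like; the schema is a simplified stand-in for the one defined in the workflow's HTTP Request body:

```typescript
// A minimal Structured Output request: the model must return JSON matching the schema.
const body = {
  model: "gpt-4o-2024-08-06",
  messages: [{ role: "user", content: "a signup form" }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "interface",
      strict: true,
      schema: {
        type: "object",
        properties: {
          title: { type: "string" },
          elements: {
            type: "array",
            items: {
              type: "object",
              properties: {
                tag: { type: "string" },
                text: { type: "string" },
              },
              required: ["tag", "text"],
              additionalProperties: false,
            },
          },
        },
        required: ["title", "elements"],
        additionalProperties: false,
      },
    },
  },
};

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(body),
});
console.log((await res.json()).choices[0].message.content); // JSON matching the schema
```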
Set up steps
Create an OpenAI API Key
Create the OpenAI credentials
Use the credentials for both the HTTP Request node (as a Predefined Credential type) and the OpenAI node
Activate your workflow
Once active, go to the production URL and add what you'd like to build as the parameter "query"
Example: https://production_url.com?query=a%20signup%20form
Example of generated page