This n8n workflow demonstrates how you can summarise and automate post-meeting actions from video transcripts fed into an AI Agent.
Save time between meetings by letting AI handle the chores of organising follow-up meetings and invites.
How it works
This workflow scans the calendar for client or team meetings which were held online.
Attempts are made to fetch any recorded transcripts, which are then sent to the AI agent.
The AI agent summarises and identifies if any follow-on meetings are required.
If so, the Agent will use its Calendar Tool to create the event with the time, date and place of the next meeting, as well as add known attendees.
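Under the hood, the Calendar Tool boils down to a Google Calendar events.insert call. Here is a minimal sketch of that request, assuming an OAuth access token is already available; the token, times and attendee address are placeholders, and in the workflow itself n8n's Google Calendar node handles auth and field mapping for you.

```typescript
// Hypothetical sketch of the events.insert call behind the Calendar Tool.
// All values below are placeholders standing in for the agent's output.
const ACCESS_TOKEN = process.env.GOOGLE_ACCESS_TOKEN; // assumption: OAuth token already obtained

async function createFollowUpMeeting() {
  const event = {
    summary: "Follow-up: project review",          // summary produced by the AI agent
    start: { dateTime: "2024-07-01T10:00:00Z" },   // time and date identified in the transcript
    end: { dateTime: "2024-07-01T10:30:00Z" },
    attendees: [{ email: "client@example.com" }],  // known attendees
  };

  // sendUpdates=all asks Google Calendar to email the invites
  const res = await fetch(
    "https://www.googleapis.com/calendar/v3/calendars/primary/events?sendUpdates=all",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${ACCESS_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(event),
    }
  );
  if (!res.ok) throw new Error(`Calendar API error: ${res.status}`);
  return res.json();
}
```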
Requirements
Google Calendar and the ability to fetch Meeting Transcripts (There is a special OAuth permission for this action!)
OpenAI account for access to the LLM.
Customising the workflow
This example only books follow-on meetings but could be extended to generate reports or send emails.
This n8n workflow automates the process of parsing and extracting data from PDF invoices. With this workflow, accounts and finance people can realise huge time and cost savings in their busy schedules.
Read the Blog: https://blog.n8n.io/how-to-extract-data-from-pdf-to-excel-spreadsheet-advance-parsing-with-n8n-io-and-llamaparse/
How it works
This workflow will watch an email inbox for incoming invoices from suppliers
It will download the attached PDFs and process them through a third-party service called LlamaParse.
LlamaParse is specifically designed to handle and convert complex PDF data structures such as tables to markdown.
Markdown is easier for LLMs to process, so the data extraction by our AI agent is more accurate and reliable.
The workflow exports the extracted data from the AI agent to Google Sheets once the job is complete.
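For reference, the LlamaParse round trip made by the workflow's HTTP Request nodes looks roughly like the sketch below: upload the PDF, poll the job, then fetch the markdown result. The endpoint paths and status values follow the LlamaCloud API as described in the linked blog post and may change, so treat them as assumptions.

```typescript
// Hedged sketch of the LlamaParse upload / poll / fetch-result cycle.
const API_KEY = process.env.LLAMA_CLOUD_API_KEY;
const BASE = "https://api.cloud.llamaindex.ai/api/parsing";
const authHeader = { Authorization: `Bearer ${API_KEY}` };

async function parseInvoice(pdf: Blob): Promise<string> {
  // 1. Upload the PDF attachment to start a parsing job
  const form = new FormData();
  form.append("file", pdf, "invoice.pdf");
  const { id } = await fetch(`${BASE}/upload`, {
    method: "POST",
    headers: authHeader,
    body: form,
  }).then((r) => r.json());

  // 2. Poll until the job leaves the PENDING state
  let status = "PENDING";
  while (status === "PENDING") {
    await new Promise((r) => setTimeout(r, 2000));
    ({ status } = await fetch(`${BASE}/job/${id}`, { headers: authHeader }).then((r) => r.json()));
  }
  if (status !== "SUCCESS") throw new Error(`LlamaParse job failed: ${status}`);

  // 3. Fetch the markdown result for the AI agent to extract fields from
  const { markdown } = await fetch(`${BASE}/job/${id}/result/markdown`, {
    headers: authHeader,
  }).then((r) => r.json());
  return markdown;
}
```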
Requirements
The criteria of the email trigger must be configured to capture emails with attachments.
The Gmail label "invoice synced" must be created before using this workflow.
A LlamaIndex.ai account to use the LlamaParse service.
An OpenAI account to use GPT for AI work.
Google Sheets to save the output of the data extraction process, although this can be replaced to suit your needs.
Customizing this workflow
This workflow uses Gmail and Google Sheets but these can easily be swapped out for equivalent services such as Outlook and Excel.
Not using Excel? Simply redirect the output of the AI agent to your accounting software of choice.
This workflow will check a mailbox for new emails. If the subject contains "Expenses" or "Receipt", it will send the attachment to Mindee for processing, then update a Google Sheet with the extracted values.
To use this workflow you will need to set the Email Read node to use your mailbox's credentials and configure the Mindee and Google Sheets nodes to use your credentials.
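For context, the Mindee call made per attachment looks roughly like the sketch below. The product path and version (expense_receipts/v5) and the response field names are assumptions; check your Mindee account and docs for the exact endpoint your node is configured against.

```typescript
// Hedged sketch of the Mindee receipt-prediction request behind the Mindee node.
const MINDEE_KEY = process.env.MINDEE_API_KEY;

async function extractReceipt(attachment: Blob) {
  const form = new FormData();
  form.append("document", attachment, "receipt.pdf");

  const res = await fetch(
    "https://api.mindee.net/v1/products/mindee/expense_receipts/v5/predict", // assumed product path
    { method: "POST", headers: { Authorization: `Token ${MINDEE_KEY}` }, body: form }
  );
  const { document } = await res.json();

  // Pull out the fields the Google Sheets node appends as a new row
  // (field names are assumptions based on Mindee's receipt schema)
  const p = document.inference.prediction;
  return {
    date: p.date?.value,
    total: p.total_amount?.value,
    supplier: p.supplier_name?.value,
  };
}
```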
This n8n workflow demonstrates how to manage your Qdrant vector store when there is a need to keep it in sync with local files. It covers creating, updating and deleting vector store records, ensuring our chatbot assistant is never outdated or misleading.
Disclaimer
This workflow depends on local files accessed through the local filesystem and so will only work on a self-hosted version of n8n at this time. It is possible to amend this workflow to work on n8n cloud by replacing the local file trigger and read file nodes.
How it works
A local directory where bank statements are downloaded to is monitored via a local file trigger. The trigger watches for the file create, file changed and file deleted events.
When a file is created, its contents are uploaded to the vector store.
When a file is updated, its previous records are replaced (see the delete-then-upsert sketch below).
When the file is deleted, the corresponding records are also removed from the vector store.
A simple Question and Answer chatbot is set up to answer any questions about the bank statements in the system.
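Here is a minimal sketch of that delete-then-upsert step, assuming each point stores its source path under a metadata.file_path payload key; the collection and key names are placeholders for whatever your workflow uses.

```typescript
// Replace all vector store records derived from one file.
import { QdrantClient } from "@qdrant/js-client-rest";

const qdrant = new QdrantClient({ url: "http://localhost:6333" });

async function replaceFileRecords(
  filePath: string,
  points: { id: string; vector: number[]; payload: object }[]
) {
  // Remove every record previously created from this file...
  await qdrant.delete("bank_statements", {
    filter: { must: [{ key: "metadata.file_path", match: { value: filePath } }] },
  });
  // ...then upsert the freshly embedded chunks.
  // A file-deleted event simply calls this with an empty points array.
  if (points.length > 0) {
    await qdrant.upsert("bank_statements", { points });
  }
}
```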
Requirements
A self-hosted version of n8n. Some of the nodes used in this workflow only work with the local filesystem.
Qdrant instance to store the records.
Customising the workflow
This workflow can also work with remote data. Try integrating accounting or CRM software to build a managed system for payroll, invoices and more.
Want to go fully local?
A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/189F1fNOiw6naNSlSwnyLVEm_Ho_IFfdM/view?usp=sharing
This n8n workflow demonstrates an approach to parsing bank statement PDFs with multimodal LLMs as an alternative to traditional OCR. This allows for much more accurate data extraction from the document especially when it comes to tables and complex layouts.
Multimodal parsing is better than traditional OCR because:
It reduces complexity and overhead by avoiding the need to preprocess the document into text format such as markdown before passing to the LLM.
It handles non-standard PDF formats which may produce garbled output via traditional OCR text conversion.
It's orders of magnitude cheaper than premium OCR models that still require post-processing cleanup and formatting. LLMs can format to any schema or language you desire!
How it works
You can use the example bank statement created specifically for this workflow here: https://drive.google.com/file/d/1wS9U7MQDthj57CvEcqG_Llkr-ek6RqGA/view?usp=sharing
A PDF bank statement is imported via Google Drive. For this demo, I've created a mock bank statement which includes complex table layouts of 5 columns. Typically, OCR will be unable to align the columns correctly and mistake some deposits for withdrawals.
Because multimodal LLMs do not accept PDFs directly, we'll have to convert the PDF to a series of images. We can achieve this by using a tool such as Stirling PDF. Stirling PDF is self-hostable, which is handy for sensitive data such as bank statements.
Stirling PDF will return our PDF as a series of JPGs (one for each page) in a zipped file. We can use n8n's decompress node to extract the images and ensure they are ordered by using the Sort node.
Next, we'll resize each page using the Edit Image node to ensure the right balance between resolution limits and processing speed.
Each resized page image is then passed into the Basic LLM node which will use our multimodal LLM of choice - Gemini 1.5 Pro. In the LLM node's options, we'll add a "user message" of type binary (data) which is how we add our image data as an input.
Our prompt will instruct the multimodal LLM to transcribe each page to markdown. Note that you do not need to do this - you can just ask for the data points to extract directly! Our goal for this template is to demonstrate the LLM's ability to accurately read the page.
Finally, with our markdown version of all pages, we can pass this to another LLM node to extract required data such as deposit line items.
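If you want to reproduce the per-page transcription step outside of n8n, here is a minimal sketch using Google's @google/generative-ai SDK; the Basic LLM node performs the equivalent call with the binary user message described above.

```typescript
// Transcribe one resized page image to markdown with Gemini 1.5 Pro.
import { GoogleGenerativeAI } from "@google/generative-ai";
import { readFileSync } from "fs";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro" });

async function transcribePage(jpgPath: string): Promise<string> {
  const result = await model.generateContent([
    "Transcribe this bank statement page to markdown, preserving all tables.",
    {
      inlineData: {
        data: readFileSync(jpgPath).toString("base64"), // the resized page image
        mimeType: "image/jpeg",
      },
    },
  ]);
  return result.response.text(); // markdown for the downstream extraction LLM
}
```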
Requirements
Google Gemini API for Multimodal LLM.
Google Drive access for document storage.
Stirling PDF instance for PDF to image conversion.
Customising the workflow
At time of writing, Gemini 1.5 Pro is the most accurate at text document parsing with a relatively low cost. If you are not using Google Gemini, however, you can switch to other multimodal LLMs such as OpenAI GPT or Anthropic Claude.
If you don't need the markdown, simply asking what to extract directly in the LLM's prompt is also acceptable and would save a few extra steps.
Not parsing any bank statements any time soon? This template also works for invoices, inventory lists, contracts, legal documents, etc.
Purpose of workflow:
The purpose of this workflow is to create an AI-powered technical analysis agent capable of analyzing financial charts, specifically for cryptocurrencies like Bitcoin or equity stocks. This agent provides users with insights into market trends, price movements, and technical indicators to assist in making informed trading decisions.
How it works:
The agent uses the Sonnet model from Anthropic as its LLM.
It integrates with TradingView charts through the chart-img.com API to generate and download financial charts.
The agent analyzes the chart using AI vision capabilities, examining candlestick patterns, pricing trends, and technical indicators like Relative Strength Index (RSI) and Directional Movement Index (DMI).
It provides a detailed analysis of the chart, including support and resistance levels, market trends, and volume analysis.
The agent generates a visual representation of the analysis, displaying candlesticks, volume, RSI, and DMI.
Step by step setup:
Create a free API key from chart-img.com
Set the exchange for the ticker (defaults to NYSE)
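To illustrate the chart retrieval step, here is a rough sketch of fetching a TradingView snapshot from chart-img.com for the vision model to analyze. The endpoint, parameter names and auth header are assumptions based on the v1 API; consult the chart-img.com docs for the current interface.

```typescript
// Hedged sketch of downloading a chart image for the agent's vision analysis.
const CHART_IMG_KEY = process.env.CHART_IMG_API_KEY;

async function downloadChart(symbol = "NYSE:AAPL"): Promise<Buffer> {
  const url = new URL("https://api.chart-img.com/v1/tradingview/advanced-chart");
  url.searchParams.set("symbol", symbol);   // exchange prefix defaults to NYSE in this workflow
  url.searchParams.set("interval", "1D");
  url.searchParams.set("studies", "RSI");   // overlay an indicator for the analysis

  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${CHART_IMG_KEY}` }, // assumed auth scheme
  });
  if (!res.ok) throw new Error(`chart-img error: ${res.status}`);
  return Buffer.from(await res.arrayBuffer()); // PNG image for the vision model
}
```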
Tutorial
Task:
Create a simple API endpoint using the Webhook and Respond to Webhook nodes
Why:
You can prototype or replace a backend process with a single workflow
Main use cases:
Replace backend logic with a workflow
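Once built, the endpoint behaves like any other API. A quick way to exercise it is sketched below; the URL path is hypothetical and will be whatever you configure on the Webhook node, and the Respond to Webhook node decides what comes back.

```typescript
// Hypothetical test call to the finished endpoint.
const res = await fetch("https://your-n8n-instance/webhook/my-endpoint", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "n8n" }),
});
console.log(res.status, await res.json()); // body set by the Respond to Webhook node
```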
Want to learn the basics of n8n? Our comprehensive quickstart tutorial is here to guide you through the basics of n8n, step by step.
Designed with beginners in mind, this tutorial provides a hands-on approach to learning n8n's basic functionalities.
You can still use an app in a workflow even if we don't have a node for it, or the existing node lacks the operation you need. With the HTTP Request node, it is possible to call any API endpoint and use the incoming data in your workflow.
Main use cases:
Connect with apps and services that n8n doesn’t have integration with
Web scraping
How it works
This workflow can be divided into three branches, each serving a distinct purpose:
1. Splitting into Items (HTTP Request - Get Mock Albums):
The workflow initiates with a manual trigger (On clicking 'execute').
It performs an HTTP request to retrieve mock albums data from "https://jsonplaceholder.typicode.com/albums."
The obtained data is split into items using the Item Lists node, facilitating easier management.
2. Data Scraping (HTTP Request - Get Wikipedia Page and HTML Extract):
Another branch of the workflow involves fetching a random Wikipedia page using an HTTP request to "https://en.wikipedia.org/wiki/Special:Random."
The HTML Extract node extracts the article title from the fetched Wikipedia page.
3. Handling Pagination (The final branch deals with handling pagination for a GitHub API request):
It sends an HTTP request to "https://api.github.com/users/that-one-tom/starred," with parameters like the page number and items per page dynamically set by the Set node.
The workflow uses conditions (If - Are we finished?) to check if there are more pages to retrieve and increments the page number accordingly (Set - Increment Page).
This process repeats until all pages are fetched, allowing for comprehensive data retrieval.
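For comparison, the pagination loop in branch 3 boils down to the following sketch: request one page at a time and stop once GitHub returns fewer items than a full page.

```typescript
// Plain-code equivalent of the Set / If / increment-page loop.
async function fetchAllStarred(user = "that-one-tom") {
  const perPage = 30;
  const all: unknown[] = [];
  for (let page = 1; ; page++) {
    const res = await fetch(
      `https://api.github.com/users/${user}/starred?page=${page}&per_page=${perPage}`
    );
    const items = (await res.json()) as unknown[];
    all.push(...items);
    if (items.length < perPage) break; // the "Are we finished?" check
  }
  return all;
}
```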
Task:
Merge two datasets into one based on matching rules
Why:
A powerful capability of n8n is to easily branch out the workflow in order to process different datasets. Even more powerful is the ability to join them back together with SQL-like joining logic.
Main use cases:
Appending data sets
Keep only new items
Keep only existing items
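To make the joining logic concrete, here is an illustrative sketch of the three use cases above expressed in plain code. n8n's Merge node implements these modes for you; the "email" key used here is made up.

```typescript
// Illustrative SQL-like joins over a hypothetical "email" key.
type Row = Record<string, unknown> & { email: string };

// "Appending data sets": simple concatenation of both inputs
const append = (a: Row[], b: Row[]): Row[] => [...a, ...b];

// "Keep only new items": rows in A with no matching key in B (anti join)
function keepOnlyNew(a: Row[], b: Row[]): Row[] {
  const keys = new Set(b.map((r) => r.email));
  return a.filter((r) => !keys.has(r.email));
}

// "Keep only existing items": rows whose key appears in both inputs,
// with fields from B merged over A (inner join)
function keepOnlyExisting(a: Row[], b: Row[]): Row[] {
  const byKey = new Map(b.map((r) => [r.email, r] as const));
  return a
    .filter((r) => byKey.has(r.email))
    .map((r) => ({ ...r, ...byKey.get(r.email)! }));
}
```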
This workflow will back up your workflows to GitHub. It uses the public API to export all of the workflow data using the n8n node.
It then loops over the data and checks GitHub to see if a file exists that uses the workflow name. Once checked, it will update the file on GitHub if it exists, create a new file if it doesn't exist, and ignore the file if it is unchanged.
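That check-then-create-or-update step maps onto GitHub's contents API: a GET tells us whether the file exists and returns the blob sha that a PUT needs in order to update rather than create. A minimal sketch, taking the config options below as parameters:

```typescript
// Check for an existing file, then create or update it via the contents API.
const GH_TOKEN = process.env.GITHUB_TOKEN;

async function backupWorkflow(owner: string, repo: string, path: string, workflowJson: string) {
  const api = `https://api.github.com/repos/${owner}/${repo}/contents/${path}`;
  const headers = { Authorization: `Bearer ${GH_TOKEN}`, Accept: "application/vnd.github+json" };

  // Does a file named after this workflow already exist?
  const existing = await fetch(api, { headers });
  const sha = existing.ok ? (await existing.json()).sha : undefined;
  // (The workflow also diffs the content first and skips the PUT when unchanged.)

  await fetch(api, {
    method: "PUT",
    headers,
    body: JSON.stringify({
      message: sha ? `Update ${path}` : `Create ${path}`,
      content: Buffer.from(workflowJson).toString("base64"), // contents API expects base64
      ...(sha ? { sha } : {}), // including the sha turns the PUT into an update
    }),
  });
}
```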
Config Options
repo_owner - GitHub owner
repo_name - GitHub repository name
repo_path - Path within the GitHub repository
> This workflow has been updated to use the n8n node and the Code node, so it requires at least version 0.198.0 of n8n
This n8n template builds a simple WhatsApp chatbot acting as a Sales Agent. The Agent is backed by a product catalog vector store to better answer users' questions.
This template is intended to help introduce n8n users interested in building with WhatsApp.
How it works
This template is in 2 parts: creating the product catalog vector store and building the WhatsApp AI chatbot.
A product brochure is imported via HTTP request node and its text contents extracted.
The text contents are then uploaded to the in-memory vector store to build a knowledge base for the chatbot.
A WhatsApp trigger is used to capture messages from customers where non-text messages are filtered out.
The customer's message is sent to the AI Agent which queries the product catalogue using the vector store tool.
The Agent's response is sent back to the user via the WhatsApp node.
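For reference, the reply step performed by the WhatsApp node corresponds to a WhatsApp Cloud API call like the sketch below; the phone-number ID and Graph API version are placeholders for your own values.

```typescript
// Send the agent's answer back to the customer via the WhatsApp Cloud API.
const WA_TOKEN = process.env.WHATSAPP_TOKEN;
const PHONE_NUMBER_ID = "123456789"; // placeholder: your WhatsApp Business phone-number ID

async function sendReply(to: string, text: string) {
  await fetch(`https://graph.facebook.com/v19.0/${PHONE_NUMBER_ID}/messages`, {
    method: "POST",
    headers: { Authorization: `Bearer ${WA_TOKEN}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      messaging_product: "whatsapp",
      to,                   // customer's number from the WhatsApp trigger
      type: "text",
      text: { body: text }, // the AI agent's response
    }),
  });
}
```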
How to use
Once you've set up and configured your WhatsApp account and credentials:
First, populate the vector store by clicking the "Test Workflow" button.
Next, activate the workflow to enable the WhatsApp chatbot.
Message your designated WhatsApp number and you should receive a message from the AI sales agent.
Tweak the data source and behaviour as required.
Requirements
WhatsApp Business Account
OpenAI for LLM
Customising this workflow
Upgrade the vector store to Qdrant for persistence and production use-cases.
Handle different WhatsApp message types for a richer and more engaging experience for customers.