This n8n workflow demonstrates how you can summarise and automate post-meeting actions from video transcripts fed into an AI Agent.
Save time between meetings by letting AI handle the chores of organising follow-up meetings and invites.
How it works
This workflow scans the calendar for client or team meetings which were held online. It then attempts to fetch any recorded transcripts, which are sent to the AI agent.
The AI agent summarises and identifies if any follow-on meetings are required.
If one is required, the Agent will use its Calendar Tool to create the event with the time, date and place of the next meeting, as well as add known attendees.
Requirements
Google Calendar and the ability to fetch Meeting Transcripts (There is a special OAuth permission for this action!)
OpenAI account for access to the LLM.
Customising the workflow
This example only books follow-on meetings but could be extended to generate reports or send emails.
This n8n workflow automates the process of parsing and extracting data from PDF invoices. With this workflow, accounts and finance people can realise huge time and cost savings in their busy schedules.
Read the Blog: https://blog.n8n.io/how-to-extract-data-from-pdf-to-excel-spreadsheet-advance-parsing-with-n8n-io-and-llamaparse/
How it works
This workflow watches an email inbox for incoming invoices from suppliers.
It downloads the attached PDFs and processes them through a third-party service called LlamaParse.
LlamaParse is specifically designed to handle and convert complex PDF data structures such as tables to markdown.
Markdown is easy for LLMs to process, so the data extraction by our AI agent is more accurate and reliable.
The workflow exports the extracted data from the AI agent to Google Sheets once the job completes.
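To see why markdown output makes downstream extraction easier, here is a minimal sketch of pulling line items out of a markdown table, of the kind LlamaParse might produce for an invoice. In the workflow itself this step is done by the AI agent; the deterministic parser and sample table below are purely illustrative:

```python
def parse_markdown_table(md: str) -> list[dict]:
    """Parse a simple pipe-delimited markdown table into row dicts."""
    lines = [l.strip() for l in md.strip().splitlines() if l.strip()]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # skip the |---|---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

# Illustrative markdown as a parsing service might emit for an invoice table.
invoice_md = """
| Item     | Qty | Unit Price |
|----------|-----|------------|
| Widget A | 2   | 10.00      |
| Widget B | 1   | 25.50      |
"""

rows = parse_markdown_table(invoice_md)
```

The point is that once a complex PDF table has been flattened to markdown, even trivial tooling (or an LLM prompt) can recover its structure reliably.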
Requirements
The criteria of the email trigger must be configured to capture emails with attachments.
The Gmail label "invoice synced" must be created before using this workflow.
A LlamaIndex.ai account to use the LlamaParse service.
An OpenAI account to use GPT for AI work.
Google Sheets to save the output of the data extraction process, although this can be replaced with whatever suits your needs.
Customizing this workflow
This workflow uses Gmail and Google Sheets but these can easily be swapped out for equivalent services such as Outlook and Excel.
Not using Excel? Simply redirect the output of the AI agent to your accounting software of choice.
This n8n workflow demonstrates how to manage your Qdrant vector store when there is a need to keep it in sync with local files. It covers creating, updating and deleting vector store records, ensuring our chatbot assistant is never outdated or misleading.
Disclaimer
This workflow depends on local files accessed through the local filesystem and so will only work on a self-hosted version of n8n at this time. It is possible to amend this workflow to work on n8n cloud by replacing the local file trigger and read file nodes.
How it works
A local directory where bank statements are downloaded to is monitored via a local file trigger. The trigger watches for the file created, file changed and file deleted events.
When a file is created, its contents are uploaded to the vector store.
When a file is updated, its previous records are replaced.
When the file is deleted, the corresponding records are also removed from the vector store.
A simple question-and-answer chatbot is set up to answer any questions about the bank statements in the system.
Requirements
A self-hosted version of n8n. Some of the nodes used in this workflow only work with the local filesystem.
Qdrant instance to store the records.
Customising the workflow
This workflow can also work with remote data. Try integrating accounting or CRM software to build a managed system for payroll, invoices and more.
Want to go fully local?
A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/189F1fNOiw6naNSlSwnyLVEm_Ho_IFfdM/view?usp=sharing
This workflow checks a mailbox for new emails and, if the subject contains "Expenses" or "Receipt", sends the attachment to Mindee for processing, then updates a Google Sheet with the extracted values.
To use this workflow you will need to set the Email Read node to use your mailbox's credentials and configure the Mindee and Google Sheets nodes to use your credentials.
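The subject-line check can be sketched as a simple case-insensitive match; the function name and regex below are illustrative, not part of the workflow itself:

```python
import re

def should_process(subject: str) -> bool:
    """Match the trigger condition: subject contains 'Expenses' or 'Receipt'."""
    return re.search(r"\b(expenses|receipt)\b", subject, re.IGNORECASE) is not None
```

Anything that fails this check is left alone; only matching emails have their attachments forwarded to Mindee.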
This flow is supported by a Chrome plugin created with Cursor AI.
The idea was to create a Chrome plugin and a backend service in N8N to do chart analytics with OpenAI. It's a good sample on how to submit a screenshot from the browser to N8N.
Who is it for?
N8N developers who want to learn about using a Chrome plugin, an N8N webhook and OpenAI.
What opportunity does it present?
This sample opens up a whole range of N8N connected Chrome extensions that can analyze screenshots by using OpenAI.
What this workflow does?
The workflow contains:
a webhook trigger
an OpenAI node with GPT-4o-mini and Analyze Image selected
a response node to send back the text generated from analysing the screenshot.
All this is needed to talk to the Chrome extension which is created with Cursor AI.
The idea is to visit the tradingview.com crypto charts, click the Chrome plugin and get back analytics about the displayed chart in plain language. This is driven by the N8N flow.
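As a sketch of the data flowing through this setup: the extension posts a base64-encoded screenshot to the webhook, and the workflow turns it into a vision request. The webhook payload field names here are assumptions; the request body follows OpenAI's Chat Completions image-input format:

```python
import base64

# Stand-in for the raw PNG bytes captured by the Chrome extension.
screenshot_png = b"\x89PNG\r\n\x1a\n..."

# Hypothetical shape of what the extension POSTs to the n8n webhook.
webhook_payload = {"image": base64.b64encode(screenshot_png).decode("ascii")}

# The workflow's OpenAI node effectively builds a request like this.
openai_request = {
    "model": "gpt-4o-mini",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Analyse this trading chart in plain language."},
            {"type": "image_url", "image_url": {
                "url": f"data:image/png;base64,{webhook_payload['image']}"}},
        ],
    }],
}
```

The response node then simply returns the model's text back to the extension.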
With the new image analytics capabilities of OpenAI this opens up a world of opportunities.
Requirements/setup
OpenAI API key
Cursor AI installed
The Chrome extension. Download
The N8N JSON code. Download
How to customize it to your needs?
Both the Chrome extension and N8N flow can be adapted to use on other websites. You can consider:
analyzing a financial screen and asking questions about the data shown
analyzing other charts
extending the N8N workflow with other AI nodes
With AI and image analytics the sky is the limit and in some cases it saves you from creating complex API integrations.
This n8n workflow demonstrates an approach to parsing bank statement PDFs with multimodal LLMs as an alternative to traditional OCR. This allows for much more accurate data extraction from the document especially when it comes to tables and complex layouts.
Multimodal parsing is better than traditional OCR because:
It reduces complexity and overhead by avoiding the need to preprocess the document into text format such as markdown before passing to the LLM.
It handles non-standard PDF formats which may produce garbled output via traditional OCR text conversion.
It's orders of magnitude cheaper than premium OCR models that still require post-processing cleanup and formatting. LLMs can format to any schema or language you desire!
How it works
You can use the example bank statement created specifically for this workflow here: https://drive.google.com/file/d/1wS9U7MQDthj57CvEcqG_Llkr-ek6RqGA/view?usp=sharing
A PDF bank statement is imported via Google Drive. For this demo, I've created a mock bank statement which includes complex table layouts of 5 columns. Typically, OCR will be unable to align the columns correctly and mistake some deposits for withdrawals.
Because multimodal LLMs do not accept PDFs directly, we'll have to convert the PDF to a series of images. We can achieve this by using a tool such as Stirling PDF. Stirling PDF is self-hostable, which is handy for sensitive data such as bank statements.
Stirling PDF will return our PDF as a series of JPGs (one for each page) in a zipped file. We can use n8n's decompress node to extract the images and ensure they are ordered by using the Sort node.
Next, we'll resize each page using the Edit Image node to ensure the right balance between resolution limits and processing speed.
Each resized page image is then passed into the Basic LLM node which will use our multimodal LLM of choice - Gemini 1.5 Pro. In the LLM node's options, we'll add a "user message" of type binary (data) which is how we add our image data as an input.
Our prompt will instruct the multimodal LLM to transcribe each page to markdown. Note, you do not need to do this - you can just ask for the data points to extract directly! Our goal for this template is to demonstrate the LLM's ability to accurately read the page.
Finally, with our markdown version of all pages, we can pass this to another LLM node to extract required data such as deposit line items.
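One detail worth noting from the steps above: the unzipped page images must be sorted numerically, since a plain lexicographic sort puts page 10 before page 2 and would scramble the statement. A sketch of what the Sort node is doing, with hypothetical filenames:

```python
import re

# Hypothetical page filenames as they might come out of the unzipped
# Stirling PDF output; a lexicographic sort would order page_10 first.
pages = ["page_10.jpg", "page_2.jpg", "page_1.jpg", "page_3.jpg"]

def page_number(name: str) -> int:
    """Extract the numeric page index from a filename like 'page_10.jpg'."""
    match = re.search(r"(\d+)", name)
    return int(match.group(1)) if match else 0

ordered = sorted(pages, key=page_number)
```

Feeding the pages to the LLM in this order keeps multi-page tables coherent in the final markdown.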
Requirements
Google Gemini API for Multimodal LLM.
Google Drive access for document storage.
Stirling PDF instance for PDF to Image conversion
Customising the workflow
At time of writing, Gemini 1.5 Pro is the most accurate at text document parsing with a relatively low cost. If you are not using Google Gemini, however, you can switch to other multimodal LLMs such as OpenAI GPT or Anthropic Claude.
If you don't need the markdown, simply asking what to extract directly in the LLM's prompt is also acceptable and would save a few extra steps.
Not parsing any bank statements any time soon? This template also works for invoices, inventory lists, contracts, legal documents, etc.