This n8n workflow demonstrates how you can summarise meetings and automate post-meeting actions by feeding video transcripts into an AI agent.
Save time between meetings by letting AI handle the chores of organising follow-up meetings and invites.
How it works
This workflow scans the calendar for client or team meetings that were held online.
Attempts are made to fetch any recorded transcripts, which are then sent to the AI agent.
The AI agent summarises the meeting and identifies whether any follow-on meetings are required.
If so, the agent uses its Calendar Tool to create an event with the time, date and place of the next meeting, as well as to add known attendees.
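Under the hood, the Calendar Tool boils down to a Google Calendar events.insert call. Here's a minimal TypeScript sketch of that request; the event details, attendee emails and access token are illustrative placeholders rather than values from the template.

```typescript
// Minimal sketch: create the follow-up event via the Google Calendar API.
// The event fields would come from the AI agent's output; the token and
// attendees below are placeholders.
const followUp = {
  summary: "Project kick-off follow-up",
  start: { dateTime: "2024-05-21T10:00:00Z" },
  end: { dateTime: "2024-05-21T10:30:00Z" },
  attendees: [{ email: "alice@example.com" }, { email: "bob@example.com" }],
};

const res = await fetch(
  "https://www.googleapis.com/calendar/v3/calendars/primary/events?sendUpdates=all",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GOOGLE_ACCESS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(followUp),
  },
);
console.log(await res.json()); // the created event, including its htmlLink
```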
Requirements
Google Calendar and the ability to fetch Meeting Transcripts (There is a special OAuth permission for this action!)
OpenAI account for access to the LLM.
Customising the workflow
This example only books follow-on meetings but could be extended to generate reports or send emails.
This n8n workflow automates the process of parsing and extracting data from PDF invoices. With it, accounts and finance teams can realise huge time and cost savings.
Read the Blog: https://blog.n8n.io/how-to-extract-data-from-pdf-to-excel-spreadsheet-advance-parsing-with-n8n-io-and-llamaparse/
How it works
This workflow will watch an email inbox for incoming invoices from suppliers.
It will download the attached PDFs and process them through a third-party service called LlamaParse (sketched below).
LlamaParse is specifically designed to handle and convert complex PDF data structures, such as tables, to markdown.
Markdown is easier for LLMs to process, so the data extraction by our AI agent is more accurate and reliable.
The workflow exports the extracted data from the AI agent to Google Sheets once the job is complete.
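For reference, here's roughly what the LlamaParse round-trip looks like, following LlamaCloud's parsing API (upload, poll, fetch markdown). The API key and file handling are placeholders; the HTTP Request nodes in the workflow do the equivalent.

```typescript
// Rough sketch of the LlamaParse round-trip: upload the PDF, poll the job,
// then fetch the result as markdown. Paths follow LlamaCloud's parsing API;
// the API key and pdfBuffer are placeholders.
declare const pdfBuffer: ArrayBuffer; // the PDF attachment downloaded from the inbox

const BASE = "https://api.cloud.llamaindex.ai/api/parsing";
const headers = { Authorization: `Bearer ${process.env.LLAMA_CLOUD_API_KEY}` };

// 1. Upload the invoice as multipart form data to start a parsing job.
const form = new FormData();
form.append("file", new Blob([pdfBuffer]), "invoice.pdf");
const { id } = await (
  await fetch(`${BASE}/upload`, { method: "POST", headers, body: form })
).json();

// 2. Poll until the job leaves the PENDING state.
let status = "PENDING";
while (status === "PENDING") {
  await new Promise((resolve) => setTimeout(resolve, 2000));
  status = (await (await fetch(`${BASE}/job/${id}`, { headers })).json()).status;
}

// 3. Fetch the parsed document as markdown, ready for the AI agent.
const { markdown } = await (
  await fetch(`${BASE}/job/${id}/result/markdown`, { headers })
).json();
```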
Requirements
The criteria of the email trigger must be configured to capture emails with attachments.
The Gmail label "invoice synced" must be created before using this workflow.
A LlamaIndex.ai account to use the LlamaParse service.
An OpenAI account to use GPT for AI work.
Google Sheets to save the output of the data extraction process, although this can be replaced with whatever suits your needs.
Customising this workflow
This workflow uses Gmail and Google Sheets but these can easily be swapped out for equivalent services such as Outlook and Excel.
Not using Excel? Simply redirect the output of the AI agent to your accounting software of choice.
This workflow will check a mailbox for new emails and, if the subject contains "Expenses" or "Receipt", it will send the attachment to Mindee for processing and then update a Google Sheet with the extracted values.
To use this workflow you will need to set the Email Read node to use your mailbox's credentials and configure the Mindee and Google Sheets nodes to use your credentials.
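For context, the Mindee node is doing something along these lines; a hedged TypeScript sketch assuming Mindee's expense_receipts product (check your account for the exact product and version):

```typescript
// Sketch: send the email attachment to Mindee's receipt OCR and read back
// the predicted fields. The product path/version and API key are assumptions.
declare const attachment: ArrayBuffer; // the attachment from the Email Read node

const form = new FormData();
form.append("document", new Blob([attachment]), "receipt.pdf");

const res = await fetch(
  "https://api.mindee.net/v1/products/mindee/expense_receipts/v5/predict",
  {
    method: "POST",
    headers: { Authorization: `Token ${process.env.MINDEE_API_KEY}` },
    body: form,
  },
);
const prediction = (await res.json()).document.inference.prediction;
// Fields such as total_amount and date map onto the Google Sheet's columns.
console.log(prediction.total_amount?.value, prediction.date?.value);
```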
This n8n workflow demonstrates how to manage your Qdrant vector store when you need to keep it in sync with local files. It covers creating, updating and deleting vector store records, ensuring our chatbot assistant is never outdated or misleading.
Disclaimer
This workflow depends on local files accessed through the local filesystem and so will only work on a self-hosted version of n8n at this time. It is possible to amend this workflow to work on n8n cloud by replacing the local file trigger and read file nodes.
How it works
A local directory where bank statements are downloaded to is monitored via a local file trigger. The trigger watches for file created, file changed and file deleted events.
When a file is created, its contents are uploaded to the vector store.
When a file is updated, its previous records are replaced with the new contents.
When a file is deleted, the corresponding records are also removed from the vector store (see the sketch after this list).
A simple Question and Answer chatbot is set up to answer any questions about the bank statements in the system.
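The update and delete handling hinges on tagging each point with the source file's path in its payload: on a change or delete event, the workflow can remove every point belonging to that file with a single delete-by-filter call. A TypeScript sketch against Qdrant's REST API; the collection name and payload key are assumptions:

```typescript
// Sketch: remove every vector that originated from a given local file using
// Qdrant's delete-points-by-filter endpoint. The collection name and the
// payload key ("metadata.file_path") are illustrative.
const QDRANT_URL = "http://localhost:6333";

async function deleteRecordsForFile(filePath: string): Promise<void> {
  await fetch(`${QDRANT_URL}/collections/bank_statements/points/delete`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      filter: {
        must: [{ key: "metadata.file_path", match: { value: filePath } }],
      },
    }),
  });
}

// "file changed": delete the old points, then re-embed and upsert the new text.
// "file deleted": the delete call alone keeps the store in sync.
await deleteRecordsForFile("/data/statements/2024-04.pdf");
```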
Requirements
A self-hosted version of n8n. Some of the nodes used in this workflow only work with the local filesystem.
A Qdrant instance to store the records.
Customising the workflow
This workflow can also work with remote data. Try integrating accounting or CRM software to build a managed system for payroll, invoices and more.
Want to go fully local?
A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/189F1fNOiw6naNSlSwnyLVEm_Ho_IFfdM/view?usp=sharing
This n8n workflow builds another example of creating a knowledgebase assistant, but demonstrates how a more deliberate and targeted approach to ingesting the data can produce much better results for your chatbot.
In this example, a government tax code policy document is used. Whilst we could split the document into chunks by content length, we often lose the context of chapters and sections which may be required by the user.
Our approach then is to first split the document into chapters and sections before importing into our vector store. Additionally, using metadata correctly is key to allow filtering and scoped queries.
Example
Human: "Tell me about what the tax code says about cargo for intentional commerce?"
AI: "Section 11.25 of the Texas Property Tax Code pertains to "MARINE CARGO CONTAINERS USED EXCLUSIVELY IN INTERNATIONAL COMMERCE." In this section, a person who is a citizen of a foreign country or an en..."
How it works
The tax code policy document is downloaded as a zip file from the government website and its pages are extracted as separate chapters.
Each chapter is then parsed and split into its sections using data manipulation expressions.
Each section is then inserted into our Qdrant vector store tagged with its source, chapter and section numbers as metadata.
When our AI Agent needs to retrieve data from our vector store, we use a custom workflow tool to perform the query to Qdrant.
Because we're relying on Qdrant's advanced filtering capabilities, we perform the search using the Qdrant API rather than the Qdrant node.
When the AI Agent needs to pull full wording or extracts, we can use Qdrant's scroll API and metadata filtering to do so. This makes Qdrant behave like a key-value store for our document.
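As an illustration, that scoped query boils down to a scroll request with a metadata filter. A TypeScript sketch (the collection name and payload keys are assumptions based on this template's setup):

```typescript
// Sketch: pull every section of a given chapter verbatim via Qdrant's
// scroll API, filtering on the metadata attached at insert time.
const res = await fetch(
  "http://localhost:6333/collections/tax_code/points/scroll",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      filter: { must: [{ key: "metadata.chapter", match: { value: 11 } }] },
      with_payload: true, // include the stored text, not just ids and vectors
      limit: 100,
    }),
  },
);
const { points } = (await res.json()).result;
// Each point's payload holds a section's full text plus its source metadata.
```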
Requirements
A Qdrant instance is required for the vector store, and specifically for its filtering functionality.
Mistral.ai account for Embeddings and AI models.
Customising this workflow
Depending on your use-case, consider returning actual PDF pages (or links) to the user for extra confirmation and to build trust.
Not using Mistral? You can swap in another embedding model, but make sure the distance metric and vector dimensions of your Qdrant collection match your chosen model.
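For example, Mistral's mistral-embed model produces 1024-dimensional vectors, so the matching collection would be created along these lines (the collection name is illustrative; change the size and distance for your model):

```typescript
// Sketch: create a Qdrant collection whose vector config matches the
// embedding model. mistral-embed emits 1024-dimensional vectors; if you
// swap models, change "size" (and possibly "distance") to match.
await fetch("http://localhost:6333/collections/tax_code", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    vectors: { size: 1024, distance: "Cosine" },
  }),
});
```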
This n8n workflow demonstrates an approach to parsing bank statement PDFs with multimodal LLMs as an alternative to traditional OCR. This allows for much more accurate data extraction from the document especially when it comes to tables and complex layouts.
Multimodal parsing is better than traditional OCR because:
It reduces complexity and overhead by avoiding the need to preprocess the document into text format such as markdown before passing to the LLM.
It handles non-standard PDF formats which may produce garbled output via traditional OCR text conversion.
It's orders of magnitude cheaper than premium OCR models that still require post-processing cleanup and formatting. LLMs can format to any schema or language you desire!
How it works
You can use the example bank statement created specifically for this workflow here: https://drive.google.com/file/d/1wS9U7MQDthj57CvEcqG_Llkr-ek6RqGA/view?usp=sharing
A PDF bank statement is imported via Google Drive. For this demo, I've created a mock bank statement which includes complex table layouts of 5 columns. Typically, OCR will be unable to align the columns correctly and mistake some deposits for withdrawals.
Because multimodal LLMs do not accept PDFs directly, we'll have to convert the PDF to a series of images. We can achieve this using a tool such as Stirling PDF. Stirling PDF is self-hostable, which is handy for sensitive data such as bank statements.
Stirling PDF will return our PDF as a series of JPGs (one for each page) in a zipped file. We can use n8n's decompress node to extract the images and ensure they are ordered by using the Sort node.
Next, we'll resize each page using the Edit Image node to ensure the right balance between resolution limits and processing speed.
Each resized page image is then passed into the Basic LLM node, which will use our multimodal LLM of choice - Gemini 1.5 Pro. In the LLM node's options, we'll add a "user message" of type binary (data), which is how we add our image data as an input (see the sketch after this list).
Our prompt will instruct the multimodal LLM to transcribe each page to markdown. Note, you do not need to do this - you can just ask for the data points you want to extract directly! Our goal for this template is to demonstrate the LLM's ability to accurately read the page.
Finally, with our markdown version of all pages, we can pass this to another LLM node to extract required data such as deposit line items.
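To make the Basic LLM step concrete, here's a hedged TypeScript sketch of the per-page request against Google's Generative Language REST API; the prompt wording, API key and base64 page data are placeholders:

```typescript
// Sketch: ask Gemini 1.5 Pro to transcribe one page image to markdown.
// The API key and the page's base64-encoded JPEG are placeholders.
declare const pageJpegBase64: string; // one resized page from the Edit Image node

const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro:generateContent?key=${process.env.GEMINI_API_KEY}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [
        {
          parts: [
            { text: "Transcribe this bank statement page to markdown. Keep table columns aligned." },
            { inline_data: { mime_type: "image/jpeg", data: pageJpegBase64 } },
          ],
        },
      ],
    }),
  },
);
const markdown = (await res.json()).candidates[0].content.parts[0].text;
```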
Requirements
Google Gemini API for Multimodal LLM.
Google Drive access for document storage.
A Stirling PDF instance for PDF-to-image conversion.
Customising the workflow
At the time of writing, Gemini 1.5 Pro is the most accurate at text document parsing with a relatively low cost. If you are not using Google Gemini, however, you can switch to other multimodal LLMs such as OpenAI GPT or Anthropic Claude.
If you don't need the markdown, simply asking what to extract directly in the LLM's prompt is also acceptable and would save a few extra steps.
Not parsing any bank statements any time soon? This template also works for invoices, inventory lists, contracts, legal documents, etc.