This n8n workflow demonstrates how you can summarise and automate post-meeting actions from video transcripts fed into an AI Agent.
Save time between meetings by letting AI handle the chores of organising follow-up meetings and invites.
How it works
This workflow scans the calendar for client or team meetings which were held online.
Attempts are made to fetch any recorded transcripts, which are then sent to the AI agent.
The AI agent summarises and identifies if any follow-on meetings are required.
If found, the Agent will use its Calendar Tool to create the event at the time, date and place of the next meeting, as well as add known attendees.
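For reference, the follow-up event the Calendar Tool books is roughly equivalent to the Google Calendar API call sketched below (a minimal sketch; the access token, times and attendee emails are placeholders, not values from the workflow):

```python
import requests

ACCESS_TOKEN = "ya29.example"  # placeholder OAuth access token
CALENDAR_ID = "primary"

# Event details the AI agent extracts from the meeting transcript (example values)
event = {
    "summary": "Follow-up: Q3 project review",
    "start": {"dateTime": "2024-07-01T10:00:00", "timeZone": "Europe/London"},
    "end": {"dateTime": "2024-07-01T10:30:00", "timeZone": "Europe/London"},
    "attendees": [{"email": "alice@example.com"}, {"email": "bob@example.com"}],
}

resp = requests.post(
    f"https://www.googleapis.com/calendar/v3/calendars/{CALENDAR_ID}/events",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=event,
)
resp.raise_for_status()
print(resp.json()["htmlLink"])  # link to the newly created event
```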
Requirements
Google Calendar and the ability to fetch Meeting Transcripts (There is a special OAuth permission for this action!)
OpenAI account for access to the LLM.
Customising the workflow
This example only books follow-on meetings but could be extended to generate reports or send emails.
This n8n workflow automates the process of parsing and extracting data from PDF invoices. With this workflow, accounting and finance teams can save significant time and cost in their busy schedules.
Read the Blog: https://blog.n8n.io/how-to-extract-data-from-pdf-to-excel-spreadsheet-advance-parsing-with-n8n-io-and-llamaparse/
How it works
This workflow will watch an email inbox for incoming invoices from suppliers
It will download the attached PDFs and process them through a third-party service called LlamaParse.
LlamaParse is specifically designed to handle and convert complex PDF data structures such as tables to markdown.
Markdown is easier for LLMs to process, so the data extraction by our AI agent is more accurate and reliable.
The workflow exports the extracted data from the AI agent to Google Sheets once the job is complete.
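If you want to see what the LlamaParse step does outside of n8n, a rough sketch of the REST calls is shown below (the endpoint paths, status values and response keys are my reading of the LlamaCloud API, so verify them against the current docs):

```python
import time
import requests

API_KEY = "llx-..."  # placeholder LlamaCloud API key
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
BASE = "https://api.cloud.llamaindex.ai/api/parsing"

# 1. Upload the PDF invoice for parsing
with open("invoice.pdf", "rb") as f:
    job = requests.post(f"{BASE}/upload", headers=HEADERS, files={"file": f}).json()

# 2. Poll until the parsing job completes
while requests.get(f"{BASE}/job/{job['id']}", headers=HEADERS).json()["status"] != "SUCCESS":
    time.sleep(2)

# 3. Fetch the markdown result, which is what gets handed to the AI agent for extraction
result = requests.get(f"{BASE}/job/{job['id']}/result/markdown", headers=HEADERS).json()
print(result["markdown"][:500])
```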
Requirements
The criteria of the email trigger must be configured to capture emails with attachments.
The Gmail label "invoice synced" must be created before using this workflow.
A LlamaIndex.ai account to use the LlamaParse service.
An OpenAI account to use GPT for AI work.
Google Sheets to save the output of the data extraction process, although this can be replaced to suit your needs.
Customizing this workflow
This workflow uses Gmail and Google Sheets but these can easily be swapped out for equivalent services such as Outlook and Excel.
Not using Excel? Simply redirect the output of the AI agent to your accounting software of choice.
This workflow will check a mailbox for new emails and, if the subject contains Expenses or Receipt, it will send the attachment to Mindee for processing and then update a Google Sheet with the values.
To use this workflow you will need to set the Email Read node to use your mailbox credentials and configure the Mindee and Google Sheets nodes to use your credentials.
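For reference, the kind of call the Mindee node makes looks roughly like the sketch below (the expense-receipts product path and response shape are my assumptions about Mindee's v1 API, so check them against your Mindee account):

```python
import requests

MINDEE_API_KEY = "md_..."  # placeholder Mindee API key

# Send the email attachment to Mindee's receipt OCR for field extraction.
with open("receipt.pdf", "rb") as f:
    resp = requests.post(
        "https://api.mindee.net/v1/products/mindee/expense_receipts/v5/predict",
        headers={"Authorization": f"Token {MINDEE_API_KEY}"},
        files={"document": f},
    )
resp.raise_for_status()

# Assumed response layout: the extracted fields (total, date, supplier, ...) live under
# document.inference.prediction and are what end up in the Google Sheet columns.
prediction = resp.json()["document"]["inference"]["prediction"]
print(prediction)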
This n8n workflow demonstrates how to manage your Qdrant vector store when there is a need to keep it in sync with local files. It covers creating, updating and deleting vector store records, ensuring our chatbot assistant is never outdated or misleading.
Disclaimer
This workflow depends on local files accessed through the local filesystem and so will only work on a self-hosted version of n8n at this time. It is possible to amend this workflow to work on n8n cloud by replacing the local file trigger and read file nodes.
How it works
A local directory where bank statements are downloaded to is monitored via a local file trigger. The trigger watches for the file create, file changed and file deleted events.
When a file is created, its contents are uploaded to the vector store.
When a file is updated, its previous records are replaced.
When the file is deleted, the corresponding records are also removed from the vector store.
A simple Question and Answer Chatbot is set up to answer any questions about the bank statements in the system.
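As an illustration of the update and delete branches, the Qdrant side of the sync boils down to removing any points created from a file and then re-inserting fresh embeddings. A minimal sketch against the Qdrant REST API (the collection name and payload key are assumptions, not the exact names in the workflow):

```python
import requests

QDRANT_URL = "http://localhost:6333"
COLLECTION = "bank_statements"  # assumed collection name
file_path = "/data/statements/2024-05.csv"

# On file change or delete: remove all points previously created from this file,
# matched via the payload key the ingestion step wrote (assumed "metadata.file_path").
requests.post(
    f"{QDRANT_URL}/collections/{COLLECTION}/points/delete",
    json={"filter": {"must": [{"key": "metadata.file_path", "match": {"value": file_path}}]}},
).raise_for_status()

# On file create or change: upsert new points with embeddings and payload.
requests.put(
    f"{QDRANT_URL}/collections/{COLLECTION}/points",
    json={
        "points": [
            {
                "id": 1,
                "vector": [0.1, 0.2, 0.3],  # embedding of the chunk (truncated for brevity)
                "payload": {"metadata": {"file_path": file_path}, "text": "chunk text..."},
            }
        ]
    },
).raise_for_status()
```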
Requirements
A self-hosted version of n8n. Some of the nodes used in this workflow only work with the local filesystem.
Qdrant instance to store the records.
Customising the workflow
This workflow can also work with remote data. Try integrating accounting or CRM software to build a managed system for payroll, invoices and more.
Want to go fully local?
A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/189F1fNOiw6naNSlSwnyLVEm_Ho_IFfdM/view?usp=sharing
This n8n workflow builds another example of creating a knowledgebase assistant but demonstrates how a more deliberate and targeted approach to ingesting the data can produce much better results for your chatbot.
In this example, a government tax code policy document is used. Whilst we could split the document into chunks by content length, we often lose the context of chapters and sections which may be required by the user.
Our approach then is to first split the document into chapters and sections before importing into our vector store. Additionally, using metadata correctly is key to allow filtering and scoped queries.
Example
Human: "Tell me about what the tax code says about cargo for intentional commerce?"
AI: "Section 11.25 of the Texas Property Tax Code pertains to "MARINE CARGO CONTAINERS USED EXCLUSIVELY IN INTERNATIONAL COMMERCE." In this section, a person who is a citizen of a foreign country or an en..."
How it works
The tax code policy document is downloaded as a zip file from the government website and its pages are extracted as separate chapters.
Each chapter is then parsed and split into its sections using data manipulation expressions.
Each section is then inserted into our Qdrant vector store tagged with its source, chapter and section numbers as metadata.
When our AI Agent needs to retrieve data from our vector store, we use a custom workflow tool to perform the query to Qdrant.
Because we're relying on Qdrant's advanced filtering capabilities, we perform the search using the Qdrant API rather than the Qdrant node.
When the AI Agent needs to pull full wording or extracts, we can use Qdrant's scroll API and metadata filtering to do so. This makes Qdrant behave like a key-value store for our document.
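To make the scroll-based retrieval concrete, here is a rough sketch of the call the custom workflow tool makes (the collection name and payload keys are assumptions for illustration):

```python
import requests

QDRANT_URL = "http://localhost:6333"
COLLECTION = "property_tax_code"  # assumed collection name

# Pull the full wording of a specific chapter/section via metadata filtering,
# using Qdrant's scroll API instead of a vector similarity search.
resp = requests.post(
    f"{QDRANT_URL}/collections/{COLLECTION}/points/scroll",
    json={
        "filter": {
            "must": [
                {"key": "metadata.chapter", "match": {"value": 11}},
                {"key": "metadata.section", "match": {"value": "11.25"}},
            ]
        },
        "limit": 50,
        "with_payload": True,
    },
)
resp.raise_for_status()
for point in resp.json()["result"]["points"]:
    print(point["payload"]["text"])
```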
Requirements
A Qdrant instance is required for the vector store and specifically for its filtering functionality.
Mistral.ai account for Embeddings and AI models.
Customising this workflow
Depending on your use-case, consider returning actual PDF pages (or links) to the user for the extra confirmation and to build trust.
Not using Mistral? You can replace it, but be sure to match the distance metric and dimension size of the Qdrant collection to your chosen embedding model.
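For example, if you do swap the embedding model, recreate the Qdrant collection with matching settings. A short sketch assuming a 1024-dimensional model (mistral-embed's size at the time of writing) and cosine distance:

```python
import requests

QDRANT_URL = "http://localhost:6333"

# Recreate the collection so its vector size and distance metric match the embedding model.
requests.put(
    f"{QDRANT_URL}/collections/property_tax_code",
    json={"vectors": {"size": 1024, "distance": "Cosine"}},
).raise_for_status()
```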
This n8n workflow demonstrates an approach to parsing bank statement PDFs with multimodal LLMs as an alternative to traditional OCR. This allows for much more accurate data extraction from the document especially when it comes to tables and complex layouts.
Multimodal parsing is better than traditional OCR because:
It reduces complexity and overhead by avoiding the need to preprocess the document into text format such as markdown before passing to the LLM.
It handles non-standard PDF formats which may produce garbled output via traditional OCR text conversion.
It's orders of magnitude cheaper than premium OCR models that still require post-processing cleanup and formatting. LLMs can format to any schema or language you desire!
How it works
You can use the example bank statement created specifically for this workflow here: https://drive.google.com/file/d/1wS9U7MQDthj57CvEcqG_Llkr-ek6RqGA/view?usp=sharing
A PDF bank statement is imported via Google Drive. For this demo, I've created a mock bank statement which includes complex table layouts of 5 columns. Typically, OCR will be unable to align the columns correctly and mistake some deposits for withdrawals.
Because multimodal LLMs do not accept PDFs directly, we'll have to convert the PDF to a series of images. We can achieve this by using a tool such as Stirling PDF. Stirling PDF is self-hostable which is handy for sensitive data such as bank statements.
Stirling PDF will return our PDF as a series of JPGs (one for each page) in a zipped file. We can use n8n's decompress node to extract the images and ensure they are ordered by using the Sort node.
Next, we'll resize each page using the Edit Image node to ensure the right balance between resolution limits and processing speed.
Each resized page image is then passed into the Basic LLM node which will use our multimodal LLM of choice - Gemini 1.5 Pro. In the LLM node's options, we'll add a "user message" of type binary (data) which is how we add our image data as an input.
Our prompt will instruct the multimodal LLM to transcribe each page to markdown. Note, you do not need to do this - you can just ask for data points to extract directly! Our goal for this template is to demonstrate the LLM's ability to accurately read the page.
Finally, with our markdown version of all pages, we can pass this to another LLM node to extract required data such as deposit line items.
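For anyone curious what the Basic LLM node does with the binary input, the equivalent raw Gemini API call looks roughly like the sketch below (the API key, filename and prompt are placeholders):

```python
import base64
import requests

API_KEY = "AIza..."  # placeholder Gemini API key
MODEL = "gemini-1.5-pro"

# One of the resized page images produced by Stirling PDF + the Edit Image node
with open("statement_page_1.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

# Ask the multimodal model to transcribe the page image to markdown.
resp = requests.post(
    f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent?key={API_KEY}",
    json={
        "contents": [
            {
                "parts": [
                    {"text": "Transcribe this bank statement page to markdown, preserving the tables."},
                    {"inline_data": {"mime_type": "image/jpeg", "data": image_b64}},
                ]
            }
        ]
    },
)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```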
Requirements
Google Gemini API for Multimodal LLM.
Google Drive access for document storage.
Stirling PDF instance for PDF-to-image conversion.
Customising the workflow
At the time of writing, Gemini 1.5 Pro is the most accurate at text document parsing with a relatively low cost. If you are not using Google Gemini, however, you can switch to other multimodal LLMs such as OpenAI GPT or Anthropic Claude.
If you don't need the markdown, simply asking what to extract directly in the LLM's prompt is also acceptable and would save a few extra steps.
Not parsing any bank statements any time soon? This template also works for invoices, inventory lists, contracts, legal documents, etc.
A temporary solution that uses the undocumented REST API to back up workflows to Google Drive.
Please note that there are issues with this workflow. It does not support versioning, so it will create multiple copies of the workflows; if you run it daily, the folder will grow quickly. Once I figure out how to version in Google Drive, I'll update it here.
A robust n8n workflow designed to enhance Telegram bot functionality for user management and broadcasting. It facilitates automatic support ticket creation, efficient user data storage in Redis, and a sophisticated system for message forwarding and broadcasting.
How It Works
Telegram Bot Setup: Initiate the workflow with a Telegram bot configured for handling different chat types (private, supergroup, channel).
User Data Management: Formats and updates user data, storing it in a Redis database for efficient retrieval and management.
Support Ticket Creation: Automatically generates chat tickets for user messages and saves the corresponding topic IDs in Redis.
Message Forwarding: Forwards new messages to the appropriate chat thread, or creates a new thread if none exists.
Support Forum Management: Handles messages within a support forum, differentiating between various chat types and user statuses.
Broadcasting System: Implements a broadcasting mechanism that sends channel posts to all previous bot users, with a system to filter out blocked users.
Blocked User Management: Identifies and manages blocked users, preventing them from receiving broadcasted messages.
Versatile Channel Handling: Ensures that messages from verified channels are properly managed and broadcasted to relevant users.
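As a rough illustration of the Redis usage described above (the key names and fields are my assumptions, not the exact keys used in the workflow):

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

chat_id = 123456789

# Store the formatted user data for later retrieval and broadcasting
r.hset(f"user:{chat_id}", mapping={"first_name": "Alice", "username": "alice", "blocked": "false"})

# Remember which forum topic (support ticket) belongs to this user
r.set(f"topic:{chat_id}", 42)

# When broadcasting a channel post, skip users flagged as blocked
for key in r.scan_iter("user:*"):
    if r.hget(key, "blocked") != "true":
        print("send broadcast to", key.split(":")[1])
```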
Set Up Steps
Estimated Time: Around 30 minutes.
Requirements: A Telegram bot, a Redis database, and Telegram group/channel IDs are necessary.
Configuration: Input the Telegram bot token and relevant group/channel IDs. Configure message handling and user data processing according to your needs.
Detailed Instructions: Sticky notes within the workflow provide extensive setup information and guidance.
Live Demo Workflow
Bot: Telegram Bot Link
Support Group: Telegram Group Link
Broadcasting Channel: Telegram Channel Link
This n8n workflow template lets teams easily generate a custom AI chat assistant based on the schema of any Notion database. Simply provide the Notion database URL, and the workflow downloads the schema and creates a tailored AI assistant designed to interact with that specific database structure.
Set Up
Watch this quick set up video 👇
Key Features
Instant Assistant Generation: Enter a Notion database URL, and the workflow produces an AI assistant configured to the database schema.
Advanced Querying: The assistant performs flexible queries, filtering records by multiple fields (e.g., tags, names). It can also search inside Notion pages to pull relevant content from specific blocks.
Schema Awareness: Understands and interacts with various Notion column types like text, dates, and tags for accurate responses.
Reference Links: Each query returns direct links to the exact Notion pages that inform the assistant’s response, promoting transparency and easy access.
Self-Validation: The workflow has logic to check the generated assistant, and if any errors are detected, it reruns the agent to fix them.
Ideal for
Product Managers: Easily access and query product data across Notion databases.
Support Teams: Quickly search through knowledge bases for precise information to enhance support accuracy.
Operations Teams: Streamline access to HR, finance, or logistics data for fast, efficient retrieval.
Data Teams: Automate large dataset queries across multiple properties and records.
How It Works
This AI assistant leverages two HTTP request tools—one for querying the Notion database and another for retrieving data within individual pages. It’s powered by the Anthropic LLM (or can be swapped for GPT-4) and always provides reference links for added transparency.
Who is this for
This workflow is perfect for teams and individuals who manage extensive data in Notion and need a quick, AI-powered way to interact with their databases. If you're looking to streamline your knowledge management, automate searches, and get faster insights from your Notion databases, this workflow is for you. It’s ideal for support teams, project managers, or anyone who needs to query specific data across multiple records or within individual pages of their Notion setup.
Check out the Notion template this Assistant is set up to use: https://www.notion.so/templates/knowledge-base-ai-assistant-with-n8n
How it works
The Notion Database Assistant uses an AI Agent built with Retrieval-Augmented Generation (RAG) to query this Knowledge Base style Notion database. The assistant can search across multiple properties like tags or questions, and retrieves content from inside individual Notion pages for additional context.
Key features include:
Querying the database with flexible filters.
Searching within individual Notion pages and extracting relevant blocks.
Providing a reference link to the exact Notion pages used to inform its responses, ensuring transparency and easy verification.
This assistant uses two HTTP request tools—one for querying the Notion database and another for pulling data from within specific pages. It streamlines knowledge retrieval, offering a conversational, AI-driven way to interact with large datasets.
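For context, the two HTTP request tools map onto the Notion API endpoints sketched below (the integration token, database ID and the "Tags" filter are placeholders; the actual filters are generated by the agent from your schema):

```python
import requests

NOTION_TOKEN = "secret_..."  # placeholder integration token
DATABASE_ID = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder database ID
HEADERS = {
    "Authorization": f"Bearer {NOTION_TOKEN}",
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}

# Tool 1: query the database, filtering on a property (here an assumed "Tags" multi-select)
rows = requests.post(
    f"https://api.notion.com/v1/databases/{DATABASE_ID}/query",
    headers=HEADERS,
    json={"filter": {"property": "Tags", "multi_select": {"contains": "billing"}}},
).json()["results"]

# Tool 2: pull the block content of a matching page for extra context
page_id = rows[0]["id"]
blocks = requests.get(
    f"https://api.notion.com/v1/blocks/{page_id}/children",
    headers=HEADERS,
).json()["results"]
```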
Set up
Find basic set up instructions inside the workflow itself or watch a quickstart video 👇
Note: This workflow uses the internal API, which is not official. This workflow might break in the future.
The workflow executes every night at 23:59. You can configure a different time in the Cron node.
Configure the GitHub nodes with your username, repo name, and the file path.
In the HTTP Request nodes (making a request to localhost:5678), create Basic Auth credentials with your n8n instance username and password.
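Outside of n8n, the call those HTTP Request nodes make looks roughly like the sketch below (/rest/workflows is the unofficial internal endpoint mentioned above, and the response shape is an assumption that may change between versions):

```python
import requests

# Basic Auth credentials are the same ones configured for the n8n instance itself.
resp = requests.get(
    "http://localhost:5678/rest/workflows",
    auth=("n8n_user", "n8n_password"),  # placeholder credentials
)
resp.raise_for_status()

# Assumed shape for the internal API: results wrapped in a "data" key.
workflows = resp.json()["data"]
print([wf["name"] for wf in workflows])
```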
This workflow is an experiment to build HTML pages from user input using the new Structured Output from OpenAI.
How it works:
Users add what they want to build as a query parameter
The OpenAI node generates an interface following a structured output schema defined in the request body
The JSON output is then converted to HTML along with a title
The HTML is encapsulated in an HTML node (where the Tailwind css script is added)
The HTML is rendered to the user via the Webhook response.
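As a reference for the structured output set-up, the request the HTTP Request node sends looks roughly like this (a sketch; the schema here is a simplified, assumed version of the one in the workflow):

```python
import requests

OPENAI_API_KEY = "sk-..."  # placeholder API key

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
    json={
        "model": "gpt-4o-2024-08-06",
        "messages": [{"role": "user", "content": "a signup form"}],  # the "query" parameter
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "page",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "html": {"type": "string"},
                    },
                    "required": ["title", "html"],
                    "additionalProperties": False,
                },
            },
        },
    },
)
resp.raise_for_status()
# A JSON string matching the schema, ready to be converted to HTML and returned via the Webhook
print(resp.json()["choices"][0]["message"]["content"])
```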
Set up steps
Create an OpenAI API Key
Create the OpenAI credentials
Use the credentials for both nodes HTTP Request (as Predefined Credential type) and OpenAI
Activate your workflow
Once active, go to the production URL and add what you'd like to build as the parameter "query"
Example: https://production_url.com?query=a%20signup%20form
Example of generated page