Compression integration

Integrate Compression with 500+ apps and services

Unlock Compression’s full potential with n8n, connecting it to similar Core Nodes apps and over 1000 other services. Create adaptable and scalable workflows between Compression and your stack, all within a building experience you will love.

Popular ways to use the Compression integration

Dropbox node
HTTP Request node

Compress binary files to zip format

This workflow allows you to compress binary files to zip format.

HTTP Request node: the workflow uses the HTTP Request node to fetch files from the internet. If you want to fetch files from your local machine, replace it with the Read Binary File or Read Binary Files node.

Compression node: the Compression node compresses the files into a zip archive. If you want to compress the files to gzip, select the gzip format instead.

Based on your use case, you may want to write the files to disk or upload them to a service such as Google Drive or Box. If you want to write the compressed file to disk, replace the Dropbox node with the Write Binary File node; if you want to upload the file to a different service, use the respective node. A plain Node.js sketch of the fetch-and-compress steps follows below.
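As a rough mental model (not the workflow itself), here is the gzip variant of fetch-and-compress in plain Node.js using the built-in zlib module. The URL and file names are stand-ins; the zip format the node also offers can bundle multiple files, which needs a third-party library, and is exactly what the Compression node wraps for you.

```javascript
// Plain Node.js equivalent of HTTP Request -> Compression (gzip) -> write to disk.
// The URL and file names are stand-ins for illustration.
import { writeFile } from 'node:fs/promises';
import { gzipSync } from 'node:zlib';

const res = await fetch('https://example.com/report.pdf');
const file = Buffer.from(await res.arrayBuffer());

// gzip compresses a single stream; zip (the node's other format) can bundle many files.
await writeFile('report.pdf.gz', gzipSync(file));
```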
harshil1712
ghagrawal17

Parse DMARC reports, save them in a database and notify on DKIM or SPF errors

Who is it for
If you are a postmaster or you manage an email server, you can set up DKIM and SPF records to make spoofing your email address hard. On your domain you can also set up a DMARC record to receive XML reports from email providers (the rua tag). Those reports contain data on whether the email they received passed DKIM and SPF verification. Since the DMARC email address is public, you will receive a lot of emails from providers, not only when DKIM/SPF fail. There is no need for that - you probably only need to know when SPF/DKIM failed. So this workflow automatically parses all DMARC reports that come in from email providers, but ONLY sends you a notification if SPF or DKIM failed - meaning that either someone is trying to spoof your email or your DKIM/SPF is set up improperly.

How it works
- Monitors the postmaster mailbox for DMARC reports (rua)
- Unpacks each report and parses the XML into JSON
- Maps the JSON and formats the fields for MySQL/MariaDB input
- Inserts the records into the database
- Sends a notification on DKIM or SPF failure (see the sketch below)

Remember to set up:
- the email input mailbox
- notification channels for Slack and for email
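For reference, the core of the failure check might look like this minimal n8n Code-node sketch. The field names follow the standard DMARC aggregate-report XML schema; the template's actual node layout may differ.

```javascript
// n8n Code node sketch: keep only records where DKIM or SPF failed.
// Assumes an XML node has already parsed the report into JSON, so each
// item carries a feedback.record structure per the DMARC aggregate schema.
const failures = [];

for (const item of $input.all()) {
  const records = item.json.feedback?.record ?? [];
  // A report may contain one record (object) or many (array).
  for (const rec of Array.isArray(records) ? records : [records]) {
    const policy = rec.row?.policy_evaluated ?? {};
    if (policy.dkim === 'fail' || policy.spf === 'fail') {
      failures.push({
        json: { sourceIp: rec.row?.source_ip, dkim: policy.dkim, spf: policy.spf },
      });
    }
  }
}

// Only failing records flow on to the Slack/email notification nodes.
return failures;
```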
lukaszpp
Łukasz
Airtable node
HTTP Request node
Merge node

Scale Deal Flow with a Pitch Deck AI Vision, Chatbot and QDrant Vector Store

Are you a popular tech startup accelerator (named after a particular higher-order function) overwhelmed with 1000s of pitch decks on a daily basis? Wish you could filter through them quickly using AI, but the decks are unparseable through conventional means? Then you're in luck! This n8n template uses multimodal LLMs to parse and extract valuable data from even the most overly designed pitch decks in quick fashion. Not only that, it also creates the foundations of a RAG chatbot at the end, so you or your colleagues can drill down into the details if needed. With this template, you'll scale your capacity to find interesting companies you'd otherwise miss!

Requires n8n v1.62.1+

How It Works
- Airtable is used as the pitch deck database, and PDF decks are downloaded from it.
- An AI Vision model is used to transcribe each page of the pitch deck into markdown (see the sketch below for the shape of such a call).
- An Information Extractor is used to generate a report from the transcribed markdown and write the required information back into the pitch deck database.
- The transcribed markdown is also uploaded to a vector store to build an AI chatbot, which can be used to ask questions about the pitch deck.

Check out the sample Airtable here: https://airtable.com/appCkqc2jc3MoVqDO/shrS21vGqlnqzzNUc

How To Use
- This template depends on the availability of the Airtable - make a duplicate of the Airtable (link above) and its columns before running the workflow.
- When a new pitch deck is received, enter the company name into the Name column and upload the PDF into the File column. Leave all other columns blank.
- If you have the Airtable trigger active, the execution should start immediately once the file is uploaded. Otherwise, click the manual test trigger to start the workflow. When manually triggered, all "new" pitch decks will be handled by the workflow as separate executions.

Requirements
- OpenAI for the LLM
- Airtable for the database and interface
- Qdrant for the vector store

Customising This Workflow
Extend this starter template by adding more AI agents to validate claims made in the pitch deck, e.g. LinkedIn profiles, page visits, reviews, etc.
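To make the AI Vision step concrete, here is a minimal sketch of a per-page transcription call, assuming OpenAI's vision-capable chat completions API. The template wires this up through n8n's LLM nodes rather than raw HTTP, and `pageBase64` is a hypothetical variable holding one base64-encoded page image.

```javascript
// Sketch: transcribe one pitch deck page image to markdown via OpenAI.
// pageBase64 is a hypothetical base64-encoded JPG of a single page.
const res = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'gpt-4o', // any vision-capable model works here
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Transcribe this pitch deck page to markdown.' },
          { type: 'image_url', image_url: { url: `data:image/jpeg;base64,${pageBase64}` } },
        ],
      },
    ],
  }),
});
const markdown = (await res.json()).choices[0].message.content;
```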
jimleuk
Jimleuk
HTTP Request node
Google Drive node

Transcribing Bank Statements To Markdown Using Gemini Vision AI

This n8n workflow demonstrates an approach to parsing bank statement PDFs with multimodal LLMs as an alternative to traditional OCR. This allows for much more accurate data extraction from the document, especially when it comes to tables and complex layouts.

Multimodal parsing is better than traditional OCR because:
- It reduces complexity and overhead by avoiding the need to preprocess the document into a text format such as markdown before passing it to the LLM.
- It handles non-standard PDF formats which may produce garbled output via traditional OCR text conversion.
- It's orders of magnitude cheaper than premium OCR models that still require post-processing cleanup and formatting. LLMs can format to any schema or language you desire!

How it works
You can use the example bank statement created specifically for this workflow here: https://drive.google.com/file/d/1wS9U7MQDthj57CvEcqG_Llkr-ek6RqGA/view?usp=sharing
- A PDF bank statement is imported via Google Drive. For this demo, I've created a mock bank statement which includes complex table layouts of 5 columns. Typically, OCR will be unable to align the columns correctly and will mistake some deposits for withdrawals.
- Because multimodal LLMs do not accept PDFs directly, we'll have to convert the PDF to a series of images. We can achieve this with a tool such as Stirling PDF, which is self-hostable - handy for sensitive data such as bank statements (see the sketch below for what this call might look like).
- Stirling PDF returns our PDF as a series of JPGs (one for each page) in a zipped file. We can use n8n's decompress node to extract the images, and ensure they are ordered by using the Sort node.
- Next, we resize each page using the Edit Image node to strike the right balance between resolution limits and processing speed.
- Each resized page image is then passed into the Basic LLM node, which uses our multimodal LLM of choice - Gemini 1.5 Pro. In the LLM node's options, we add a "user message" of type binary (data), which is how we pass image data as input.
- Our prompt instructs the multimodal LLM to transcribe each page to markdown. Note, you do not need to do this - you can just ask for the data points to extract directly! Our goal for this template is to demonstrate the LLM's ability to accurately read the page.
- Finally, with our markdown version of all pages, we pass this to another LLM node to extract the required data, such as deposit line items.

Requirements
- Google Gemini API for the multimodal LLM
- Google Drive access for document storage
- A Stirling PDF instance for PDF-to-image conversion

Customising the workflow
At the time of writing, Gemini 1.5 Pro is the most accurate at text document parsing with a relatively low cost. If you are not using Google Gemini, however, you can switch to other multimodal LLMs such as OpenAI GPT or Anthropic Claude. If you don't need the markdown, simply asking for what to extract directly in the LLM's prompt also works and saves a few extra steps. Not parsing any bank statements any time soon? This template also works for invoices, inventory lists, contracts, legal documents, etc.
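As a rough illustration of the Stirling PDF step, the call might look like the sketch below. The endpoint path and form field names are assumptions based on Stirling PDF's REST API; verify them against your own instance's Swagger docs before relying on this.

```javascript
// Sketch: ask a self-hosted Stirling PDF instance to render each page as a JPG.
// Endpoint and field names are assumptions; check your instance's API docs.
import { readFile, writeFile } from 'node:fs/promises';

const form = new FormData();
form.append('fileInput', new Blob([await readFile('statement.pdf')]), 'statement.pdf');
form.append('imageFormat', 'jpg');

const res = await fetch('http://localhost:8080/api/v1/convert/pdf/img', {
  method: 'POST',
  body: form,
});

// Stirling returns a zip of page images; in the workflow, n8n's Compression
// node (Decompress) and Sort node turn it back into an ordered series of pages.
await writeFile('pages.zip', Buffer.from(await res.arrayBuffer()));
```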
jimleuk
Jimleuk
HTTP Request node

Build a Tax Code Assistant with Qdrant, Mistral.ai and OpenAI

This n8n workflow builds another example of a knowledgebase assistant, but demonstrates how a more deliberate and targeted approach to ingesting the data can produce much better results for your chatbot. In this example, a government tax code policy document is used. While we could split the document into chunks by content length, we would often lose the context of chapters and sections, which may be required by the user. Our approach, then, is to first split the document into chapters and sections before importing it into our vector store. Additionally, using metadata correctly is key to allowing filtering and scoped queries.

Example
Human: "Tell me about what the tax code says about cargo for intentional commerce?"
AI: "Section 11.25 of the Texas Property Tax Code pertains to "MARINE CARGO CONTAINERS USED EXCLUSIVELY IN INTERNATIONAL COMMERCE." In this section, a person who is a citizen of a foreign country or an en..."

How it works
- The tax code policy document is downloaded as a zip file from the government website, and its pages are extracted as separate chapters.
- Each chapter is then parsed and split into its sections using data manipulation expressions.
- Each section is then inserted into our Qdrant vector store, tagged with its source, chapter and section numbers as metadata.
- When our AI Agent needs to retrieve data from the vector store, we use a custom workflow tool to perform the query against Qdrant. Because we rely on Qdrant's advanced filtering capabilities, we perform the search using the Qdrant API rather than the Qdrant node.
- When the AI Agent needs to pull full wording or extracts, we use Qdrant's scroll API and metadata filtering to do so (see the sketch below). This makes Qdrant behave like a key-value store for our document.

Requirements
- A Qdrant instance for the vector store, and specifically for its filtering functionality.
- A Mistral.ai account for embeddings and AI models.

Customising this workflow
Depending on your use case, consider returning the actual PDF pages (or links) to the user for extra confirmation and to build trust. Not using Mistral? You can swap it out, but make sure the distance metric and dimension size of the Qdrant collection match your chosen embedding model.
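For the scroll step, the raw Qdrant call might look like this minimal sketch. The scroll endpoint and filter syntax are standard Qdrant API; the collection name and payload keys are assumptions for illustration.

```javascript
// Sketch: use Qdrant's scroll API as a key-value lookup for one section.
// Collection name and payload keys are assumptions for illustration.
const res = await fetch('http://localhost:6333/collections/tax_code/points/scroll', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    filter: {
      must: [
        { key: 'metadata.chapter', match: { value: '11' } },
        { key: 'metadata.section', match: { value: '11.25' } },
      ],
    },
    with_payload: true,
    limit: 50,
  }),
});

const { result } = await res.json();
// result.points holds every stored chunk matching the filter, payloads included.
```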
jimleuk
Jimleuk
AWS S3 node
Aggregate node

Download and Compress Folder from S3 to ZIP File

This workflow downloads all files from a specific folder in an S3 bucket and compresses them into a single zip file, so you can download it via n8n or run further processing. Fill in your credentials and settings in the nodes marked with "*". It can serve as a blueprint or as a manual download for S3 folders. Since I found it rather tricky to compress all the binary files into one zip file, I figured it might make an interesting template. Hint: the expression to get every binary key, so they can be compressed dynamically, is used in the "Compress" node (see below). Enjoy the workflow! ❤️ https://let-the-work-flow.com Workflow Automation & Development
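The expression itself isn't reproduced on this page, but the common n8n idiom for collecting every binary key on the current item looks like this (an assumption; the template's exact expression may differ slightly):

```
{{ Object.keys($binary).join(',') }}
```

Placed in the Compression node's input field, this resolves to a comma-separated list of every binary property on the item, so all downloaded S3 files end up in one zip.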
geckse
Marcel Claus-Ahrens

Supported Actions

Compress
Compress files into a zip or gzip archive
Decompress
Decompress zip or gzip archives

Over 3000 companies switch to n8n every single week

Connect Compression with your company’s tech stack and create automation workflows

We're using the @n8n_io cloud for our internal automation tasks since the beta started. It's awesome! Also, support is super fast and always helpful. 🤗

in other news I installed @n8n_io tonight and holy moly it’s good

it’s compatible with EVERYTHING

Last week I automated much of the back office work for a small design studio in less than 8hrs and I am still mind-blown about it.

n8n is a game-changer and should be known by all SMBs and even enterprise companies.

Implement complex processes faster with n8n
