
Integrate Read Binary File with 500+ apps and services

Unlock Read Binary File’s full potential with n8n: connect it to similar Core Nodes apps and over 1000 other services, and create adaptable, scalable workflows between Read Binary File and the rest of your stack. All within a building experience you will love.

The Read Binary File node is replaced by the Read/Write Files from Disk node from version 1.21.0 onwards. Check out the Read/Write Files from Disk node!

Popular ways to use the Read Binary File integration

Read a spreadsheet file

Companion workflow for Spreadsheet File node docs
By amudhan (sm-amudhan) · Uses: Merge node +3

Nathan: Your n8n Personal Assistant

Nathan is a proof of concept framework for creating a personal assistant who can handle various day to day functions for you.
By jason (tephlon) · Uses: Postgres node

Insert Excel data to Postgres

Read an XLS file from disk, convert it to JSON, and insert it into Postgres.
By Jan Oberhauser (jan)

Extract text from a PDF file

Companion workflow for Read PDF node docs
By amudhan (sm-amudhan) · Uses: HTTP Request node +8

Scrape and store data from multiple website pages

This workflow extracts data from a multi-page website. The workflow:

1. Starts from the country list at https://www.theswiftcodes.com/browse-by-country/.
2. Loads every country page (for example, https://www.theswiftcodes.com/albania/).
3. Paginates through every page within each country page.
4. Extracts data from each country page.
5. Saves the data to MongoDB.
6. Repeats the pagination across all pages of all countries.

It uses the getWorkflowStaticData('global') method to recover the next page (saved during the previous run) and then continues through the remaining pages. A first section recovers and extracts the country list. Next, the workflow checks whether a locally cached copy of a page is available and, if so, reads the cached page from disk. Finally, it saves the data to MongoDB and paginates through every page in every country.

A cache system saves each visited page to the n8n host's local disk. When the workflow is relaunched, it checks whether a cache file already exists and skips the unnecessary request to the website. If the data on the website changes over time, you can add a Cron node to re-check the site once per week.

Before inserting data into MongoDB, the best way to avoid duplicates is to check that the swift_code (the collection's primary value) does not already exist. Using a proxy for all requests is recommended to avoid IP blocks; a good solution for proxying plus IP rotation is scrapoxy.io.

This workflow is well suited to small data requirements. To scrape dynamic data, use a headless browser or a similar service. To scrape huge lists of URIs, consider Scrapy plus Scrapoxy.
By Miquel Colomer (mcolomer)

Over 3000 companies switch to n8n every single week

Connect Read Binary File with your company’s tech stack and create automation workflows

We're using the @n8n_io cloud for our internal automation tasks since the beta started. It's awesome! Also, support is super fast and always helpful. 🤗

Last week I automated much of the back office work for a small design studio in less than 8hrs and I am still mind-blown about it.

n8n is a game-changer and should be known by all SMBs and even enterprise companies.

in other news I installed @n8n_io tonight and holy moly it’s good

it’s compatible with EVERYTHING

Implement complex processes faster with n8n
